Mar 13 10:03:44 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 13 10:03:44 crc restorecon[4586]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 10:03:44 crc restorecon[4586]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 10:03:44 crc restorecon[4586]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 10:03:44 crc restorecon[4586]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 13 10:03:44 crc restorecon[4586]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:44 crc restorecon[4586]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:44 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 
10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 10:03:45 crc 
restorecon[4586]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 
10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 10:03:45 crc restorecon[4586]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 10:03:45 crc restorecon[4586]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 13 10:03:47 crc kubenswrapper[4632]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 10:03:47 crc kubenswrapper[4632]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 13 10:03:47 crc kubenswrapper[4632]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 10:03:47 crc kubenswrapper[4632]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 10:03:47 crc kubenswrapper[4632]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 13 10:03:47 crc kubenswrapper[4632]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.397897 4632 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401812 4632 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401848 4632 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401862 4632 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401870 4632 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401876 4632 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401881 4632 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401886 4632 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401891 4632 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401896 4632 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401900 4632 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401904 4632 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401909 4632 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401913 4632 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401925 4632 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401930 4632 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401957 4632 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401963 4632 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401969 4632 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401975 4632 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401981 4632 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401987 
4632 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401992 4632 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.401996 4632 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402001 4632 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402006 4632 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402011 4632 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402016 4632 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402021 4632 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402025 4632 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402031 4632 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402036 4632 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402041 4632 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402046 4632 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402050 4632 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402055 4632 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402060 4632 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402064 4632 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402068 4632 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402073 4632 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402077 4632 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402082 4632 feature_gate.go:330] unrecognized feature gate: Example Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402086 4632 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402090 4632 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402095 4632 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402099 4632 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402104 4632 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402108 4632 
feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402117 4632 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402123 4632 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402132 4632 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402137 4632 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402143 4632 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402151 4632 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402156 4632 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402160 4632 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402166 4632 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402170 4632 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402175 4632 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402179 4632 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402184 4632 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402188 4632 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402193 4632 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402197 4632 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402201 4632 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402206 4632 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402212 4632 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402217 4632 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402222 4632 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402229 4632 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
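
Note: The W-level "unrecognized feature gate" entries in this run (which continues through OpenShiftPodSecurityAdmission just below) are OpenShift-specific gate names handed down in the rendered kubelet configuration; the upstream kubelet's feature-gate registry does not know them, so it warns and ignores each one, and they are harmless startup noise. Gates it does recognize but which have gone GA, such as CloudDualStackNodeIPs and ValidatingAdmissionPolicy, instead log the "Setting GA feature gate ... will be removed" notice. A small sketch of how one might tally these warnings from a saved excerpt of this journal (the file name is hypothetical):

    import re
    from collections import Counter

    # Count how often each unrecognized gate is reported; the kubelet
    # re-parses its feature-gate map several times during startup, so
    # most gates appear more than once in this journal.
    text = open("kubelet-journal.txt").read()  # hypothetical excerpt file
    gates = Counter(re.findall(r"unrecognized feature gate: (\w+)", text))
    for gate, n in sorted(gates.items()):
        print(f"{gate}: {n}x")
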
Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402234 4632 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.402239 4632 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403275 4632 flags.go:64] FLAG: --address="0.0.0.0" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403294 4632 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403304 4632 flags.go:64] FLAG: --anonymous-auth="true" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403313 4632 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403321 4632 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403326 4632 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403334 4632 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403342 4632 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403347 4632 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403353 4632 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403360 4632 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403366 4632 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403371 4632 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403390 4632 flags.go:64] FLAG: --cgroup-root="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403402 4632 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403412 4632 flags.go:64] FLAG: --client-ca-file="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403498 4632 flags.go:64] FLAG: --cloud-config="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403507 4632 flags.go:64] FLAG: --cloud-provider="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403512 4632 flags.go:64] FLAG: --cluster-dns="[]" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403521 4632 flags.go:64] FLAG: --cluster-domain="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403526 4632 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403532 4632 flags.go:64] FLAG: --config-dir="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403537 4632 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403545 4632 flags.go:64] FLAG: --container-log-max-files="5" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403554 4632 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403559 4632 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403565 4632 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 
13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403571 4632 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403576 4632 flags.go:64] FLAG: --contention-profiling="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403581 4632 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403589 4632 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403595 4632 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403600 4632 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403607 4632 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403612 4632 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403618 4632 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403623 4632 flags.go:64] FLAG: --enable-load-reader="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403629 4632 flags.go:64] FLAG: --enable-server="true" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403636 4632 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403644 4632 flags.go:64] FLAG: --event-burst="100" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403650 4632 flags.go:64] FLAG: --event-qps="50" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403655 4632 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403660 4632 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403666 4632 flags.go:64] FLAG: --eviction-hard="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403674 4632 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403680 4632 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403684 4632 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403691 4632 flags.go:64] FLAG: --eviction-soft="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403696 4632 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403730 4632 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403737 4632 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403743 4632 flags.go:64] FLAG: --experimental-mounter-path="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403748 4632 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403753 4632 flags.go:64] FLAG: --fail-swap-on="true" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403758 4632 flags.go:64] FLAG: --feature-gates="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403765 4632 flags.go:64] FLAG: --file-check-frequency="20s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403771 4632 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 13 10:03:47 crc 
kubenswrapper[4632]: I0313 10:03:47.403776 4632 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403782 4632 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403787 4632 flags.go:64] FLAG: --healthz-port="10248" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403793 4632 flags.go:64] FLAG: --help="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403798 4632 flags.go:64] FLAG: --hostname-override="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403803 4632 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403808 4632 flags.go:64] FLAG: --http-check-frequency="20s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403814 4632 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403819 4632 flags.go:64] FLAG: --image-credential-provider-config="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403824 4632 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403831 4632 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403837 4632 flags.go:64] FLAG: --image-service-endpoint="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403842 4632 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403847 4632 flags.go:64] FLAG: --kube-api-burst="100" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403852 4632 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403858 4632 flags.go:64] FLAG: --kube-api-qps="50" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403863 4632 flags.go:64] FLAG: --kube-reserved="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403868 4632 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403873 4632 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403879 4632 flags.go:64] FLAG: --kubelet-cgroups="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403885 4632 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403891 4632 flags.go:64] FLAG: --lock-file="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403896 4632 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403902 4632 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403907 4632 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403915 4632 flags.go:64] FLAG: --log-json-split-stream="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403920 4632 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403925 4632 flags.go:64] FLAG: --log-text-split-stream="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403931 4632 flags.go:64] FLAG: --logging-format="text" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403955 4632 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 13 10:03:47 crc 
kubenswrapper[4632]: I0313 10:03:47.403961 4632 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403966 4632 flags.go:64] FLAG: --manifest-url="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403972 4632 flags.go:64] FLAG: --manifest-url-header="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403980 4632 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403986 4632 flags.go:64] FLAG: --max-open-files="1000000" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403993 4632 flags.go:64] FLAG: --max-pods="110" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.403998 4632 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404004 4632 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404009 4632 flags.go:64] FLAG: --memory-manager-policy="None" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404015 4632 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404021 4632 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404028 4632 flags.go:64] FLAG: --node-ip="192.168.126.11" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404036 4632 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404050 4632 flags.go:64] FLAG: --node-status-max-images="50" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404055 4632 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404061 4632 flags.go:64] FLAG: --oom-score-adj="-999" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404066 4632 flags.go:64] FLAG: --pod-cidr="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404071 4632 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404082 4632 flags.go:64] FLAG: --pod-manifest-path="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404087 4632 flags.go:64] FLAG: --pod-max-pids="-1" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404093 4632 flags.go:64] FLAG: --pods-per-core="0" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404099 4632 flags.go:64] FLAG: --port="10250" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404105 4632 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404111 4632 flags.go:64] FLAG: --provider-id="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404117 4632 flags.go:64] FLAG: --qos-reserved="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404122 4632 flags.go:64] FLAG: --read-only-port="10255" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404128 4632 flags.go:64] FLAG: --register-node="true" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404134 4632 flags.go:64] FLAG: --register-schedulable="true" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404139 4632 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 13 10:03:47 crc 
kubenswrapper[4632]: I0313 10:03:47.404156 4632 flags.go:64] FLAG: --registry-burst="10" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404161 4632 flags.go:64] FLAG: --registry-qps="5" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404166 4632 flags.go:64] FLAG: --reserved-cpus="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404171 4632 flags.go:64] FLAG: --reserved-memory="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404179 4632 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404184 4632 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404190 4632 flags.go:64] FLAG: --rotate-certificates="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404195 4632 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404201 4632 flags.go:64] FLAG: --runonce="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404206 4632 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404212 4632 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404261 4632 flags.go:64] FLAG: --seccomp-default="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404267 4632 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404272 4632 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404279 4632 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404286 4632 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404292 4632 flags.go:64] FLAG: --storage-driver-password="root" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404297 4632 flags.go:64] FLAG: --storage-driver-secure="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404303 4632 flags.go:64] FLAG: --storage-driver-table="stats" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404309 4632 flags.go:64] FLAG: --storage-driver-user="root" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404315 4632 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404320 4632 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404325 4632 flags.go:64] FLAG: --system-cgroups="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404330 4632 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404339 4632 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404344 4632 flags.go:64] FLAG: --tls-cert-file="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404350 4632 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404357 4632 flags.go:64] FLAG: --tls-min-version="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404363 4632 flags.go:64] FLAG: --tls-private-key-file="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404368 4632 flags.go:64] FLAG: --topology-manager-policy="none" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 
10:03:47.404373 4632 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404378 4632 flags.go:64] FLAG: --topology-manager-scope="container" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404383 4632 flags.go:64] FLAG: --v="2" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404392 4632 flags.go:64] FLAG: --version="false" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404400 4632 flags.go:64] FLAG: --vmodule="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404407 4632 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404413 4632 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404563 4632 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404570 4632 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404575 4632 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404582 4632 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404588 4632 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404596 4632 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404611 4632 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404617 4632 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404627 4632 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404632 4632 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404638 4632 feature_gate.go:330] unrecognized feature gate: Example Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404644 4632 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404648 4632 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404653 4632 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404658 4632 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404662 4632 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404667 4632 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404671 4632 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404676 4632 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 10:03:47 crc 
kubenswrapper[4632]: W0313 10:03:47.404680 4632 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404685 4632 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404691 4632 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404697 4632 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404703 4632 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404708 4632 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404713 4632 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404719 4632 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404725 4632 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404730 4632 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404736 4632 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404741 4632 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404746 4632 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404751 4632 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404757 4632 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404762 4632 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404766 4632 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404771 4632 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404778 4632 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404783 4632 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404788 4632 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
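
Note: The I-level flags.go:64 block above dumps every kubelet command-line flag with its parsed value. Values such as --cluster-dns="[]" and --feature-gates="" are flag defaults; on this node the effective settings come from the file named by --config="/etc/kubernetes/kubelet.conf", which is also why --system-reserved drew the deprecation warning pointing at the config file. The surrounding "unrecognized feature gate" entries are one of several passes over the same gate set (offsets .401/.402, .404, .627 and .628 in this journal). A sketch for pulling the flag dump into a dict, assuming a saved excerpt:

    import re

    # FLAG lines look like: flags.go:64] FLAG: --max-pods="110"
    text = open("kubelet-journal.txt").read()  # hypothetical excerpt file
    flags = dict(re.findall(r'FLAG: --([\w-]+)="([^"]*)"', text))
    print(flags["config"])           # /etc/kubernetes/kubelet.conf
    print(flags["system-reserved"])  # cpu=200m,ephemeral-storage=350Mi,memory=350Mi
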
Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404794 4632 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404799 4632 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404804 4632 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404809 4632 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404814 4632 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404819 4632 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404824 4632 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404828 4632 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404833 4632 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404837 4632 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404842 4632 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404847 4632 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404852 4632 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404856 4632 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404861 4632 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404865 4632 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404870 4632 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404875 4632 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404879 4632 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404884 4632 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404889 4632 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404893 4632 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404899 4632 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404904 4632 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404908 4632 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404913 4632 feature_gate.go:330] unrecognized feature gate: 
VSphereDriverConfiguration Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404918 4632 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404922 4632 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404927 4632 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404931 4632 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.404966 4632 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.404983 4632 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.627477 4632 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.627543 4632 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627725 4632 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627741 4632 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
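
Note: The I-level feature_gate.go:386 line above is the net result of a parsing pass: once the unknown OpenShift gates are dropped, only upstream Kubernetes gates remain, with CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1 and ValidatingAdmissionPolicy forced on and the rest off. The map is printed in Go's map[key:value ...] notation; a sketch of turning one of these lines into a Python dict:

    import re

    line = ('feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true '
            'ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}')
    inner = re.search(r"map\[(.*?)\]", line).group(1)
    gates = {k: v == "true" for k, v in
             (pair.split(":") for pair in inner.split())}
    print(gates)  # {'CloudDualStackNodeIPs': True, ..., 'VolumeAttributesClass': False}
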
Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627750 4632 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627757 4632 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627785 4632 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627794 4632 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627802 4632 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627809 4632 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627815 4632 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627821 4632 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627827 4632 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627833 4632 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627843 4632 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627849 4632 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627854 4632 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627859 4632 feature_gate.go:330] unrecognized feature gate: Example Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627865 4632 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627870 4632 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627875 4632 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627880 4632 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627886 4632 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627892 4632 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627897 4632 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627904 4632 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627912 4632 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627921 4632 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627930 4632 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627936 4632 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627965 4632 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627971 4632 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627976 4632 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627983 4632 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627990 4632 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.627997 4632 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628005 4632 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628012 4632 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628017 4632 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628029 4632 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628037 4632 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628045 4632 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628052 4632 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628059 4632 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628065 4632 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628070 4632 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628076 4632 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628082 4632 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628088 4632 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628094 4632 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628099 4632 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628104 4632 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628115 4632 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628123 4632 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628129 4632 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628134 4632 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628139 4632 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628144 4632 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628149 4632 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628195 4632 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628429 4632 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628501 4632 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628506 4632 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628513 4632 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628517 4632 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628524 4632 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628529 4632 feature_gate.go:330] unrecognized feature 
gate: MixedCPUsAllocation Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628533 4632 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628537 4632 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628542 4632 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628546 4632 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628554 4632 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628564 4632 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.628573 4632 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628769 4632 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628781 4632 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628786 4632 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628790 4632 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628795 4632 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628799 4632 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628803 4632 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628807 4632 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628810 4632 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628814 4632 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628818 4632 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628822 4632 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628826 4632 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628829 4632 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628835 4632 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 
10:03:47.628841 4632 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628846 4632 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628850 4632 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628854 4632 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628859 4632 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628863 4632 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628867 4632 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628871 4632 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628875 4632 feature_gate.go:330] unrecognized feature gate: Example Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628878 4632 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628882 4632 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628885 4632 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628890 4632 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628894 4632 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628897 4632 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628901 4632 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628906 4632 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628911 4632 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628915 4632 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628918 4632 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628922 4632 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628925 4632 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628930 4632 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628950 4632 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628954 4632 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628958 4632 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628962 4632 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628966 4632 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628970 4632 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628975 4632 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628979 4632 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628983 4632 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628990 4632 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628995 4632 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.628999 4632 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629003 4632 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629006 4632 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629010 4632 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629015 4632 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629019 4632 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629024 4632 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629028 4632 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629032 4632 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629037 4632 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629041 4632 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629045 4632 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629049 4632 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629053 4632 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629057 4632 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629062 4632 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629066 4632 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629069 4632 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629073 4632 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629077 4632 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629081 4632 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.629085 4632 feature_gate.go:330] unrecognized feature 
gate: VSphereDriverConfiguration
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.629091 4632 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.630121 4632 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 13 10:03:47 crc kubenswrapper[4632]: E0313 10:03:47.635016 4632 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.639682 4632 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.639838 4632 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.642954 4632 server.go:997] "Starting client certificate rotation"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.642997 4632 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.643221 4632 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.780356 4632 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.782392 4632 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 10:03:47 crc kubenswrapper[4632]: E0313 10:03:47.784671 4632 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError"
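
Note: This block is the client-certificate bootstrap path. The certificate embedded in /var/lib/kubelet/kubeconfig expired on 2026-02-24, so the kubelet falls back to the bootstrap credentials and requests a fresh certificate through a CSR. The E-level certificate_manager.go:562 failure is expected this early in boot: api-int.crc.testing:6443 refuses connections, presumably because the API server is not up yet, and the certificate manager retries rather than aborting. A sketch for checking the expiry of the current client cert, using the path from the certificate_store line above (relies on the third-party cryptography package):

    from datetime import datetime, timezone
    from cryptography import x509  # third-party: pip install cryptography

    PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"
    with open(PEM, "rb") as f:
        # Assumes the certificate is the first PEM block in the file;
        # not_valid_after_utc needs cryptography >= 42.
        cert = x509.load_pem_x509_certificate(f.read())
    left = cert.not_valid_after_utc - datetime.now(timezone.utc)
    print(f"kubelet client certificate expires in {left}")
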
fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.930422 4632 manager.go:217] Machine: {Timestamp:2026-03-13 10:03:47.926538474 +0000 UTC m=+1.949068637 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e8be0c8f-16ef-4a1d-b190-772a9f649bc5 BootID:b5d63e17-4c81-494f-81b9-40163ac26c6b Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:35:93:68 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:35:93:68 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:06:1a:b1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:1c:34:be Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ec:c5:1e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:2f:7b:ae Speed:-1 Mtu:1496} {Name:eth10 MacAddress:c2:a2:2a:ad:63:e9 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:52:1d:6e:98:22:b2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 
Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.930730 4632 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.931025 4632 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.932003 4632 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.932227 4632 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.932266 4632 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.932553 4632 
topology_manager.go:138] "Creating topology manager with none policy"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.932564 4632 container_manager_linux.go:303] "Creating device plugin manager"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.933079 4632 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.933110 4632 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.934190 4632 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.934329 4632 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.939929 4632 kubelet.go:418] "Attempting to sync node with API server"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.939988 4632 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.940026 4632 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.940045 4632 kubelet.go:324] "Adding apiserver pod source"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.940063 4632 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.967405 4632 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.967585 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused
Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.967581 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused
Mar 13 10:03:47 crc kubenswrapper[4632]: E0313 10:03:47.967721 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError"
Mar 13 10:03:47 crc kubenswrapper[4632]: E0313 10:03:47.967741 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.968556 4632 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
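
[Editor's note: two distinct failures are visible by this point: the bootstrap client certificate in /var/lib/kubelet/kubeconfig has expired, and every call to https://api-int.crc.testing:6443 fails with "connection refused" because the API server is not up yet. A minimal Python sketch for checking both conditions from the node; the host, port, and PEM path are taken from the log lines above, and the openssl CLI is assumed to be installed:

    import socket
    import subprocess

    API_HOST, API_PORT = "api-int.crc.testing", 6443
    CLIENT_CERT = "/var/lib/kubelet/pki/kubelet-client-current.pem"

    def api_reachable(timeout: float = 3.0) -> bool:
        # A refused or timed-out connect here corresponds to the
        # "dial tcp ...:6443: connect: connection refused" errors above.
        try:
            socket.create_connection((API_HOST, API_PORT), timeout=timeout).close()
            return True
        except OSError:
            return False

    def cert_not_after(path: str) -> str:
        # Lean on the openssl CLI rather than parsing DER by hand.
        result = subprocess.run(
            ["openssl", "x509", "-noout", "-enddate", "-in", path],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()  # e.g. "notAfter=Feb 24 05:52:08 2026 GMT"

    if __name__ == "__main__":
        print("apiserver reachable:", api_reachable())
        print(cert_not_after(CLIENT_CERT))
]
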
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.970127 4632 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.971658 4632 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.971685 4632 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.971693 4632 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.971702 4632 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.971720 4632 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.971729 4632 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.971738 4632 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.971751 4632 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.971762 4632 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.971771 4632 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.971784 4632 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.971792 4632 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.972873 4632 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.973489 4632 server.go:1280] "Started kubelet"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.973851 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.974574 4632 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.974561 4632 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.975580 4632 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 10:03:47 crc systemd[1]: Started Kubernetes Kubelet.
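
[Editor's note: "Started kubelet" above means the server loops are up even though the API server is still unreachable. The listener on 0.0.0.0:10250 is the kubelet's authenticated API; by default a plain-HTTP healthz listener also runs on 127.0.0.1:10248. A minimal sketch probing that default endpoint, assuming it has not been disabled on this node:

    from urllib.request import urlopen

    def kubelet_healthy(url: str = "http://127.0.0.1:10248/healthz",
                        timeout: float = 3.0) -> bool:
        # Returns True when the kubelet answers its local healthz probe.
        try:
            with urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    if __name__ == "__main__":
        print("kubelet healthz ok:", kubelet_healthy())
]
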
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.976357 4632 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.976419 4632 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.977001 4632 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.977030 4632 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.977200 4632 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 13 10:03:47 crc kubenswrapper[4632]: E0313 10:03:47.978025 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 10:03:47 crc kubenswrapper[4632]: E0313 10:03:47.978132 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="200ms"
Mar 13 10:03:47 crc kubenswrapper[4632]: W0313 10:03:47.978446 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused
Mar 13 10:03:47 crc kubenswrapper[4632]: E0313 10:03:47.978492 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.978840 4632 factory.go:55] Registering systemd factory
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.978856 4632 factory.go:221] Registration of the systemd container factory successfully
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.979213 4632 factory.go:153] Registering CRI-O factory
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.979229 4632 factory.go:221] Registration of the crio container factory successfully
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.979295 4632 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.979328 4632 factory.go:103] Registering Raw factory
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.979345 4632 manager.go:1196] Started watching for new ooms in manager
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.980082 4632 manager.go:319] Starting recovery of all containers
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.981045 4632 server.go:460] "Adding debug handlers to kubelet server"
Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.987763 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Mar 13 10:03:47 crc
kubenswrapper[4632]: I0313 10:03:47.987837 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.987863 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.987886 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.987900 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.987918 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.987954 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.987972 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988005 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988026 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988045 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988060 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Mar 13 10:03:47 
crc kubenswrapper[4632]: I0313 10:03:47.988079 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988221 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988242 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988263 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988285 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988299 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988313 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988331 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988344 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988362 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988379 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988392 4632 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988410 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988422 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988448 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988471 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988487 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988502 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988521 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988536 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988554 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988566 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988578 4632 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988594 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988607 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988623 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988636 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988649 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988665 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988753 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988767 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988785 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988798 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988818 4632 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988833 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988846 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988862 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988874 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988891 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.988903 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.989068 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.989086 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.989107 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.989123 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.989135 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.989151 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.989163 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.989179 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.989269 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.989282 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.989302 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.989316 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.989331 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.989343 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: E0313 10:03:47.986967 4632 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189c5e79366017e8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:47.9734538 +0000 UTC m=+1.995983933,LastTimestamp:2026-03-13 10:03:47.9734538 +0000 UTC m=+1.995983933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.989353 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.990884 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.990900 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.990915 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.990932 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.990964 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.990982 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.990997 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.991008 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.991024 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.991037 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.991054 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.991067 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.991080 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.991220 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.991236 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992437 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992450 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992463 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992479 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992492 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992510 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992520 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992531 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992547 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992558 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992574 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992586 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992597 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992615 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992627 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992645 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992658 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992702 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992720 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992740 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992760 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.992780 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.994709 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.994761 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.994785 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 13 10:03:47 crc kubenswrapper[4632]: I0313 10:03:47.994813 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.001455 4632 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.001549 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.001588 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.001607 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.001630 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.001649 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.001668 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.001685 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.001696 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.001713 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.001730 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.001747 4632 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.001866 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.002073 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.002102 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.002482 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.002531 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.002554 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.002572 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.002596 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003047 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003081 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003153 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003177 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003197 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003219 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003238 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003261 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003283 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003299 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003319 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003336 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003358 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003375 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003391 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003413 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003430 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003446 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003472 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003487 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003507 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003525 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003542 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003565 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003585 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003606 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003623 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003641 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003662 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003679 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003702 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003721 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003738 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003761 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003807 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003835 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003853 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003873 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003896 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003912 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003932 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003968 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.003984 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.004003 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.004018 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.004121 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.004147 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.004164 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.004874 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.004925 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.004982 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005005 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005021 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005039 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005094 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005113 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005137 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005156 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005180 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005197 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005215 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005241 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005259 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005282 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005334 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005352 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005426 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005445 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005467 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005485 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005501 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005524 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005544 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005628 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005652 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005668 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005688 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005705 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005720 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005741 4632 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005756 4632 reconstruct.go:97] "Volume reconstruction finished" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.005767 4632 reconciler.go:26] "Reconciler: start to sync state" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.011514 4632 manager.go:324] Recovery completed Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.024157 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.025769 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.025816 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.025829 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.031511 4632 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.031566 4632 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.031622 4632 state_mem.go:36] "Initialized new in-memory state store" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.040300 4632 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.042850 4632 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.042904 4632 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.042932 4632 kubelet.go:2335] "Starting kubelet main sync loop" Mar 13 10:03:48 crc kubenswrapper[4632]: E0313 10:03:48.043172 4632 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 10:03:48 crc kubenswrapper[4632]: W0313 10:03:48.047446 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Mar 13 10:03:48 crc kubenswrapper[4632]: E0313 10:03:48.047737 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.055398 4632 policy_none.go:49] "None policy: Start" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.056422 4632 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.056607 4632 state_mem.go:35] "Initializing new in-memory state store" Mar 13 10:03:48 crc kubenswrapper[4632]: E0313 10:03:48.078278 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.121691 4632 manager.go:334] "Starting Device Plugin manager" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.122825 4632 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.123137 4632 server.go:79] "Starting device plugin registration server" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.123606 4632 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.123619 4632 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.123857 4632 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.124056 4632 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.124065 4632 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 10:03:48 crc kubenswrapper[4632]: E0313 10:03:48.131150 4632 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.143959 4632 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 13 10:03:48 crc kubenswrapper[4632]: 
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.144055 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.145086 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.145116 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.145125 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.145229 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.145674 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.145697 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.146373 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.146386 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.146394 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.146473 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.146797 4632 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.146818 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.146853 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.146919 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.146929 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.147298 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.147320 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.147329 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.147386 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.147400 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.147402 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.147458 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.147527 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.147556 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.148002 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.148030 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.148039 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.148124 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.148232 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.148269 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.148511 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.148573 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.148586 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.148719 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.148736 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.148746 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.148856 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.148877 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.149701 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.149734 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.149747 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.150546 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.150606 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.150621 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:48 crc kubenswrapper[4632]: E0313 10:03:48.178867 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="400ms" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.212227 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.212266 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.212289 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.212309 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.212341 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.212360 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.212418 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.212469 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.212492 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.212507 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.212577 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.212607 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.212628 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.212658 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.212677 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.224089 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.225798 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.225858 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.225869 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.225902 4632 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 10:03:48 crc kubenswrapper[4632]: E0313 10:03:48.226538 4632 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313596 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313672 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313697 
4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313720 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313738 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313755 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313773 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313784 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313825 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313794 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313848 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313895 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc 
kubenswrapper[4632]: I0313 10:03:48.313910 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313883 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313862 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313961 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313953 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313983 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.314006 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.314019 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.314026 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.313867 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.314006 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.314044 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.314088 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.314088 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.314108 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.314122 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.314112 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.314140 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.427349 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.429967 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.430023 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.430041 4632 
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.427349 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.429967 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.430023 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.430041 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.430078 4632 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 13 10:03:48 crc kubenswrapper[4632]: E0313 10:03:48.430569 4632 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.485748 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.495682 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.517496 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.537154 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.547275 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 13 10:03:48 crc kubenswrapper[4632]: W0313 10:03:48.551174 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-8a837521daf688e33fb4e6fbf5cbabe8729fa8258893d103fa454605c4a0eba7 WatchSource:0}: Error finding container 8a837521daf688e33fb4e6fbf5cbabe8729fa8258893d103fa454605c4a0eba7: Status 404 returned error can't find the container with id 8a837521daf688e33fb4e6fbf5cbabe8729fa8258893d103fa454605c4a0eba7
Mar 13 10:03:48 crc kubenswrapper[4632]: W0313 10:03:48.552743 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f2ca7e88f9e3349411e4a1364ab9c541d5ee7284178c249a401a6700ea5f9269 WatchSource:0}: Error finding container f2ca7e88f9e3349411e4a1364ab9c541d5ee7284178c249a401a6700ea5f9269: Status 404 returned error can't find the container with id f2ca7e88f9e3349411e4a1364ab9c541d5ee7284178c249a401a6700ea5f9269
Mar 13 10:03:48 crc kubenswrapper[4632]: W0313 10:03:48.568846 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-127b7383b05ead4e61b986500f7a6fbcc9b94e8af451c60a18cbbf65b5633c3f WatchSource:0}: Error finding container 127b7383b05ead4e61b986500f7a6fbcc9b94e8af451c60a18cbbf65b5633c3f: Status 404 returned error can't find the container with id 127b7383b05ead4e61b986500f7a6fbcc9b94e8af451c60a18cbbf65b5633c3f
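
Two transient patterns show up here, and both are self-healing. The manager.go:1169 warnings are a cAdvisor startup race: the cgroup watcher sees the freshly created crio-<id> cgroups before the runtime has finished registering the containers, so the first lookup 404s; the matching ContainerStarted PLEG events a moment later show the containers are fine. The controller.go:145 lease failures, meanwhile, retry with a doubling interval (400ms earlier, 800ms and 1.6s below), i.e. plain exponential backoff while api-int.crc.testing:6443 keeps refusing connections. A minimal loop with the same doubling shape; the cap value is an assumption for illustration, not the kubelet's configured limit:

    // lease_backoff_sketch.go: exponential backoff in the shape of the
    // "Failed to ensure lease exists, will retry" records.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func ensureLease() error {
        // Stand-in for PUT .../kube-node-lease/leases/crc, which at this
        // point in the log is still refused.
        return errors.New("dial tcp 38.102.83.182:6443: connect: connection refused")
    }

    func main() {
        interval := 400 * time.Millisecond
        const maxInterval = 7 * time.Second // assumed cap, for illustration
        for attempt := 1; attempt <= 3; attempt++ {
            if err := ensureLease(); err != nil {
                fmt.Printf("Failed to ensure lease exists, will retry: %v interval=%v\n", err, interval)
                time.Sleep(interval)
                interval *= 2 // 400ms -> 800ms -> 1.6s, as in the log
                if interval > maxInterval {
                    interval = maxInterval
                }
            }
        }
    }
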
1cfdd9bf16760719f287994cb12d52eb0c1a9dcd9ecfd2e1c8dff79662b1e333 Mar 13 10:03:48 crc kubenswrapper[4632]: W0313 10:03:48.571603 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-28009d100a8e1b9b5c2aa69cc316573e1ee346b21b26aeb590e8663fdbd671a1 WatchSource:0}: Error finding container 28009d100a8e1b9b5c2aa69cc316573e1ee346b21b26aeb590e8663fdbd671a1: Status 404 returned error can't find the container with id 28009d100a8e1b9b5c2aa69cc316573e1ee346b21b26aeb590e8663fdbd671a1 Mar 13 10:03:48 crc kubenswrapper[4632]: E0313 10:03:48.580186 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="800ms" Mar 13 10:03:48 crc kubenswrapper[4632]: W0313 10:03:48.811264 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Mar 13 10:03:48 crc kubenswrapper[4632]: E0313 10:03:48.811368 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.831494 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.832847 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.832892 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.832906 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.832929 4632 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 10:03:48 crc kubenswrapper[4632]: E0313 10:03:48.833399 4632 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc" Mar 13 10:03:48 crc kubenswrapper[4632]: I0313 10:03:48.974686 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Mar 13 10:03:49 crc kubenswrapper[4632]: I0313 10:03:49.047730 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f2ca7e88f9e3349411e4a1364ab9c541d5ee7284178c249a401a6700ea5f9269"} Mar 13 10:03:49 crc kubenswrapper[4632]: I0313 10:03:49.054303 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"28009d100a8e1b9b5c2aa69cc316573e1ee346b21b26aeb590e8663fdbd671a1"} Mar 13 10:03:49 crc kubenswrapper[4632]: I0313 10:03:49.058849 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1cfdd9bf16760719f287994cb12d52eb0c1a9dcd9ecfd2e1c8dff79662b1e333"} Mar 13 10:03:49 crc kubenswrapper[4632]: I0313 10:03:49.060484 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"127b7383b05ead4e61b986500f7a6fbcc9b94e8af451c60a18cbbf65b5633c3f"} Mar 13 10:03:49 crc kubenswrapper[4632]: I0313 10:03:49.061645 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8a837521daf688e33fb4e6fbf5cbabe8729fa8258893d103fa454605c4a0eba7"} Mar 13 10:03:49 crc kubenswrapper[4632]: W0313 10:03:49.085046 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Mar 13 10:03:49 crc kubenswrapper[4632]: E0313 10:03:49.085148 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:03:49 crc kubenswrapper[4632]: E0313 10:03:49.381061 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="1.6s" Mar 13 10:03:49 crc kubenswrapper[4632]: W0313 10:03:49.384694 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Mar 13 10:03:49 crc kubenswrapper[4632]: E0313 10:03:49.384757 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:03:49 crc kubenswrapper[4632]: W0313 10:03:49.443649 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Mar 13 10:03:49 crc kubenswrapper[4632]: E0313 10:03:49.443759 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:03:49 crc kubenswrapper[4632]: I0313 10:03:49.633605 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:49 crc kubenswrapper[4632]: I0313 10:03:49.635861 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:49 crc kubenswrapper[4632]: I0313 10:03:49.635902 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:49 crc kubenswrapper[4632]: I0313 10:03:49.635914 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:49 crc kubenswrapper[4632]: I0313 10:03:49.635956 4632 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 10:03:49 crc kubenswrapper[4632]: E0313 10:03:49.636827 4632 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc" Mar 13 10:03:49 crc kubenswrapper[4632]: I0313 10:03:49.834419 4632 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 13 10:03:49 crc kubenswrapper[4632]: E0313 10:03:49.835833 4632 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:03:49 crc kubenswrapper[4632]: I0313 10:03:49.975236 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.068238 4632 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611" exitCode=0 Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.068323 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611"} Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.068369 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.069630 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.069658 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.069667 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.071288 4632 generic.go:334] 
"Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52" exitCode=0 Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.071358 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52"} Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.071424 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.077100 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.077134 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.077144 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.079732 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2666e43569f4f3fb7d9deec4b70dc873d86a3a7c4ac2fac7eea45198e35ecf3a"} Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.079766 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cc37f87081e3682692ee20f12c80aa65fbcb8604b381f313b8073f8019b96dbc"} Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.079775 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"59e800b40d3f99e6d38fa7e28f06be6260e6ebe4bb5e7c73de6734d0617092ac"} Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.079785 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8207ad7aa524df5af853ad8235c24a6addbc04e168248391629f00124901672d"} Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.079812 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.080799 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.080836 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.080847 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.082717 4632 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990" exitCode=0 Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.082796 4632 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990"} Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.082824 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.083505 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.083521 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.083530 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.087988 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.088358 4632 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7b16440f9dc548d378b6415b89f879e2684e2ea5d0284feb13fcf67f4fa9fa81" exitCode=0 Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.088419 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7b16440f9dc548d378b6415b89f879e2684e2ea5d0284feb13fcf67f4fa9fa81"} Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.088615 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.089145 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.089168 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.089176 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.089506 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.089546 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.089560 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:50 crc kubenswrapper[4632]: I0313 10:03:50.658356 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 10:03:51 crc kubenswrapper[4632]: W0313 10:03:51.001267 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Mar 13 10:03:51 crc kubenswrapper[4632]: E0313 10:03:51.001372 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:03:51 crc kubenswrapper[4632]: E0313 10:03:51.001469 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="3.2s" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.001628 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Mar 13 10:03:51 crc kubenswrapper[4632]: W0313 10:03:51.102099 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Mar 13 10:03:51 crc kubenswrapper[4632]: E0313 10:03:51.102192 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.146550 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ecc4f274e9e301fdd8acbd720c998a2aa1c5e00df4fc254bcacab4acb539b8ce"} Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.146601 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9bd8f71a5e4bfa40758e7d51545f1b2eff43f11071060201770a574f89d391bc"} Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.146612 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ba32b01674f111328387620c028407e11c9d44b3154c3dce8f415f79e1db54c1"} Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.148533 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf"} Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.148572 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe"} Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.148581 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54"} Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.149861 4632 
generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3322cab0328fe5a4dfa2407db67435002bef607e711821802d5e9a81ef8c8476" exitCode=0 Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.149895 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3322cab0328fe5a4dfa2407db67435002bef607e711821802d5e9a81ef8c8476"} Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.150012 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.150953 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.150972 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.150999 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.167157 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.167688 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.167952 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811"} Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.168414 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.168433 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.168441 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.168975 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.169006 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.169016 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.177143 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.178261 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.178298 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.178310 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:51 crc 
kubenswrapper[4632]: I0313 10:03:51.263169 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.265537 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.265579 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.265594 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.265637 4632 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 10:03:51 crc kubenswrapper[4632]: E0313 10:03:51.266091 4632 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.353272 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 10:03:51 crc kubenswrapper[4632]: W0313 10:03:51.397215 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Mar 13 10:03:51 crc kubenswrapper[4632]: E0313 10:03:51.397341 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:03:51 crc kubenswrapper[4632]: E0313 10:03:51.406297 4632 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189c5e79366017e8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:47.9734538 +0000 UTC m=+1.995983933,LastTimestamp:2026-03-13 10:03:47.9734538 +0000 UTC m=+1.995983933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:03:51 crc kubenswrapper[4632]: I0313 10:03:51.974626 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.179652 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7b911e0926eac089864b65a40a27d5996412dc7dd93176dc1472b5e6fee82ee0"} Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.179730 
4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc"} Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.179733 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.181204 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.181259 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.181273 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.184586 4632 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="462f33d50e7ba9db6ec487a1cdd2e211e7969591a4699212fc2fa82f4ce990c8" exitCode=0 Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.184754 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.184788 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.184871 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"462f33d50e7ba9db6ec487a1cdd2e211e7969591a4699212fc2fa82f4ce990c8"} Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.184986 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.185681 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.186024 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.186075 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.186101 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.186212 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.186243 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.186221 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.186287 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.186302 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:52 crc 
kubenswrapper[4632]: I0313 10:03:52.186254 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.186678 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.186704 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.186714 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:52 crc kubenswrapper[4632]: I0313 10:03:52.434363 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:03:53 crc kubenswrapper[4632]: I0313 10:03:53.191501 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0f16c27db31aa07efc2dc28bf5f70651f3f4fc43df633bf31c9cb10e7d7d3305"} Mar 13 10:03:53 crc kubenswrapper[4632]: I0313 10:03:53.191556 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"18912268581dca1cde5072a095c27c7cba60d75ef23e45af7fd8b89dbf56ecfe"} Mar 13 10:03:53 crc kubenswrapper[4632]: I0313 10:03:53.191571 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9415e213590ed2127f1b3ccc09bb2a37fde1a6a96d71e06b5a599cb4264ae0b9"} Mar 13 10:03:53 crc kubenswrapper[4632]: I0313 10:03:53.191592 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"13da8bb63b5666f05bbda4e4ade0cdc7995d2d3562fcbb0f4e5b68330aad8232"} Mar 13 10:03:53 crc kubenswrapper[4632]: I0313 10:03:53.191607 4632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:03:53 crc kubenswrapper[4632]: I0313 10:03:53.191684 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:53 crc kubenswrapper[4632]: I0313 10:03:53.191698 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:53 crc kubenswrapper[4632]: I0313 10:03:53.192849 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:53 crc kubenswrapper[4632]: I0313 10:03:53.192877 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:53 crc kubenswrapper[4632]: I0313 10:03:53.192887 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:53 crc kubenswrapper[4632]: I0313 10:03:53.193818 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:53 crc kubenswrapper[4632]: I0313 10:03:53.193842 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:53 crc kubenswrapper[4632]: I0313 10:03:53.193850 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 
10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.062780 4632 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.198811 4632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.198870 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.199565 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.199835 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5b9bb5108316be08fc018d2bbe5bb5c3f1e0728b8aa5598243f55c27332ef9dd"} Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.200348 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.200377 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.200388 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.200984 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.201009 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.201022 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.466499 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.468196 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.468249 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.468279 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.468313 4632 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.523723 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.523908 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.525005 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.525038 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.525047 4632 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.903083 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:03:54 crc kubenswrapper[4632]: I0313 10:03:54.954166 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 13 10:03:55 crc kubenswrapper[4632]: I0313 10:03:55.159798 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 10:03:55 crc kubenswrapper[4632]: I0313 10:03:55.201242 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:55 crc kubenswrapper[4632]: I0313 10:03:55.201242 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:55 crc kubenswrapper[4632]: I0313 10:03:55.201243 4632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:03:55 crc kubenswrapper[4632]: I0313 10:03:55.201448 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:55 crc kubenswrapper[4632]: I0313 10:03:55.202180 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:55 crc kubenswrapper[4632]: I0313 10:03:55.202220 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:55 crc kubenswrapper[4632]: I0313 10:03:55.202236 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:55 crc kubenswrapper[4632]: I0313 10:03:55.202265 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:55 crc kubenswrapper[4632]: I0313 10:03:55.202285 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:55 crc kubenswrapper[4632]: I0313 10:03:55.202294 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:55 crc kubenswrapper[4632]: I0313 10:03:55.202452 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:55 crc kubenswrapper[4632]: I0313 10:03:55.202470 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:55 crc kubenswrapper[4632]: I0313 10:03:55.202479 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:56 crc kubenswrapper[4632]: I0313 10:03:56.088702 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:03:56 crc kubenswrapper[4632]: I0313 10:03:56.191920 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 13 10:03:56 crc kubenswrapper[4632]: I0313 10:03:56.207007 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:56 crc kubenswrapper[4632]: I0313 10:03:56.207186 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:56 crc 
kubenswrapper[4632]: I0313 10:03:56.207882 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:56 crc kubenswrapper[4632]: I0313 10:03:56.207909 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:56 crc kubenswrapper[4632]: I0313 10:03:56.207919 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:56 crc kubenswrapper[4632]: I0313 10:03:56.208860 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:56 crc kubenswrapper[4632]: I0313 10:03:56.208881 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:56 crc kubenswrapper[4632]: I0313 10:03:56.208889 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:57 crc kubenswrapper[4632]: I0313 10:03:57.209826 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:57 crc kubenswrapper[4632]: I0313 10:03:57.211122 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:57 crc kubenswrapper[4632]: I0313 10:03:57.211179 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:57 crc kubenswrapper[4632]: I0313 10:03:57.211203 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:58 crc kubenswrapper[4632]: E0313 10:03:58.131323 4632 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 13 10:03:58 crc kubenswrapper[4632]: I0313 10:03:58.160038 4632 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 10:03:58 crc kubenswrapper[4632]: I0313 10:03:58.160153 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 10:03:59 crc kubenswrapper[4632]: I0313 10:03:59.463997 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 10:03:59 crc kubenswrapper[4632]: I0313 10:03:59.464287 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:03:59 crc kubenswrapper[4632]: I0313 10:03:59.468022 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:03:59 crc kubenswrapper[4632]: I0313 10:03:59.468097 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:03:59 crc kubenswrapper[4632]: I0313 10:03:59.468129 4632 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:03:59 crc kubenswrapper[4632]: I0313 10:03:59.474003 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 10:04:00 crc kubenswrapper[4632]: I0313 10:04:00.218000 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:00 crc kubenswrapper[4632]: I0313 10:04:00.219863 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:00 crc kubenswrapper[4632]: I0313 10:04:00.219973 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:00 crc kubenswrapper[4632]: I0313 10:04:00.219991 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:00 crc kubenswrapper[4632]: I0313 10:04:00.222810 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 10:04:01 crc kubenswrapper[4632]: I0313 10:04:01.220760 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:01 crc kubenswrapper[4632]: I0313 10:04:01.222091 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:01 crc kubenswrapper[4632]: I0313 10:04:01.222151 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:01 crc kubenswrapper[4632]: I0313 10:04:01.222164 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:02 crc kubenswrapper[4632]: W0313 10:04:02.403085 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 13 10:04:02 crc kubenswrapper[4632]: I0313 10:04:02.403278 4632 trace.go:236] Trace[1189007437]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Mar-2026 10:03:52.401) (total time: 10001ms): Mar 13 10:04:02 crc kubenswrapper[4632]: Trace[1189007437]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:04:02.403) Mar 13 10:04:02 crc kubenswrapper[4632]: Trace[1189007437]: [10.001998026s] [10.001998026s] END Mar 13 10:04:02 crc kubenswrapper[4632]: E0313 10:04:02.403323 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 13 10:04:02 crc kubenswrapper[4632]: I0313 10:04:02.434546 4632 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 10:04:02 crc kubenswrapper[4632]: I0313 
10:04:02.434977 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 10:04:02 crc kubenswrapper[4632]: I0313 10:04:02.976632 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Mar 13 10:04:03 crc kubenswrapper[4632]: I0313 10:04:03.226421 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 13 10:04:03 crc kubenswrapper[4632]: I0313 10:04:03.227842 4632 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7b911e0926eac089864b65a40a27d5996412dc7dd93176dc1472b5e6fee82ee0" exitCode=255 Mar 13 10:04:03 crc kubenswrapper[4632]: I0313 10:04:03.228018 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7b911e0926eac089864b65a40a27d5996412dc7dd93176dc1472b5e6fee82ee0"} Mar 13 10:04:03 crc kubenswrapper[4632]: I0313 10:04:03.228246 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:03 crc kubenswrapper[4632]: I0313 10:04:03.229120 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:03 crc kubenswrapper[4632]: I0313 10:04:03.229234 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:03 crc kubenswrapper[4632]: I0313 10:04:03.229326 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:03 crc kubenswrapper[4632]: I0313 10:04:03.230065 4632 scope.go:117] "RemoveContainer" containerID="7b911e0926eac089864b65a40a27d5996412dc7dd93176dc1472b5e6fee82ee0" Mar 13 10:04:04 crc kubenswrapper[4632]: E0313 10:04:04.065306 4632 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 13 10:04:04 crc kubenswrapper[4632]: E0313 10:04:04.203243 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 13 10:04:04 crc kubenswrapper[4632]: I0313 10:04:04.232666 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 13 10:04:04 crc kubenswrapper[4632]: I0313 10:04:04.234670 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d83208b114211c941c04a286b532683c256ad92512d1e3fac27e249095b31d4a"} Mar 13 10:04:04 crc kubenswrapper[4632]: I0313 10:04:04.234817 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:04 crc kubenswrapper[4632]: I0313 10:04:04.235731 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:04 crc kubenswrapper[4632]: I0313 10:04:04.235757 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:04 crc kubenswrapper[4632]: I0313 10:04:04.235765 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:04 crc kubenswrapper[4632]: E0313 10:04:04.469410 4632 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Mar 13 10:04:04 crc kubenswrapper[4632]: W0313 10:04:04.806568 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:04Z is after 2026-02-23T05:33:13Z Mar 13 10:04:04 crc kubenswrapper[4632]: E0313 10:04:04.806660 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 13 10:04:04 crc kubenswrapper[4632]: W0313 10:04:04.811442 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:04Z is after 2026-02-23T05:33:13Z Mar 13 10:04:04 crc kubenswrapper[4632]: E0313 10:04:04.811827 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 13 10:04:04 crc kubenswrapper[4632]: W0313 10:04:04.813258 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:04Z is after 2026-02-23T05:33:13Z Mar 13 10:04:04 crc kubenswrapper[4632]: E0313 10:04:04.813364 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 13 10:04:04 crc kubenswrapper[4632]: I0313 10:04:04.814769 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:04Z is after 2026-02-23T05:33:13Z Mar 13 10:04:04 crc kubenswrapper[4632]: E0313 10:04:04.818452 4632 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:04Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189c5e79366017e8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:47.9734538 +0000 UTC m=+1.995983933,LastTimestamp:2026-03-13 10:03:47.9734538 +0000 UTC m=+1.995983933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:04 crc kubenswrapper[4632]: I0313 10:04:04.859436 4632 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 13 10:04:04 crc kubenswrapper[4632]: I0313 10:04:04.859839 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 13 10:04:04 crc kubenswrapper[4632]: I0313 10:04:04.987995 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 13 10:04:04 crc kubenswrapper[4632]: I0313 10:04:04.988591 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:04 crc kubenswrapper[4632]: I0313 10:04:04.990363 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:04 crc kubenswrapper[4632]: I0313 10:04:04.990412 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:04 crc kubenswrapper[4632]: I0313 10:04:04.990421 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:04 crc kubenswrapper[4632]: I0313 10:04:04.993970 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T10:04:04Z is after 2026-02-23T05:33:13Z Mar 13 10:04:05 crc kubenswrapper[4632]: I0313 10:04:05.004695 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 13 10:04:05 crc kubenswrapper[4632]: I0313 10:04:05.247418 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:05 crc kubenswrapper[4632]: I0313 10:04:05.248517 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:05 crc kubenswrapper[4632]: I0313 10:04:05.248677 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:05 crc kubenswrapper[4632]: I0313 10:04:05.248750 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:05 crc kubenswrapper[4632]: I0313 10:04:05.977860 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:05Z is after 2026-02-23T05:33:13Z Mar 13 10:04:06 crc kubenswrapper[4632]: I0313 10:04:06.088751 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:04:06 crc kubenswrapper[4632]: I0313 10:04:06.089039 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:06 crc kubenswrapper[4632]: I0313 10:04:06.090300 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:06 crc kubenswrapper[4632]: I0313 10:04:06.090419 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:06 crc kubenswrapper[4632]: I0313 10:04:06.090527 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:06 crc kubenswrapper[4632]: I0313 10:04:06.252279 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 13 10:04:06 crc kubenswrapper[4632]: I0313 10:04:06.253545 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 13 10:04:06 crc kubenswrapper[4632]: I0313 10:04:06.255931 4632 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d83208b114211c941c04a286b532683c256ad92512d1e3fac27e249095b31d4a" exitCode=255 Mar 13 10:04:06 crc kubenswrapper[4632]: I0313 10:04:06.256002 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d83208b114211c941c04a286b532683c256ad92512d1e3fac27e249095b31d4a"} Mar 13 10:04:06 crc kubenswrapper[4632]: I0313 10:04:06.256040 4632 scope.go:117] "RemoveContainer" containerID="7b911e0926eac089864b65a40a27d5996412dc7dd93176dc1472b5e6fee82ee0" Mar 13 10:04:06 crc kubenswrapper[4632]: I0313 10:04:06.256498 4632 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Mar 13 10:04:06 crc kubenswrapper[4632]: I0313 10:04:06.258248 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:06 crc kubenswrapper[4632]: I0313 10:04:06.258311 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:06 crc kubenswrapper[4632]: I0313 10:04:06.258325 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:06 crc kubenswrapper[4632]: I0313 10:04:06.259545 4632 scope.go:117] "RemoveContainer" containerID="d83208b114211c941c04a286b532683c256ad92512d1e3fac27e249095b31d4a" Mar 13 10:04:06 crc kubenswrapper[4632]: E0313 10:04:06.259984 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 10:04:06 crc kubenswrapper[4632]: I0313 10:04:06.978694 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:06Z is after 2026-02-23T05:33:13Z Mar 13 10:04:07 crc kubenswrapper[4632]: I0313 10:04:07.261439 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 13 10:04:07 crc kubenswrapper[4632]: I0313 10:04:07.441894 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:04:07 crc kubenswrapper[4632]: I0313 10:04:07.442148 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:07 crc kubenswrapper[4632]: I0313 10:04:07.443502 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:07 crc kubenswrapper[4632]: I0313 10:04:07.443538 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:07 crc kubenswrapper[4632]: I0313 10:04:07.443550 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:07 crc kubenswrapper[4632]: I0313 10:04:07.444272 4632 scope.go:117] "RemoveContainer" containerID="d83208b114211c941c04a286b532683c256ad92512d1e3fac27e249095b31d4a" Mar 13 10:04:07 crc kubenswrapper[4632]: E0313 10:04:07.444455 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 10:04:07 crc kubenswrapper[4632]: I0313 10:04:07.447398 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 
10:04:07 crc kubenswrapper[4632]: W0313 10:04:07.901792 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:07Z is after 2026-02-23T05:33:13Z Mar 13 10:04:07 crc kubenswrapper[4632]: E0313 10:04:07.901903 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:07Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 13 10:04:07 crc kubenswrapper[4632]: I0313 10:04:07.978303 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:07Z is after 2026-02-23T05:33:13Z Mar 13 10:04:08 crc kubenswrapper[4632]: E0313 10:04:08.131586 4632 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 13 10:04:08 crc kubenswrapper[4632]: I0313 10:04:08.164291 4632 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 10:04:08 crc kubenswrapper[4632]: I0313 10:04:08.164380 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 10:04:08 crc kubenswrapper[4632]: I0313 10:04:08.266204 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:08 crc kubenswrapper[4632]: I0313 10:04:08.267473 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:08 crc kubenswrapper[4632]: I0313 10:04:08.267544 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:08 crc kubenswrapper[4632]: I0313 10:04:08.267579 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:08 crc kubenswrapper[4632]: I0313 10:04:08.268147 4632 scope.go:117] "RemoveContainer" containerID="d83208b114211c941c04a286b532683c256ad92512d1e3fac27e249095b31d4a" Mar 13 10:04:08 crc kubenswrapper[4632]: E0313 10:04:08.268309 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 10:04:08 crc kubenswrapper[4632]: I0313 10:04:08.979660 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:08Z is after 2026-02-23T05:33:13Z Mar 13 10:04:09 crc kubenswrapper[4632]: I0313 10:04:09.977555 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:09Z is after 2026-02-23T05:33:13Z Mar 13 10:04:10 crc kubenswrapper[4632]: E0313 10:04:10.606863 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:10Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 13 10:04:10 crc kubenswrapper[4632]: I0313 10:04:10.870248 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:10 crc kubenswrapper[4632]: I0313 10:04:10.871669 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:10 crc kubenswrapper[4632]: I0313 10:04:10.871709 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:10 crc kubenswrapper[4632]: I0313 10:04:10.871719 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:10 crc kubenswrapper[4632]: I0313 10:04:10.871746 4632 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 10:04:10 crc kubenswrapper[4632]: E0313 10:04:10.875571 4632 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:10Z is after 2026-02-23T05:33:13Z" node="crc" Mar 13 10:04:10 crc kubenswrapper[4632]: I0313 10:04:10.978860 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:04:10Z is after 2026-02-23T05:33:13Z Mar 13 10:04:11 crc kubenswrapper[4632]: W0313 10:04:11.420698 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 13 10:04:11 crc kubenswrapper[4632]: E0313 10:04:11.421033 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at 
the cluster scope" logger="UnhandledError" Mar 13 10:04:11 crc kubenswrapper[4632]: I0313 10:04:11.979671 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:04:12 crc kubenswrapper[4632]: I0313 10:04:12.189607 4632 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 13 10:04:12 crc kubenswrapper[4632]: I0313 10:04:12.210985 4632 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 13 10:04:12 crc kubenswrapper[4632]: W0313 10:04:12.864606 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 13 10:04:12 crc kubenswrapper[4632]: E0313 10:04:12.864682 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 13 10:04:12 crc kubenswrapper[4632]: I0313 10:04:12.978865 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:04:13 crc kubenswrapper[4632]: I0313 10:04:13.979344 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.824701 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79366017e8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:47.9734538 +0000 UTC m=+1.995983933,LastTimestamp:2026-03-13 10:03:47.9734538 +0000 UTC m=+1.995983933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.830225 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397edfe0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.02580272 +0000 UTC 
m=+2.048332843,LastTimestamp:2026-03-13 10:03:48.02580272 +0000 UTC m=+2.048332843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.831765 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397f2ff5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.025823221 +0000 UTC m=+2.048353354,LastTimestamp:2026-03-13 10:03:48.025823221 +0000 UTC m=+2.048353354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.837061 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397f5877 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.025833591 +0000 UTC m=+2.048363724,LastTimestamp:2026-03-13 10:03:48.025833591 +0000 UTC m=+2.048363724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.842283 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e793f688428 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.125000744 +0000 UTC m=+2.147530877,LastTimestamp:2026-03-13 10:03:48.125000744 +0000 UTC m=+2.147530877,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.847574 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397edfe0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397edfe0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.02580272 +0000 UTC m=+2.048332843,LastTimestamp:2026-03-13 10:03:48.14510877 +0000 UTC m=+2.167638903,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.852547 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397f2ff5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397f2ff5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.025823221 +0000 UTC m=+2.048353354,LastTimestamp:2026-03-13 10:03:48.1451216 +0000 UTC m=+2.167651733,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.858765 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397f5877\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397f5877 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.025833591 +0000 UTC m=+2.048363724,LastTimestamp:2026-03-13 10:03:48.145129211 +0000 UTC m=+2.167659344,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.863981 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397edfe0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397edfe0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.02580272 +0000 UTC m=+2.048332843,LastTimestamp:2026-03-13 10:03:48.146382802 +0000 UTC m=+2.168912935,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.869984 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397f2ff5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397f2ff5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.025823221 +0000 UTC m=+2.048353354,LastTimestamp:2026-03-13 10:03:48.146391232 +0000 UTC m=+2.168921365,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.874808 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397f5877\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397f5877 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.025833591 +0000 UTC m=+2.048363724,LastTimestamp:2026-03-13 10:03:48.146398402 +0000 UTC m=+2.168928535,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.879355 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397edfe0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397edfe0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.02580272 +0000 UTC m=+2.048332843,LastTimestamp:2026-03-13 10:03:48.146910214 +0000 UTC m=+2.169440337,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.884482 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397f2ff5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397f2ff5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.025823221 +0000 UTC m=+2.048353354,LastTimestamp:2026-03-13 10:03:48.146926325 +0000 UTC m=+2.169456458,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.889126 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397f5877\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.189c5e79397f5877 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.025833591 +0000 UTC m=+2.048363724,LastTimestamp:2026-03-13 10:03:48.146956875 +0000 UTC m=+2.169486998,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.896289 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397edfe0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397edfe0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.02580272 +0000 UTC m=+2.048332843,LastTimestamp:2026-03-13 10:03:48.147313284 +0000 UTC m=+2.169843417,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.902553 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397f2ff5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397f2ff5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.025823221 +0000 UTC m=+2.048353354,LastTimestamp:2026-03-13 10:03:48.147326084 +0000 UTC m=+2.169856217,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.909793 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397f5877\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397f5877 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.025833591 +0000 UTC m=+2.048363724,LastTimestamp:2026-03-13 10:03:48.147333314 +0000 UTC m=+2.169863447,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.916131 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397edfe0\" 
is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397edfe0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.02580272 +0000 UTC m=+2.048332843,LastTimestamp:2026-03-13 10:03:48.147398086 +0000 UTC m=+2.169928209,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.920521 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397f2ff5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397f2ff5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.025823221 +0000 UTC m=+2.048353354,LastTimestamp:2026-03-13 10:03:48.147450927 +0000 UTC m=+2.169981060,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.925465 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397f5877\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397f5877 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.025833591 +0000 UTC m=+2.048363724,LastTimestamp:2026-03-13 10:03:48.147464837 +0000 UTC m=+2.169994960,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.930599 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397edfe0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397edfe0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.02580272 +0000 UTC m=+2.048332843,LastTimestamp:2026-03-13 10:03:48.148020562 +0000 UTC m=+2.170550695,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 
10:04:14.935402 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397f2ff5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397f2ff5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.025823221 +0000 UTC m=+2.048353354,LastTimestamp:2026-03-13 10:03:48.148036082 +0000 UTC m=+2.170566215,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.940082 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397f5877\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397f5877 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.025833591 +0000 UTC m=+2.048363724,LastTimestamp:2026-03-13 10:03:48.148044682 +0000 UTC m=+2.170574815,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.948511 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397edfe0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397edfe0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.02580272 +0000 UTC m=+2.048332843,LastTimestamp:2026-03-13 10:03:48.148560545 +0000 UTC m=+2.171090678,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.952770 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c5e79397f2ff5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c5e79397f2ff5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.025823221 +0000 UTC m=+2.048353354,LastTimestamp:2026-03-13 10:03:48.148581795 +0000 UTC m=+2.171111928,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.958641 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c5e795950ab80 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.559645568 +0000 UTC m=+2.582175701,LastTimestamp:2026-03-13 10:03:48.559645568 +0000 UTC m=+2.582175701,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.964130 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e795951112e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.559671598 +0000 UTC m=+2.582201731,LastTimestamp:2026-03-13 10:03:48.559671598 +0000 UTC m=+2.582201731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.968636 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e795a3382cc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.57451182 +0000 UTC m=+2.597041943,LastTimestamp:2026-03-13 10:03:48.57451182 +0000 UTC m=+2.597041943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.973208 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e795a4077c9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.575360969 +0000 UTC m=+2.597891102,LastTimestamp:2026-03-13 10:03:48.575360969 +0000 UTC m=+2.597891102,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: I0313 10:04:14.980332 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.980919 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c5e795b989d15 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:48.597914901 +0000 UTC m=+2.620445034,LastTimestamp:2026-03-13 10:03:48.597914901 +0000 UTC m=+2.620445034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.987069 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c5e798179cdf1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.233430001 +0000 UTC 
m=+3.255960134,LastTimestamp:2026-03-13 10:03:49.233430001 +0000 UTC m=+3.255960134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.991511 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7981984065 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.235425381 +0000 UTC m=+3.257955514,LastTimestamp:2026-03-13 10:03:49.235425381 +0000 UTC m=+3.257955514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:14 crc kubenswrapper[4632]: E0313 10:04:14.995728 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e7981985221 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.235429921 +0000 UTC m=+3.257960054,LastTimestamp:2026-03-13 10:03:49.235429921 +0000 UTC m=+3.257960054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.000195 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e79819bd53e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.235660094 +0000 UTC m=+3.258190227,LastTimestamp:2026-03-13 10:03:49.235660094 +0000 UTC m=+3.258190227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.005466 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c5e7981a62fef openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.236338671 +0000 UTC m=+3.258868804,LastTimestamp:2026-03-13 10:03:49.236338671 +0000 UTC m=+3.258868804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.010439 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c5e798285710f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.250969871 +0000 UTC m=+3.273500004,LastTimestamp:2026-03-13 10:03:49.250969871 +0000 UTC m=+3.273500004,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.015754 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e7982958adb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.252025051 +0000 UTC m=+3.274555184,LastTimestamp:2026-03-13 10:03:49.252025051 +0000 UTC m=+3.274555184,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.021883 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e79829857b3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container 
kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.252208563 +0000 UTC m=+3.274738696,LastTimestamp:2026-03-13 10:03:49.252208563 +0000 UTC m=+3.274738696,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.026300 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e79829e7e2c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.252611628 +0000 UTC m=+3.275141761,LastTimestamp:2026-03-13 10:03:49.252611628 +0000 UTC m=+3.275141761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: I0313 10:04:15.031048 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.031083 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c5e79829fc215 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.252694549 +0000 UTC m=+3.275224692,LastTimestamp:2026-03-13 10:03:49.252694549 +0000 UTC m=+3.275224692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: I0313 10:04:15.031267 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:15 crc kubenswrapper[4632]: I0313 10:04:15.032496 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:15 crc kubenswrapper[4632]: I0313 10:04:15.032650 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:15 crc kubenswrapper[4632]: I0313 10:04:15.032761 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:15 crc kubenswrapper[4632]: I0313 10:04:15.033547 4632 scope.go:117] "RemoveContainer" containerID="d83208b114211c941c04a286b532683c256ad92512d1e3fac27e249095b31d4a" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.033845 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.036915 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e7982b87027 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.254311975 +0000 UTC m=+3.276842108,LastTimestamp:2026-03-13 10:03:49.254311975 +0000 UTC m=+3.276842108,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.041845 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e799377a0ed openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.535277293 +0000 UTC m=+3.557807426,LastTimestamp:2026-03-13 10:03:49.535277293 +0000 UTC m=+3.557807426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.046857 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e7993fba23d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.543928381 +0000 UTC m=+3.566458514,LastTimestamp:2026-03-13 10:03:49.543928381 +0000 UTC 
m=+3.566458514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.051402 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e79940a3838 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.54488428 +0000 UTC m=+3.567414413,LastTimestamp:2026-03-13 10:03:49.54488428 +0000 UTC m=+3.567414413,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.055708 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e799e3100d1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.715198161 +0000 UTC m=+3.737728294,LastTimestamp:2026-03-13 10:03:49.715198161 +0000 UTC m=+3.737728294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.059838 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e799f3a6138 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.73258988 +0000 UTC m=+3.755120013,LastTimestamp:2026-03-13 10:03:49.73258988 +0000 UTC m=+3.755120013,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.065064 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e799f513f39 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.734088505 +0000 UTC m=+3.756618668,LastTimestamp:2026-03-13 10:03:49.734088505 +0000 UTC m=+3.756618668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.069466 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e79ab0f25ab openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.931083179 +0000 UTC m=+3.953613312,LastTimestamp:2026-03-13 10:03:49.931083179 +0000 UTC m=+3.953613312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.074382 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e79abd022fb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.943730939 +0000 UTC m=+3.966261072,LastTimestamp:2026-03-13 10:03:49.943730939 +0000 UTC m=+3.966261072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.079687 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c5e79b36b9370 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.07135832 +0000 UTC m=+4.093888453,LastTimestamp:2026-03-13 10:03:50.07135832 +0000 UTC m=+4.093888453,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.084416 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c5e79b3da235d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.078604125 +0000 UTC m=+4.101134258,LastTimestamp:2026-03-13 10:03:50.078604125 +0000 UTC m=+4.101134258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.088705 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e79b46640e9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.087786729 +0000 UTC m=+4.110316862,LastTimestamp:2026-03-13 10:03:50.087786729 +0000 UTC m=+4.110316862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.093383 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e79b4aed344 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.092542788 +0000 UTC m=+4.115072921,LastTimestamp:2026-03-13 10:03:50.092542788 +0000 UTC m=+4.115072921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.097126 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c5e79c251aaf9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.321318649 +0000 UTC m=+4.343848782,LastTimestamp:2026-03-13 10:03:50.321318649 +0000 UTC m=+4.343848782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.101477 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c5e79c26e2c82 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.323186818 +0000 UTC m=+4.345716951,LastTimestamp:2026-03-13 10:03:50.323186818 +0000 UTC m=+4.345716951,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.106033 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e79c29b6968 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.326151528 +0000 UTC m=+4.348681661,LastTimestamp:2026-03-13 10:03:50.326151528 +0000 UTC m=+4.348681661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.110084 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e79c29b6e86 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.326152838 +0000 UTC m=+4.348682971,LastTimestamp:2026-03-13 10:03:50.326152838 +0000 UTC m=+4.348682971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.113538 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e79c41abdb4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.351273396 +0000 UTC m=+4.373803529,LastTimestamp:2026-03-13 10:03:50.351273396 +0000 UTC m=+4.373803529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.117308 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e79c431eb52 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.352792402 +0000 UTC m=+4.375322535,LastTimestamp:2026-03-13 10:03:50.352792402 +0000 UTC m=+4.375322535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.121198 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c5e79c45b65de openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.35551075 +0000 UTC m=+4.378040883,LastTimestamp:2026-03-13 10:03:50.35551075 +0000 UTC m=+4.378040883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.125584 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c5e79c47fbf64 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.357892964 +0000 UTC m=+4.380423107,LastTimestamp:2026-03-13 10:03:50.357892964 +0000 UTC m=+4.380423107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.131806 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e79c488c1e7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.358483431 +0000 UTC m=+4.381013564,LastTimestamp:2026-03-13 10:03:50.358483431 +0000 UTC m=+4.381013564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.135763 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c5e79c4bfb9e2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.362085858 +0000 UTC m=+4.384615991,LastTimestamp:2026-03-13 10:03:50.362085858 +0000 UTC m=+4.384615991,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.139912 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e79d27605b7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.592136631 +0000 UTC m=+4.614666764,LastTimestamp:2026-03-13 10:03:50.592136631 +0000 UTC m=+4.614666764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.142017 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c5e79d2958489 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.594200713 +0000 UTC m=+4.616730836,LastTimestamp:2026-03-13 10:03:50.594200713 +0000 UTC m=+4.616730836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.144054 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e79d35fd0af openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.607458479 +0000 UTC m=+4.629988612,LastTimestamp:2026-03-13 10:03:50.607458479 +0000 UTC m=+4.629988612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.147987 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c5e79d371df95 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.608641941 +0000 UTC m=+4.631172074,LastTimestamp:2026-03-13 10:03:50.608641941 +0000 UTC m=+4.631172074,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.151808 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e79d3853e76 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.609911414 +0000 UTC m=+4.632441547,LastTimestamp:2026-03-13 10:03:50.609911414 +0000 UTC m=+4.632441547,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.155834 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c5e79d3867521 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.609990945 +0000 UTC m=+4.632521078,LastTimestamp:2026-03-13 10:03:50.609990945 +0000 UTC m=+4.632521078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.160611 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e79dec011c2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.79831597 +0000 UTC m=+4.820846103,LastTimestamp:2026-03-13 10:03:50.79831597 +0000 UTC m=+4.820846103,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.164588 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e79e0ae4f97 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.830706583 +0000 UTC m=+4.853236716,LastTimestamp:2026-03-13 10:03:50.830706583 +0000 UTC m=+4.853236716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.170489 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e79e0ce3826 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.832797734 +0000 UTC m=+4.855327857,LastTimestamp:2026-03-13 10:03:50.832797734 +0000 UTC m=+4.855327857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.175593 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c5e79e3ca15ef openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.882858479 +0000 UTC m=+4.905388612,LastTimestamp:2026-03-13 10:03:50.882858479 +0000 UTC m=+4.905388612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.179901 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c5e79e5721b18 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:50.910647064 +0000 UTC m=+4.933177197,LastTimestamp:2026-03-13 10:03:50.910647064 +0000 UTC m=+4.933177197,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.185611 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e79f3d5d759 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:51.152064345 +0000 UTC m=+5.174594478,LastTimestamp:2026-03-13 10:03:51.152064345 +0000 UTC m=+5.174594478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.191214 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e79fce736c4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:51.304197828 +0000 UTC m=+5.326727971,LastTimestamp:2026-03-13 10:03:51.304197828 +0000 UTC m=+5.326727971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.196056 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e79fdc3ac6a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:51.318645866 +0000 UTC m=+5.341176009,LastTimestamp:2026-03-13 10:03:51.318645866 +0000 UTC m=+5.341176009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.200118 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e79fe0a6dba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:51.323282874 +0000 UTC 
m=+5.345813027,LastTimestamp:2026-03-13 10:03:51.323282874 +0000 UTC m=+5.345813027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.205333 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a0663e194 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:51.463362964 +0000 UTC m=+5.485893097,LastTimestamp:2026-03-13 10:03:51.463362964 +0000 UTC m=+5.485893097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.210632 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a07d9a447 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:51.487857735 +0000 UTC m=+5.510387868,LastTimestamp:2026-03-13 10:03:51.487857735 +0000 UTC m=+5.510387868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.215342 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e7a0a4e98df openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:51.529076959 +0000 UTC m=+5.551607092,LastTimestamp:2026-03-13 10:03:51.529076959 +0000 UTC m=+5.551607092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.220731 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e7a0b1d59ed openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:51.542626797 +0000 UTC m=+5.565156930,LastTimestamp:2026-03-13 10:03:51.542626797 +0000 UTC m=+5.565156930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.227054 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a319410b5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:52.187941045 +0000 UTC m=+6.210471178,LastTimestamp:2026-03-13 10:03:52.187941045 +0000 UTC m=+6.210471178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.233036 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a4005fd86 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:52.430288262 +0000 UTC m=+6.452818395,LastTimestamp:2026-03-13 10:03:52.430288262 +0000 UTC m=+6.452818395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.238361 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a40ac380b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:52.441182219 +0000 UTC 
m=+6.463712352,LastTimestamp:2026-03-13 10:03:52.441182219 +0000 UTC m=+6.463712352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.242666 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a40c5253f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:52.442815807 +0000 UTC m=+6.465345940,LastTimestamp:2026-03-13 10:03:52.442815807 +0000 UTC m=+6.465345940,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.247066 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a4c0ec186 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:52.632189318 +0000 UTC m=+6.654719451,LastTimestamp:2026-03-13 10:03:52.632189318 +0000 UTC m=+6.654719451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.251305 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a4cd5725b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:52.645210715 +0000 UTC m=+6.667740838,LastTimestamp:2026-03-13 10:03:52.645210715 +0000 UTC m=+6.667740838,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.256470 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a4ce5b0db openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:52.646275291 +0000 UTC m=+6.668805424,LastTimestamp:2026-03-13 10:03:52.646275291 +0000 UTC m=+6.668805424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.261217 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a57999173 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:52.825835891 +0000 UTC m=+6.848366014,LastTimestamp:2026-03-13 10:03:52.825835891 +0000 UTC m=+6.848366014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.265767 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a584d82de openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:52.837628638 +0000 UTC m=+6.860158791,LastTimestamp:2026-03-13 10:03:52.837628638 +0000 UTC m=+6.860158791,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.271435 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a587ba633 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:52.840652339 +0000 UTC 
m=+6.863182482,LastTimestamp:2026-03-13 10:03:52.840652339 +0000 UTC m=+6.863182482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.277291 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a6325ad2d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:53.019567405 +0000 UTC m=+7.042097538,LastTimestamp:2026-03-13 10:03:53.019567405 +0000 UTC m=+7.042097538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.284841 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a679dc730 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:53.094547248 +0000 UTC m=+7.117077381,LastTimestamp:2026-03-13 10:03:53.094547248 +0000 UTC m=+7.117077381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.291135 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a67b1f94f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:53.095870799 +0000 UTC m=+7.118400922,LastTimestamp:2026-03-13 10:03:53.095870799 +0000 UTC m=+7.118400922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.299151 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a73018ae6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:53.28563479 +0000 UTC m=+7.308164923,LastTimestamp:2026-03-13 10:03:53.28563479 +0000 UTC m=+7.308164923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.302682 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c5e7a73d77cd2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:53.29965589 +0000 UTC m=+7.322186023,LastTimestamp:2026-03-13 10:03:53.29965589 +0000 UTC m=+7.322186023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.312243 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Mar 13 10:04:15 crc kubenswrapper[4632]: &Event{ObjectMeta:{kube-controller-manager-crc.189c5e7b958c683c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar 13 10:04:15 crc kubenswrapper[4632]: body:
Mar 13 10:04:15 crc kubenswrapper[4632]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:58.16012806 +0000 UTC m=+12.182658183,LastTimestamp:2026-03-13 10:03:58.16012806 +0000 UTC m=+12.182658183,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 13 10:04:15 crc kubenswrapper[4632]: >
Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.335919 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e7b958da514 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:58.160209172 +0000 UTC m=+12.182739325,LastTimestamp:2026-03-13 10:03:58.160209172 +0000 UTC m=+12.182739325,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.340302 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Mar 13 10:04:15 crc kubenswrapper[4632]: &Event{ObjectMeta:{kube-apiserver-crc.189c5e7c94589d41 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar 13 10:04:15 crc kubenswrapper[4632]: body:
Mar 13 10:04:15 crc kubenswrapper[4632]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:04:02.434923841 +0000 UTC m=+16.457453974,LastTimestamp:2026-03-13 10:04:02.434923841 +0000 UTC m=+16.457453974,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 13 10:04:15 crc kubenswrapper[4632]: >
Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.344022 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e7c945af72f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:04:02.435077935 +0000 UTC m=+16.457608068,LastTimestamp:2026-03-13 10:04:02.435077935 +0000 UTC m=+16.457608068,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.348584 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c5e79fe0a6dba\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e79fe0a6dba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:51.323282874 +0000 UTC m=+5.345813027,LastTimestamp:2026-03-13 10:04:03.231962788 +0000 UTC m=+17.254492931,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.351883 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c5e7a0a4e98df\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e7a0a4e98df openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:51.529076959 +0000 UTC m=+5.551607092,LastTimestamp:2026-03-13 10:04:03.56217343 +0000 UTC m=+17.584703563,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.355172 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c5e7a0b1d59ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e7a0b1d59ed openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:51.542626797 +0000 UTC m=+5.565156930,LastTimestamp:2026-03-13 10:04:03.573111098 +0000 UTC m=+17.595641231,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.358802 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Mar 13 10:04:15 crc kubenswrapper[4632]: &Event{ObjectMeta:{kube-apiserver-crc.189c5e7d24e13615 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Mar 13 10:04:15 crc kubenswrapper[4632]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Mar 13 10:04:15 crc kubenswrapper[4632]:
Mar 13 10:04:15 crc kubenswrapper[4632]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:04:04.859794965 +0000 UTC m=+18.882325098,LastTimestamp:2026-03-13 10:04:04.859794965 +0000 UTC m=+18.882325098,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 13 10:04:15 crc kubenswrapper[4632]: >
Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.362218 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c5e7d24e3d879 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:04:04.859967609 +0000 UTC m=+18.882497742,LastTimestamp:2026-03-13 10:04:04.859967609 +0000 UTC m=+18.882497742,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.366289 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c5e7b958c683c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Mar 13 10:04:15 crc kubenswrapper[4632]: &Event{ObjectMeta:{kube-controller-manager-crc.189c5e7b958c683c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar 13 10:04:15 crc kubenswrapper[4632]: body:
Mar 13 10:04:15 crc kubenswrapper[4632]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:58.16012806 +0000 UTC m=+12.182658183,LastTimestamp:2026-03-13 10:04:08.164353209 +0000 UTC m=+22.186883342,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 13 10:04:15 crc kubenswrapper[4632]: >
Mar 13 10:04:15 crc kubenswrapper[4632]: E0313 10:04:15.369591 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c5e7b958da514\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e7b958da514 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:58.160209172 +0000 UTC m=+12.182739325,LastTimestamp:2026-03-13 10:04:08.16441919 +0000 UTC m=+22.186949323,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:15 crc kubenswrapper[4632]: I0313 10:04:15.979280 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:16 crc kubenswrapper[4632]: W0313 10:04:16.742063 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Mar 13 10:04:16 crc kubenswrapper[4632]: E0313 10:04:16.742162 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 13 10:04:16 crc kubenswrapper[4632]: I0313 10:04:16.979394 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:17 crc kubenswrapper[4632]: E0313 10:04:17.612904 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 13 10:04:17 crc kubenswrapper[4632]: I0313 10:04:17.876687 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:17 crc kubenswrapper[4632]: I0313 10:04:17.879079 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:17 crc kubenswrapper[4632]: I0313 10:04:17.879149 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:17 crc kubenswrapper[4632]: I0313 10:04:17.879166 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:17 crc kubenswrapper[4632]: I0313 10:04:17.879195 4632 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 13 10:04:17 crc kubenswrapper[4632]: E0313 10:04:17.901487 4632 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Mar 13 10:04:17 crc kubenswrapper[4632]: I0313 10:04:17.980253 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:18 crc kubenswrapper[4632]: E0313 10:04:18.131735 4632 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 13 10:04:18 crc kubenswrapper[4632]: I0313 10:04:18.161319 4632 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 10:04:18 crc kubenswrapper[4632]: I0313 10:04:18.161406 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:04:18 crc kubenswrapper[4632]: I0313 10:04:18.161484 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 13 10:04:18 crc kubenswrapper[4632]: I0313 10:04:18.161664 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:18 crc kubenswrapper[4632]: I0313 10:04:18.162968 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:18 crc kubenswrapper[4632]: I0313 10:04:18.163020 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:18 crc kubenswrapper[4632]: I0313 10:04:18.163046 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:18 crc kubenswrapper[4632]: I0313 10:04:18.163771 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"59e800b40d3f99e6d38fa7e28f06be6260e6ebe4bb5e7c73de6734d0617092ac"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 13 10:04:18 crc kubenswrapper[4632]: I0313 10:04:18.163988 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://59e800b40d3f99e6d38fa7e28f06be6260e6ebe4bb5e7c73de6734d0617092ac" gracePeriod=30
Mar 13 10:04:18 crc kubenswrapper[4632]: E0313 10:04:18.184838 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c5e7b958c683c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Mar 13 10:04:18 crc kubenswrapper[4632]: &Event{ObjectMeta:{kube-controller-manager-crc.189c5e7b958c683c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar 13 10:04:18 crc kubenswrapper[4632]: body:
Mar 13 10:04:18 crc kubenswrapper[4632]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:58.16012806 +0000 UTC m=+12.182658183,LastTimestamp:2026-03-13 10:04:18.161384938 +0000 UTC m=+32.183915071,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 13 10:04:18 crc kubenswrapper[4632]: >
Mar 13 10:04:18 crc kubenswrapper[4632]: E0313 10:04:18.191203 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c5e7b958da514\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e7b958da514 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:58.160209172 +0000 UTC m=+12.182739325,LastTimestamp:2026-03-13 10:04:18.16143983 +0000 UTC m=+32.183969963,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:18 crc kubenswrapper[4632]: E0313 10:04:18.203854 4632 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e803ddec832 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:04:18.16396805 +0000 UTC m=+32.186498193,LastTimestamp:2026-03-13 10:04:18.16396805 +0000 UTC m=+32.186498193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:18 crc kubenswrapper[4632]: E0313 10:04:18.292187 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c5e7982b87027\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e7982b87027 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.254311975 +0000 UTC m=+3.276842108,LastTimestamp:2026-03-13 10:04:18.286345496 +0000 UTC m=+32.308875629,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:18 crc kubenswrapper[4632]: I0313 10:04:18.301349 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 13 10:04:18 crc kubenswrapper[4632]: I0313 10:04:18.303052 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"59e800b40d3f99e6d38fa7e28f06be6260e6ebe4bb5e7c73de6734d0617092ac"}
Mar 13 10:04:18 crc kubenswrapper[4632]: I0313 10:04:18.303678 4632 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="59e800b40d3f99e6d38fa7e28f06be6260e6ebe4bb5e7c73de6734d0617092ac" exitCode=255
Mar 13 10:04:18 crc kubenswrapper[4632]: E0313 10:04:18.463702 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c5e799377a0ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e799377a0ed openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.535277293 +0000 UTC m=+3.557807426,LastTimestamp:2026-03-13 10:04:18.45786721 +0000 UTC m=+32.480397363,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:18 crc kubenswrapper[4632]: E0313 10:04:18.476675 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c5e7993fba23d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e7993fba23d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:49.543928381 +0000 UTC m=+3.566458514,LastTimestamp:2026-03-13 10:04:18.471551981 +0000 UTC m=+32.494082114,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:18 crc kubenswrapper[4632]: I0313 10:04:18.979715 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:19 crc kubenswrapper[4632]: I0313 10:04:19.309702 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 13 10:04:19 crc kubenswrapper[4632]: I0313 10:04:19.310220 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"951a71f1358e164fefc6fbe9404cf1d0e387d58b0bc1d060eeca64a85e0fc08e"}
Mar 13 10:04:19 crc kubenswrapper[4632]: I0313 10:04:19.310368 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:19 crc kubenswrapper[4632]: I0313 10:04:19.311979 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:19 crc kubenswrapper[4632]: I0313 10:04:19.312025 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:19 crc kubenswrapper[4632]: I0313 10:04:19.312044 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:19 crc kubenswrapper[4632]: I0313 10:04:19.979880 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:20 crc kubenswrapper[4632]: W0313 10:04:20.251851 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Mar 13 10:04:20 crc kubenswrapper[4632]: E0313 10:04:20.252415 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 13 10:04:20 crc kubenswrapper[4632]: I0313 10:04:20.312791 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:20 crc kubenswrapper[4632]: I0313 10:04:20.314087 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:20 crc kubenswrapper[4632]: I0313 10:04:20.314184 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:20 crc kubenswrapper[4632]: I0313 10:04:20.314203 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:20 crc kubenswrapper[4632]: I0313 10:04:20.981234 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:21 crc kubenswrapper[4632]: I0313 10:04:21.980137 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:22 crc kubenswrapper[4632]: I0313 10:04:22.979441 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:23 crc kubenswrapper[4632]: I0313 10:04:23.980020 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:24 crc kubenswrapper[4632]: I0313 10:04:24.525029 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 13 10:04:24 crc kubenswrapper[4632]: I0313 10:04:24.525294 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:24 crc kubenswrapper[4632]: I0313 10:04:24.526795 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:24 crc kubenswrapper[4632]: I0313 10:04:24.526840 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:24 crc kubenswrapper[4632]: I0313 10:04:24.526850 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:24 crc kubenswrapper[4632]: E0313 10:04:24.618503 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 13 10:04:24 crc kubenswrapper[4632]: I0313 10:04:24.902287 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:24 crc kubenswrapper[4632]: I0313 10:04:24.904126 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:24 crc kubenswrapper[4632]: I0313 10:04:24.904181 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:24 crc kubenswrapper[4632]: I0313 10:04:24.904198 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:24 crc kubenswrapper[4632]: I0313 10:04:24.904229 4632 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 13 10:04:24 crc kubenswrapper[4632]: E0313 10:04:24.909309 4632 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Mar 13 10:04:24 crc kubenswrapper[4632]: I0313 10:04:24.981133 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:25 crc kubenswrapper[4632]: I0313 10:04:25.160702 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 13 10:04:25 crc kubenswrapper[4632]: I0313 10:04:25.333212 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:25 crc kubenswrapper[4632]: I0313 10:04:25.334263 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:25 crc kubenswrapper[4632]: I0313 10:04:25.334321 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:25 crc kubenswrapper[4632]: I0313 10:04:25.334333 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:25 crc kubenswrapper[4632]: I0313 10:04:25.979346 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:26 crc kubenswrapper[4632]: I0313 10:04:26.043697 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:26 crc kubenswrapper[4632]: I0313 10:04:26.045990 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:26 crc kubenswrapper[4632]: I0313 10:04:26.046068 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:26 crc kubenswrapper[4632]: I0313 10:04:26.046081 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:26 crc kubenswrapper[4632]: I0313 10:04:26.046797 4632 scope.go:117] "RemoveContainer" containerID="d83208b114211c941c04a286b532683c256ad92512d1e3fac27e249095b31d4a"
Mar 13 10:04:26 crc kubenswrapper[4632]: I0313 10:04:26.338642 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Mar 13 10:04:26 crc kubenswrapper[4632]: I0313 10:04:26.340415 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6b2d0d7e12e3475e3d8c895d2175381ba90230c85db30b582d19853ce1e4ba54"}
Mar 13 10:04:26 crc kubenswrapper[4632]: I0313 10:04:26.340540 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:26 crc kubenswrapper[4632]: I0313 10:04:26.341288 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:26 crc kubenswrapper[4632]: I0313 10:04:26.341312 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:26 crc kubenswrapper[4632]: I0313 10:04:26.341320 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:26 crc kubenswrapper[4632]: I0313 10:04:26.979757 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:27 crc kubenswrapper[4632]: I0313 10:04:27.346033 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Mar 13 10:04:27 crc kubenswrapper[4632]: I0313 10:04:27.346497 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Mar 13 10:04:27 crc kubenswrapper[4632]: I0313 10:04:27.348443 4632 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6b2d0d7e12e3475e3d8c895d2175381ba90230c85db30b582d19853ce1e4ba54" exitCode=255
Mar 13 10:04:27 crc kubenswrapper[4632]: I0313 10:04:27.348492 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6b2d0d7e12e3475e3d8c895d2175381ba90230c85db30b582d19853ce1e4ba54"}
Mar 13 10:04:27 crc kubenswrapper[4632]: I0313 10:04:27.348546 4632 scope.go:117] "RemoveContainer" containerID="d83208b114211c941c04a286b532683c256ad92512d1e3fac27e249095b31d4a"
Mar 13 10:04:27 crc kubenswrapper[4632]: I0313 10:04:27.348670 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:27 crc kubenswrapper[4632]: I0313 10:04:27.349900 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:27 crc kubenswrapper[4632]: I0313 10:04:27.349961 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:27 crc kubenswrapper[4632]: I0313 10:04:27.350000 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:27 crc kubenswrapper[4632]: I0313 10:04:27.350786 4632 scope.go:117] "RemoveContainer" containerID="6b2d0d7e12e3475e3d8c895d2175381ba90230c85db30b582d19853ce1e4ba54"
Mar 13 10:04:27 crc kubenswrapper[4632]: E0313 10:04:27.351028 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 13 10:04:27 crc kubenswrapper[4632]: I0313 10:04:27.979602 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:28 crc kubenswrapper[4632]: E0313 10:04:28.132173 4632 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 13 10:04:28 crc kubenswrapper[4632]: I0313 10:04:28.161046 4632 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 10:04:28 crc kubenswrapper[4632]: I0313 10:04:28.161162 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:04:28 crc kubenswrapper[4632]: E0313 10:04:28.166654 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c5e7b958c683c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Mar 13 10:04:28 crc kubenswrapper[4632]: &Event{ObjectMeta:{kube-controller-manager-crc.189c5e7b958c683c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar 13 10:04:28 crc kubenswrapper[4632]: body:
Mar 13 10:04:28 crc kubenswrapper[4632]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:58.16012806 +0000 UTC m=+12.182658183,LastTimestamp:2026-03-13 10:04:28.161125237 +0000 UTC m=+42.183655380,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 13 10:04:28 crc kubenswrapper[4632]: >
Mar 13 10:04:28 crc kubenswrapper[4632]: E0313 10:04:28.171758 4632 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c5e7b958da514\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c5e7b958da514 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:03:58.160209172 +0000 UTC m=+12.182739325,LastTimestamp:2026-03-13 10:04:28.161198848 +0000 UTC m=+42.183729001,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:04:28 crc kubenswrapper[4632]: I0313 10:04:28.359516 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Mar 13 10:04:28 crc kubenswrapper[4632]: I0313 10:04:28.981173 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:29 crc kubenswrapper[4632]: I0313 10:04:29.980610 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:30 crc kubenswrapper[4632]: I0313 10:04:30.981214 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:31 crc kubenswrapper[4632]: E0313 10:04:31.624726 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 13 10:04:31 crc kubenswrapper[4632]: I0313 10:04:31.910350 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:31 crc kubenswrapper[4632]: I0313 10:04:31.912163 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:31 crc kubenswrapper[4632]: I0313 10:04:31.912230 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:31 crc kubenswrapper[4632]: I0313 10:04:31.912255 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:31 crc kubenswrapper[4632]: I0313 10:04:31.912309 4632 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 13 10:04:31 crc kubenswrapper[4632]: E0313 10:04:31.917879 4632 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Mar 13 10:04:31 crc kubenswrapper[4632]: I0313 10:04:31.983223 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:32 crc kubenswrapper[4632]: W0313 10:04:32.145577 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:32 crc kubenswrapper[4632]: E0313 10:04:32.145653 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 13 10:04:32 crc kubenswrapper[4632]: I0313 10:04:32.980233 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:34 crc kubenswrapper[4632]: I0313 10:04:34.010654 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:34 crc kubenswrapper[4632]: I0313 10:04:34.979471 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:35 crc kubenswrapper[4632]: I0313 10:04:35.032131 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 13 10:04:35 crc kubenswrapper[4632]: I0313 10:04:35.032372 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:35 crc kubenswrapper[4632]: I0313 10:04:35.034166 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:35 crc kubenswrapper[4632]: I0313 10:04:35.034207 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:35 crc kubenswrapper[4632]: I0313 10:04:35.034219 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:35 crc kubenswrapper[4632]: I0313 10:04:35.034822 4632 scope.go:117] "RemoveContainer" containerID="6b2d0d7e12e3475e3d8c895d2175381ba90230c85db30b582d19853ce1e4ba54"
Mar 13 10:04:35 crc kubenswrapper[4632]: E0313 10:04:35.035079 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 13 10:04:35 crc kubenswrapper[4632]: W0313 10:04:35.415562 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Mar 13 10:04:35 crc kubenswrapper[4632]: E0313 10:04:35.415641 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 13 10:04:35 crc kubenswrapper[4632]: I0313 10:04:35.980714 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.089144 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.089379 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.091344 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.091715 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.091742 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.092681 4632 scope.go:117] "RemoveContainer" containerID="6b2d0d7e12e3475e3d8c895d2175381ba90230c85db30b582d19853ce1e4ba54"
Mar 13 10:04:36 crc kubenswrapper[4632]: E0313 10:04:36.092878 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.142356 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.142540 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.144113 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.144151 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.144162 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.146720 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.387355 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.388541 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.388587 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.388603 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:36 crc kubenswrapper[4632]: I0313 10:04:36.979587 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:37 crc kubenswrapper[4632]: W0313 10:04:37.124968 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Mar 13 10:04:37 crc kubenswrapper[4632]: E0313 10:04:37.125054 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 13 10:04:37 crc kubenswrapper[4632]: I0313 10:04:37.995242 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:38 crc kubenswrapper[4632]: E0313 10:04:38.133367 4632 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 13 10:04:38 crc kubenswrapper[4632]: E0313 10:04:38.755145 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 13 10:04:38 crc kubenswrapper[4632]: I0313 10:04:38.918299 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:38 crc kubenswrapper[4632]: I0313 10:04:38.920076 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:38 crc kubenswrapper[4632]: I0313 10:04:38.920248 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:38 crc kubenswrapper[4632]: I0313 10:04:38.920346 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:38 crc kubenswrapper[4632]: I0313 10:04:38.920446 4632 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 13 10:04:38 crc kubenswrapper[4632]: E0313 10:04:38.972205 4632 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Mar 13 10:04:38 crc kubenswrapper[4632]: I0313 10:04:38.980007 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:39 crc kubenswrapper[4632]: I0313 10:04:39.987409 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:40 crc kubenswrapper[4632]: I0313 10:04:40.980743 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:41 crc kubenswrapper[4632]: I0313 10:04:41.358757 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 13 10:04:41 crc kubenswrapper[4632]: I0313 10:04:41.358906 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:41 crc kubenswrapper[4632]: I0313 10:04:41.369804 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:41 crc kubenswrapper[4632]: I0313 10:04:41.369885 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:41 crc kubenswrapper[4632]: I0313 10:04:41.369902 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:41 crc kubenswrapper[4632]: I0313 10:04:41.982792 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:43 crc kubenswrapper[4632]: I0313 10:04:43.039739 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:43 crc kubenswrapper[4632]: W0313 10:04:43.180366 4632 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Mar 13 10:04:43 crc kubenswrapper[4632]: E0313 10:04:43.180431 4632 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 13 10:04:43 crc kubenswrapper[4632]: I0313 10:04:43.980585 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:44 crc kubenswrapper[4632]: I0313 10:04:44.991597 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:45 crc kubenswrapper[4632]: E0313 10:04:45.761887 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 13 10:04:45 crc kubenswrapper[4632]: I0313 10:04:45.972563 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:45 crc kubenswrapper[4632]: I0313 10:04:45.974751 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:45 crc kubenswrapper[4632]: I0313 10:04:45.974831 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:45 crc kubenswrapper[4632]: I0313 10:04:45.974844 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:04:45 crc kubenswrapper[4632]: I0313 10:04:45.974881 4632 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 13 10:04:45 crc kubenswrapper[4632]: I0313 10:04:45.978997 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:45 crc kubenswrapper[4632]: E0313 10:04:45.979349 4632 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Mar 13 10:04:46 crc kubenswrapper[4632]: I0313 10:04:46.979405 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:47 crc kubenswrapper[4632]: I0313 10:04:47.979414 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:48 crc kubenswrapper[4632]: E0313 10:04:48.134489 4632 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 13 10:04:48 crc kubenswrapper[4632]: I0313 10:04:48.980333 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:49 crc kubenswrapper[4632]: I0313 10:04:49.982397 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 10:04:50 crc kubenswrapper[4632]: I0313 10:04:50.044206 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:04:50 crc kubenswrapper[4632]: I0313 10:04:50.046086 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:04:50 crc kubenswrapper[4632]: I0313 10:04:50.046133 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:04:50 crc kubenswrapper[4632]: I0313 10:04:50.046147 4632 kubelet_node_status.go:724] "Recording event message for
node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:50 crc kubenswrapper[4632]: I0313 10:04:50.046808 4632 scope.go:117] "RemoveContainer" containerID="6b2d0d7e12e3475e3d8c895d2175381ba90230c85db30b582d19853ce1e4ba54" Mar 13 10:04:50 crc kubenswrapper[4632]: I0313 10:04:50.979917 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:04:51 crc kubenswrapper[4632]: I0313 10:04:51.435253 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 13 10:04:51 crc kubenswrapper[4632]: I0313 10:04:51.438321 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae"} Mar 13 10:04:51 crc kubenswrapper[4632]: I0313 10:04:51.438815 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:51 crc kubenswrapper[4632]: I0313 10:04:51.440211 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:51 crc kubenswrapper[4632]: I0313 10:04:51.440436 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:51 crc kubenswrapper[4632]: I0313 10:04:51.440571 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:51 crc kubenswrapper[4632]: I0313 10:04:51.979668 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:04:52 crc kubenswrapper[4632]: I0313 10:04:52.443982 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 13 10:04:52 crc kubenswrapper[4632]: I0313 10:04:52.445584 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 13 10:04:52 crc kubenswrapper[4632]: I0313 10:04:52.448534 4632 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae" exitCode=255 Mar 13 10:04:52 crc kubenswrapper[4632]: I0313 10:04:52.448601 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae"} Mar 13 10:04:52 crc kubenswrapper[4632]: I0313 10:04:52.448668 4632 scope.go:117] "RemoveContainer" containerID="6b2d0d7e12e3475e3d8c895d2175381ba90230c85db30b582d19853ce1e4ba54" Mar 13 10:04:52 crc kubenswrapper[4632]: I0313 10:04:52.448966 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:52 crc kubenswrapper[4632]: I0313 10:04:52.450007 4632 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:52 crc kubenswrapper[4632]: I0313 10:04:52.450041 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:52 crc kubenswrapper[4632]: I0313 10:04:52.450051 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:52 crc kubenswrapper[4632]: I0313 10:04:52.450676 4632 scope.go:117] "RemoveContainer" containerID="9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae" Mar 13 10:04:52 crc kubenswrapper[4632]: E0313 10:04:52.450999 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 10:04:52 crc kubenswrapper[4632]: E0313 10:04:52.768114 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 13 10:04:52 crc kubenswrapper[4632]: I0313 10:04:52.979441 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:52 crc kubenswrapper[4632]: I0313 10:04:52.980112 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:04:52 crc kubenswrapper[4632]: I0313 10:04:52.980665 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:52 crc kubenswrapper[4632]: I0313 10:04:52.980727 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:52 crc kubenswrapper[4632]: I0313 10:04:52.980747 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:52 crc kubenswrapper[4632]: I0313 10:04:52.980819 4632 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 10:04:52 crc kubenswrapper[4632]: E0313 10:04:52.988231 4632 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 13 10:04:53 crc kubenswrapper[4632]: I0313 10:04:53.453726 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 13 10:04:53 crc kubenswrapper[4632]: I0313 10:04:53.979330 4632 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:04:54 crc kubenswrapper[4632]: I0313 10:04:54.124029 4632 csr.go:261] certificate signing request csr-f8d22 is approved, waiting to be issued Mar 13 10:04:54 crc kubenswrapper[4632]: 
I0313 10:04:54.134050 4632 csr.go:257] certificate signing request csr-f8d22 is issued Mar 13 10:04:54 crc kubenswrapper[4632]: I0313 10:04:54.163778 4632 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 13 10:04:54 crc kubenswrapper[4632]: I0313 10:04:54.643539 4632 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 13 10:04:55 crc kubenswrapper[4632]: I0313 10:04:55.031921 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:04:55 crc kubenswrapper[4632]: I0313 10:04:55.032473 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:55 crc kubenswrapper[4632]: I0313 10:04:55.034265 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:55 crc kubenswrapper[4632]: I0313 10:04:55.034331 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:55 crc kubenswrapper[4632]: I0313 10:04:55.034345 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:55 crc kubenswrapper[4632]: I0313 10:04:55.035185 4632 scope.go:117] "RemoveContainer" containerID="9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae" Mar 13 10:04:55 crc kubenswrapper[4632]: E0313 10:04:55.035396 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 10:04:55 crc kubenswrapper[4632]: I0313 10:04:55.135216 4632 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-13 20:31:44.592872983 +0000 UTC Mar 13 10:04:55 crc kubenswrapper[4632]: I0313 10:04:55.135331 4632 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6610h26m49.457544381s for next certificate rotation Mar 13 10:04:56 crc kubenswrapper[4632]: I0313 10:04:56.088911 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:04:56 crc kubenswrapper[4632]: I0313 10:04:56.089159 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:56 crc kubenswrapper[4632]: I0313 10:04:56.090381 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:56 crc kubenswrapper[4632]: I0313 10:04:56.090422 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:56 crc kubenswrapper[4632]: I0313 10:04:56.090434 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:56 crc kubenswrapper[4632]: I0313 10:04:56.090984 4632 scope.go:117] "RemoveContainer" containerID="9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae" Mar 13 10:04:56 crc kubenswrapper[4632]: E0313 10:04:56.091171 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 10:04:58 crc kubenswrapper[4632]: E0313 10:04:58.135647 4632 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 13 10:04:59 crc kubenswrapper[4632]: I0313 10:04:59.988602 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:04:59 crc kubenswrapper[4632]: I0313 10:04:59.990106 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:04:59 crc kubenswrapper[4632]: I0313 10:04:59.990143 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:04:59 crc kubenswrapper[4632]: I0313 10:04:59.990153 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:04:59 crc kubenswrapper[4632]: I0313 10:04:59.990241 4632 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 10:04:59 crc kubenswrapper[4632]: I0313 10:04:59.998230 4632 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 13 10:04:59 crc kubenswrapper[4632]: I0313 10:04:59.998661 4632 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 13 10:04:59 crc kubenswrapper[4632]: E0313 10:04:59.998685 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.002690 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.002745 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.002758 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.002775 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.002787 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:00Z","lastTransitionTime":"2026-03-13T10:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:00 crc kubenswrapper[4632]: E0313 10:05:00.018527 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.027545 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.027614 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.027629 4632 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.027652 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.027668 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:00Z","lastTransitionTime":"2026-03-13T10:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:00 crc kubenswrapper[4632]: E0313 10:05:00.050859 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.059971 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.060015 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.060026 4632 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.060044 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.060055 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:00Z","lastTransitionTime":"2026-03-13T10:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:00 crc kubenswrapper[4632]: E0313 10:05:00.071292 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.077505 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.077558 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.077574 4632 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.077600 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:00 crc kubenswrapper[4632]: I0313 10:05:00.077616 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:00Z","lastTransitionTime":"2026-03-13T10:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:00 crc kubenswrapper[4632]: E0313 10:05:00.087376 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:00 crc kubenswrapper[4632]: E0313 10:05:00.087528 4632 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 10:05:00 crc kubenswrapper[4632]: E0313 10:05:00.087562 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:00 crc kubenswrapper[4632]: E0313 10:05:00.188129 4632 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:00 crc kubenswrapper[4632]: E0313 10:05:00.288966 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:00 crc kubenswrapper[4632]: E0313 10:05:00.390169 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:00 crc kubenswrapper[4632]: E0313 10:05:00.490995 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:00 crc kubenswrapper[4632]: E0313 10:05:00.592238 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:00 crc kubenswrapper[4632]: E0313 10:05:00.693388 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:00 crc kubenswrapper[4632]: E0313 10:05:00.794090 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:00 crc kubenswrapper[4632]: E0313 10:05:00.895483 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:00 crc kubenswrapper[4632]: E0313 10:05:00.996930 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:01 crc kubenswrapper[4632]: E0313 10:05:01.097443 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:01 crc kubenswrapper[4632]: E0313 10:05:01.198262 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:01 crc kubenswrapper[4632]: E0313 10:05:01.301600 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:01 crc kubenswrapper[4632]: E0313 10:05:01.401878 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:01 crc kubenswrapper[4632]: E0313 10:05:01.502818 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:01 crc kubenswrapper[4632]: E0313 10:05:01.603282 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:01 crc kubenswrapper[4632]: E0313 10:05:01.703553 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:01 crc kubenswrapper[4632]: E0313 10:05:01.804332 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:01 crc kubenswrapper[4632]: E0313 10:05:01.905392 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:02 crc kubenswrapper[4632]: E0313 10:05:02.006631 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:02 crc kubenswrapper[4632]: E0313 10:05:02.107756 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:02 crc kubenswrapper[4632]: E0313 10:05:02.208285 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:02 crc kubenswrapper[4632]: E0313 
10:05:02.308869 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:02 crc kubenswrapper[4632]: E0313 10:05:02.409341 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:02 crc kubenswrapper[4632]: E0313 10:05:02.510048 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:02 crc kubenswrapper[4632]: E0313 10:05:02.610564 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:02 crc kubenswrapper[4632]: E0313 10:05:02.710840 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:02 crc kubenswrapper[4632]: E0313 10:05:02.811655 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:02 crc kubenswrapper[4632]: E0313 10:05:02.912774 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:03 crc kubenswrapper[4632]: E0313 10:05:03.013565 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:03 crc kubenswrapper[4632]: E0313 10:05:03.115150 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:03 crc kubenswrapper[4632]: E0313 10:05:03.216223 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:03 crc kubenswrapper[4632]: E0313 10:05:03.317798 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:03 crc kubenswrapper[4632]: E0313 10:05:03.418475 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:03 crc kubenswrapper[4632]: E0313 10:05:03.519075 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:03 crc kubenswrapper[4632]: E0313 10:05:03.619692 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:03 crc kubenswrapper[4632]: E0313 10:05:03.720824 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:03 crc kubenswrapper[4632]: E0313 10:05:03.821795 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:03 crc kubenswrapper[4632]: E0313 10:05:03.922488 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:04 crc kubenswrapper[4632]: E0313 10:05:04.023331 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:04 crc kubenswrapper[4632]: E0313 10:05:04.123885 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:04 crc kubenswrapper[4632]: E0313 10:05:04.224384 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:04 crc kubenswrapper[4632]: E0313 10:05:04.324889 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:04 crc 
kubenswrapper[4632]: E0313 10:05:04.425715 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:04 crc kubenswrapper[4632]: E0313 10:05:04.526400 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:04 crc kubenswrapper[4632]: E0313 10:05:04.626981 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:04 crc kubenswrapper[4632]: E0313 10:05:04.727561 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:04 crc kubenswrapper[4632]: E0313 10:05:04.827833 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:04 crc kubenswrapper[4632]: E0313 10:05:04.927981 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:05 crc kubenswrapper[4632]: E0313 10:05:05.028686 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:05 crc kubenswrapper[4632]: E0313 10:05:05.129849 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:05 crc kubenswrapper[4632]: E0313 10:05:05.230986 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:05 crc kubenswrapper[4632]: E0313 10:05:05.331524 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:05 crc kubenswrapper[4632]: E0313 10:05:05.431851 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:05 crc kubenswrapper[4632]: E0313 10:05:05.532820 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:05 crc kubenswrapper[4632]: E0313 10:05:05.633058 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:05 crc kubenswrapper[4632]: E0313 10:05:05.734179 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:05 crc kubenswrapper[4632]: E0313 10:05:05.834642 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:05 crc kubenswrapper[4632]: E0313 10:05:05.935520 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:06 crc kubenswrapper[4632]: E0313 10:05:06.036159 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:06 crc kubenswrapper[4632]: E0313 10:05:06.136866 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:06 crc kubenswrapper[4632]: E0313 10:05:06.237094 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:06 crc kubenswrapper[4632]: E0313 10:05:06.337669 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:06 crc kubenswrapper[4632]: E0313 10:05:06.438895 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 
13 10:05:06 crc kubenswrapper[4632]: E0313 10:05:06.539670 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:06 crc kubenswrapper[4632]: E0313 10:05:06.640820 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:06 crc kubenswrapper[4632]: E0313 10:05:06.741816 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:06 crc kubenswrapper[4632]: E0313 10:05:06.843121 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:06 crc kubenswrapper[4632]: E0313 10:05:06.943498 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:07 crc kubenswrapper[4632]: E0313 10:05:07.044096 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:07 crc kubenswrapper[4632]: E0313 10:05:07.144888 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:07 crc kubenswrapper[4632]: E0313 10:05:07.245071 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:07 crc kubenswrapper[4632]: E0313 10:05:07.345321 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:07 crc kubenswrapper[4632]: E0313 10:05:07.445844 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:07 crc kubenswrapper[4632]: E0313 10:05:07.546514 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:07 crc kubenswrapper[4632]: E0313 10:05:07.647426 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:07 crc kubenswrapper[4632]: E0313 10:05:07.747622 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:07 crc kubenswrapper[4632]: E0313 10:05:07.848526 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:07 crc kubenswrapper[4632]: E0313 10:05:07.948924 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:08 crc kubenswrapper[4632]: I0313 10:05:08.044071 4632 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:05:08 crc kubenswrapper[4632]: I0313 10:05:08.045592 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:08 crc kubenswrapper[4632]: I0313 10:05:08.045650 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:08 crc kubenswrapper[4632]: I0313 10:05:08.045665 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:08 crc kubenswrapper[4632]: E0313 10:05:08.049066 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:08 crc kubenswrapper[4632]: E0313 10:05:08.136067 4632 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to 
get node info: node \"crc\" not found" Mar 13 10:05:08 crc kubenswrapper[4632]: E0313 10:05:08.150149 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:08 crc kubenswrapper[4632]: E0313 10:05:08.250505 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:08 crc kubenswrapper[4632]: E0313 10:05:08.351616 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:08 crc kubenswrapper[4632]: E0313 10:05:08.452708 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:08 crc kubenswrapper[4632]: E0313 10:05:08.553055 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:08 crc kubenswrapper[4632]: E0313 10:05:08.653604 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:08 crc kubenswrapper[4632]: E0313 10:05:08.754449 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:08 crc kubenswrapper[4632]: E0313 10:05:08.855192 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:08 crc kubenswrapper[4632]: E0313 10:05:08.956022 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:09 crc kubenswrapper[4632]: E0313 10:05:09.056595 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:09 crc kubenswrapper[4632]: E0313 10:05:09.157484 4632 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.249331 4632 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.259965 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.260026 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.260041 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.260062 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.260076 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:09Z","lastTransitionTime":"2026-03-13T10:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.363394 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.363441 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.363454 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.363472 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.363484 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:09Z","lastTransitionTime":"2026-03-13T10:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.466521 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.466556 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.466565 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.466579 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.466588 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:09Z","lastTransitionTime":"2026-03-13T10:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.570353 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.570427 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.570443 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.570468 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.570483 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:09Z","lastTransitionTime":"2026-03-13T10:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.674091 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.674170 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.674191 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.674219 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.674242 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:09Z","lastTransitionTime":"2026-03-13T10:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.777507 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.777575 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.777584 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.777598 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.777630 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:09Z","lastTransitionTime":"2026-03-13T10:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.880686 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.880786 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.880806 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.880826 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.880836 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:09Z","lastTransitionTime":"2026-03-13T10:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.983410 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.983473 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.983484 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.983498 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:09 crc kubenswrapper[4632]: I0313 10:05:09.983507 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:09Z","lastTransitionTime":"2026-03-13T10:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.050352 4632 apiserver.go:52] "Watching apiserver" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.086512 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.086576 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.086589 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.086612 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.086625 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:10Z","lastTransitionTime":"2026-03-13T10:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.087653 4632 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.088052 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2","openshift-multus/multus-additional-cni-plugins-qlc8m","openshift-multus/multus-gqf22","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-n55jt","openshift-multus/network-metrics-daemon-z2vlz","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-operator/iptables-alerter-4ln5h","openshift-ovn-kubernetes/ovnkube-node-qb725","openshift-image-registry/node-ca-zwlc8","openshift-machine-config-operator/machine-config-daemon-zkscb","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.088410 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.088453 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.088491 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.088576 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.088581 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.088527 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.088971 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.089060 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.089095 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.089997 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.090813 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-n55jt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.090842 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.091365 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.091423 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-zwlc8" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.091478 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.091876 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.091963 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.092046 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.096915 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.097183 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.097396 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.097918 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.101607 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.102081 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.102146 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.102081 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.102679 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.102689 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.102683 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.102847 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.103126 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.103192 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.103397 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.103421 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.103677 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.104158 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.104357 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 
10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.104527 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.104760 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.104880 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.105098 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.105162 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.105170 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.105237 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.105378 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.105391 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.105555 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.105705 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.105804 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.105805 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.105990 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.106038 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.106096 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.106443 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.106643 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.123558 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.138063 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.148818 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.164649 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.169155 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.169191 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.169204 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.169222 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.169233 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:10Z","lastTransitionTime":"2026-03-13T10:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.179213 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.179261 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.179603 4632 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.183206 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.183241 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.183257 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.183272 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.183282 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:10Z","lastTransitionTime":"2026-03-13T10:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.192321 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.195632 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.195663 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.195671 4632 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.195684 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.195694 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:10Z","lastTransitionTime":"2026-03-13T10:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.196765 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.204465 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc 
kubenswrapper[4632]: E0313 10:05:10.204544 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"[...]\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.207667 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.207708 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.207719 4632 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.207736 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.207746 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:10Z","lastTransitionTime":"2026-03-13T10:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.214263 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.217778 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.221124 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.221159 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.221172 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.221188 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.221199 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:10Z","lastTransitionTime":"2026-03-13T10:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.230080 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.230091 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.230722 4632 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.233403 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.233434 4632 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.233447 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.233468 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.233481 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:10Z","lastTransitionTime":"2026-03-13T10:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.239015 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.248648 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.258253 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.264029 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.264263 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.264373 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.264482 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.264576 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.264680 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.264792 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.264892 
4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.265022 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.265128 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.265219 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.265321 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.265419 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.265519 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.265613 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.265772 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.265896 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.266049 4632 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.266554 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.266987 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.267288 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.267584 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.267692 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.267785 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.267879 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.268369 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.268834 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.269115 4632 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.269308 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.269844 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.270546 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.270675 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.270800 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.270907 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.264680 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.264775 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.264930 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.265203 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.265626 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.265914 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.270917 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.266125 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.266203 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.266306 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.266518 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.266713 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.267117 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.267955 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.267930 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.267973 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.268202 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.268559 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.268600 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.268957 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.269221 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.269385 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.269449 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.269490 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.269513 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.270207 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.270497 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.270503 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.271650 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.271684 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.271706 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.271728 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.271751 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.271776 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.271801 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.271820 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.271837 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.271853 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.271869 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.271890 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.271932 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272038 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272059 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272076 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272093 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272111 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272141 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272217 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272248 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272270 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272293 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272312 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272327 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272352 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272369 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272387 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272409 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272427 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272443 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272460 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272476 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272492 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272509 4632 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272525 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272540 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272556 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272573 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272587 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272601 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272616 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272631 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272822 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272875 4632 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272903 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272923 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272969 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.272994 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273011 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273030 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273049 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273066 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273081 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273104 4632 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273127 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273151 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273175 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273198 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273218 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273239 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273265 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273293 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273321 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 
10:05:10.273363 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273389 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273406 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273430 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273445 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273461 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273477 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273499 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273530 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273561 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 13 10:05:10 crc 
kubenswrapper[4632]: I0313 10:05:10.273586 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273604 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273620 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273636 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273651 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273668 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273724 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273741 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273757 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273774 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 
13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273791 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273807 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273830 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273853 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273877 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.273901 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274196 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274234 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274261 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274293 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274319 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274344 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274370 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274408 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274435 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274461 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274487 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274543 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274600 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274667 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274699 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274725 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274752 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274778 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274803 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274829 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274853 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274878 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274903 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274927 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.274969 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275017 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275044 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275072 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275100 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275126 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275151 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275176 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275200 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275222 4632 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275251 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275277 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275419 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275453 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275479 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275514 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275545 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275570 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275598 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" 
(UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275623 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275651 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275676 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275699 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275725 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275750 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275777 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275804 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275830 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275856 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275881 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275908 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275962 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.275988 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276013 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276091 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276121 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276150 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276177 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276206 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276233 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276261 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276290 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276315 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276359 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276386 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276413 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276437 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276513 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d77b18a7-7ad9-4bf5-bff5-da45878af7f4-rootfs\") pod \"machine-config-daemon-zkscb\" (UID: \"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\") " pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276545 4632 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnh6w\" (UniqueName: \"kubernetes.io/projected/d77b18a7-7ad9-4bf5-bff5-da45878af7f4-kube-api-access-vnh6w\") pod \"machine-config-daemon-zkscb\" (UID: \"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\") " pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276573 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b054ca08-1d09-4eca-a608-eb5b9323959a-cni-binary-copy\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276596 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b054ca08-1d09-4eca-a608-eb5b9323959a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276627 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276655 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b0c542d5-8c38-4243-8af7-cfc0d8e22773-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kbtt2\" (UID: \"b0c542d5-8c38-4243-8af7-cfc0d8e22773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276711 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276739 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-run-netns\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276762 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-openvswitch\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276798 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-ovnkube-config\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276821 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b40c6b3-0061-4224-82d5-3ccf67998722-ovn-node-metrics-cert\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276845 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276871 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9a50974e-f938-40f7-ace5-2a3b4cb1f3e7-serviceca\") pod \"node-ca-zwlc8\" (UID: \"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\") " pod="openshift-image-registry/node-ca-zwlc8" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276896 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9s5s\" (UniqueName: \"kubernetes.io/projected/b054ca08-1d09-4eca-a608-eb5b9323959a-kube-api-access-l9s5s\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276918 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-log-socket\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276964 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.276992 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-var-lib-openvswitch\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277018 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj6cl\" (UniqueName: \"kubernetes.io/projected/3b40c6b3-0061-4224-82d5-3ccf67998722-kube-api-access-dj6cl\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" 
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277053 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277080 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277108 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277161 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-var-lib-cni-bin\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277191 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277219 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b0c542d5-8c38-4243-8af7-cfc0d8e22773-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kbtt2\" (UID: \"b0c542d5-8c38-4243-8af7-cfc0d8e22773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277244 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-system-cni-dir\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277305 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b054ca08-1d09-4eca-a608-eb5b9323959a-os-release\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277328 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-cni-netd\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277353 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq5zl\" (UniqueName: \"kubernetes.io/projected/9a50974e-f938-40f7-ace5-2a3b4cb1f3e7-kube-api-access-mq5zl\") pod \"node-ca-zwlc8\" (UID: \"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\") " pod="openshift-image-registry/node-ca-zwlc8" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277569 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-multus-cni-dir\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277646 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277757 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4ec8e301-3037-4de0-94d2-32c49709660e-cni-binary-copy\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277787 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-multus-socket-dir-parent\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277814 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-var-lib-cni-multus\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277842 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277867 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-node-log\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277887 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a50974e-f938-40f7-ace5-2a3b4cb1f3e7-host\") pod \"node-ca-zwlc8\" (UID: \"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\") " pod="openshift-image-registry/node-ca-zwlc8" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277909 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d5c4\" (UniqueName: \"kubernetes.io/projected/4ec8e301-3037-4de0-94d2-32c49709660e-kube-api-access-8d5c4\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.277995 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d77b18a7-7ad9-4bf5-bff5-da45878af7f4-mcd-auth-proxy-config\") pod \"machine-config-daemon-zkscb\" (UID: \"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\") " pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.278053 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-etc-openvswitch\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.278086 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-cni-bin\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.278595 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7bcc\" (UniqueName: \"kubernetes.io/projected/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-kube-api-access-q7bcc\") pod \"network-metrics-daemon-z2vlz\" (UID: \"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\") " pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.278639 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.278670 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b054ca08-1d09-4eca-a608-eb5b9323959a-system-cni-dir\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.278695 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b054ca08-1d09-4eca-a608-eb5b9323959a-cnibin\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " 
pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.278760 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.278829 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-systemd-units\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.278861 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-systemd\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.278889 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4ec8e301-3037-4de0-94d2-32c49709660e-multus-daemon-config\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.278963 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-ovnkube-script-lib\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.278995 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-var-lib-kubelet\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279025 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279052 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b29b9ad7-8cc9-434f-8731-a86265c383fd-hosts-file\") pod \"node-resolver-n55jt\" (UID: \"b29b9ad7-8cc9-434f-8731-a86265c383fd\") " pod="openshift-dns/node-resolver-n55jt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279077 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-run-k8s-cni-cncf-io\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279129 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffbwk\" (UniqueName: \"kubernetes.io/projected/b0c542d5-8c38-4243-8af7-cfc0d8e22773-kube-api-access-ffbwk\") pod \"ovnkube-control-plane-749d76644c-kbtt2\" (UID: \"b0c542d5-8c38-4243-8af7-cfc0d8e22773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279153 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-run-multus-certs\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279175 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-kubelet\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279197 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-ovn\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279218 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-run-ovn-kubernetes\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279241 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b0c542d5-8c38-4243-8af7-cfc0d8e22773-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kbtt2\" (UID: \"b0c542d5-8c38-4243-8af7-cfc0d8e22773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279262 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d77b18a7-7ad9-4bf5-bff5-da45878af7f4-proxy-tls\") pod \"machine-config-daemon-zkscb\" (UID: \"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\") " pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279446 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-run-netns\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc 
kubenswrapper[4632]: I0313 10:05:10.279474 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-os-release\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279547 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-hostroot\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279619 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-multus-conf-dir\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279653 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279679 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-slash\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279703 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-env-overrides\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279726 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs\") pod \"network-metrics-daemon-z2vlz\" (UID: \"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\") " pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279778 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-etc-kubernetes\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279829 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgn9f\" (UniqueName: \"kubernetes.io/projected/b29b9ad7-8cc9-434f-8731-a86265c383fd-kube-api-access-pgn9f\") pod \"node-resolver-n55jt\" (UID: \"b29b9ad7-8cc9-434f-8731-a86265c383fd\") " pod="openshift-dns/node-resolver-n55jt" Mar 
13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279859 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.279923 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-cnibin\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.280840 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b054ca08-1d09-4eca-a608-eb5b9323959a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.280977 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.280996 4632 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281013 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281027 4632 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281043 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281059 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281073 4632 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281087 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281100 4632 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" 
DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281114 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281126 4632 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281138 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281151 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281163 4632 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281176 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281189 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281202 4632 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281214 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281228 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281241 4632 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281254 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281266 4632 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 
10:05:10.281278 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281290 4632 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281302 4632 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281316 4632 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.281330 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.282385 4632 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.282601 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.283051 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:10.783028261 +0000 UTC m=+84.805558394 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.288449 4632 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.288550 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:10.788533611 +0000 UTC m=+84.811063744 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.289626 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.289688 4632 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.289780 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.289908 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:05:10.789893293 +0000 UTC m=+84.812423436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.290183 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.293538 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.294120 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.294673 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.295666 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.296292 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.296909 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.298864 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.303275 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.304214 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.304246 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.304582 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.304730 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.305349 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.307141 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.307427 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.307455 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.307667 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.308597 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.308659 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.308820 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.309010 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.309119 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.309643 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.309661 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.310362 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.310390 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.310638 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.310655 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.310754 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.310817 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.310875 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.310891 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.311666 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.311748 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.311782 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.312248 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.312378 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.315216 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.315233 4632 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.315279 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.315301 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:10.815276866 +0000 UTC m=+84.837806999 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.315489 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.315750 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.315790 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.315910 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.315969 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.312671 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.312998 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.313011 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.313460 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.313527 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.313628 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.313646 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.313872 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.314111 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.314130 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.314380 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.314477 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.314742 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.314818 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.315853 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.316279 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.316435 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.311119 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.315867 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.316694 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.316771 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.316782 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.316850 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.316873 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.317203 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.312640 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.317250 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.317312 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.317413 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.317450 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.317487 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.318307 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.318327 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.318382 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.318410 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.318441 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.318548 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.318680 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.318791 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.318891 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.318930 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.319115 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.319129 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.319190 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.319233 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.319241 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.319255 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.319365 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.319456 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.319531 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.319703 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.320154 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.320243 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.320269 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.320282 4632 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.320318 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:10.8203068 +0000 UTC m=+84.842836933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.320379 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.321067 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.321182 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.321654 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.325692 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.326018 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.326342 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.326488 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.326729 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.328097 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.328363 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.328769 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.328768 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.328878 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.328974 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.329017 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.329038 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.329208 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.329271 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.329280 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.329290 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.329670 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.329778 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.330220 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.330231 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.330546 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.330594 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.330776 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.330864 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.330938 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.331072 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.331183 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.331672 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.331862 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.332127 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.332159 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.332332 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.332358 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.332359 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.332681 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.332814 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.332902 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.333014 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.333097 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.333172 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.333742 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.333738 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.334299 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.334357 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.335395 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.335416 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.335481 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.335609 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.335792 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.336076 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.336108 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.336216 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.336411 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.336503 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.337765 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.337801 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.337816 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.337831 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.337842 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:10Z","lastTransitionTime":"2026-03-13T10:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.338355 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.338541 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.338577 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.338892 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.339534 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.339771 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.339882 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.340119 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.340319 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.340430 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.340473 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.341195 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.342713 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.344264 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.345087 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.345368 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.345545 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.355505 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.362072 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.363281 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.373028 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.382694 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-log-socket\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.382725 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.382742 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9a50974e-f938-40f7-ace5-2a3b4cb1f3e7-serviceca\") pod \"node-ca-zwlc8\" (UID: \"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\") " pod="openshift-image-registry/node-ca-zwlc8" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.382772 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9s5s\" (UniqueName: \"kubernetes.io/projected/b054ca08-1d09-4eca-a608-eb5b9323959a-kube-api-access-l9s5s\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.382793 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-log-socket\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.382796 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-var-lib-openvswitch\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.382847 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj6cl\" (UniqueName: \"kubernetes.io/projected/3b40c6b3-0061-4224-82d5-3ccf67998722-kube-api-access-dj6cl\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.382865 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-system-cni-dir\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.382880 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-var-lib-cni-bin\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " 
pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.382924 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b0c542d5-8c38-4243-8af7-cfc0d8e22773-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kbtt2\" (UID: \"b0c542d5-8c38-4243-8af7-cfc0d8e22773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.382967 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b054ca08-1d09-4eca-a608-eb5b9323959a-os-release\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.382989 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-cni-netd\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.383009 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq5zl\" (UniqueName: \"kubernetes.io/projected/9a50974e-f938-40f7-ace5-2a3b4cb1f3e7-kube-api-access-mq5zl\") pod \"node-ca-zwlc8\" (UID: \"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\") " pod="openshift-image-registry/node-ca-zwlc8" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.383034 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.383085 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-system-cni-dir\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.383104 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-var-lib-openvswitch\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.383372 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b054ca08-1d09-4eca-a608-eb5b9323959a-os-release\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.383413 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-cni-netd\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.383448 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-var-lib-cni-bin\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.383798 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-multus-cni-dir\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.383834 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-multus-socket-dir-parent\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.383852 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9a50974e-f938-40f7-ace5-2a3b4cb1f3e7-serviceca\") pod \"node-ca-zwlc8\" (UID: \"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\") " pod="openshift-image-registry/node-ca-zwlc8" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.383883 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4ec8e301-3037-4de0-94d2-32c49709660e-cni-binary-copy\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.383903 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-var-lib-cni-multus\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.384043 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-multus-socket-dir-parent\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.384073 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-var-lib-cni-multus\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.384059 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-multus-cni-dir\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.383924 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/d77b18a7-7ad9-4bf5-bff5-da45878af7f4-mcd-auth-proxy-config\") pod \"machine-config-daemon-zkscb\" (UID: \"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\") " pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.384683 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-node-log\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.384829 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a50974e-f938-40f7-ace5-2a3b4cb1f3e7-host\") pod \"node-ca-zwlc8\" (UID: \"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\") " pod="openshift-image-registry/node-ca-zwlc8" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.384855 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d5c4\" (UniqueName: \"kubernetes.io/projected/4ec8e301-3037-4de0-94d2-32c49709660e-kube-api-access-8d5c4\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.385004 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-etc-openvswitch\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.385147 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-cni-bin\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.385177 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7bcc\" (UniqueName: \"kubernetes.io/projected/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-kube-api-access-q7bcc\") pod \"network-metrics-daemon-z2vlz\" (UID: \"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\") " pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.385267 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4ec8e301-3037-4de0-94d2-32c49709660e-cni-binary-copy\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.385309 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4ec8e301-3037-4de0-94d2-32c49709660e-multus-daemon-config\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.385427 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b054ca08-1d09-4eca-a608-eb5b9323959a-system-cni-dir\") pod 
\"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.385566 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b054ca08-1d09-4eca-a608-eb5b9323959a-cnibin\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.385605 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-systemd-units\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.386411 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-systemd\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.386440 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-ovnkube-script-lib\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.386583 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-run-k8s-cni-cncf-io\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.386611 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-var-lib-kubelet\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.386743 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-systemd-units\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.386757 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b29b9ad7-8cc9-434f-8731-a86265c383fd-hosts-file\") pod \"node-resolver-n55jt\" (UID: \"b29b9ad7-8cc9-434f-8731-a86265c383fd\") " pod="openshift-dns/node-resolver-n55jt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.386896 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b0c542d5-8c38-4243-8af7-cfc0d8e22773-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kbtt2\" (UID: \"b0c542d5-8c38-4243-8af7-cfc0d8e22773\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.386965 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffbwk\" (UniqueName: \"kubernetes.io/projected/b0c542d5-8c38-4243-8af7-cfc0d8e22773-kube-api-access-ffbwk\") pod \"ovnkube-control-plane-749d76644c-kbtt2\" (UID: \"b0c542d5-8c38-4243-8af7-cfc0d8e22773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.386994 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-run-multus-certs\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.387071 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b29b9ad7-8cc9-434f-8731-a86265c383fd-hosts-file\") pod \"node-resolver-n55jt\" (UID: \"b29b9ad7-8cc9-434f-8731-a86265c383fd\") " pod="openshift-dns/node-resolver-n55jt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.387067 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-kubelet\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.387136 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-kubelet\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.387159 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4ec8e301-3037-4de0-94d2-32c49709660e-multus-daemon-config\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.387172 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b054ca08-1d09-4eca-a608-eb5b9323959a-system-cni-dir\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.387217 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b054ca08-1d09-4eca-a608-eb5b9323959a-cnibin\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.387232 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-ovn\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc 
kubenswrapper[4632]: I0313 10:05:10.387269 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-run-ovn-kubernetes\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.384853 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-node-log\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.384764 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d77b18a7-7ad9-4bf5-bff5-da45878af7f4-mcd-auth-proxy-config\") pod \"machine-config-daemon-zkscb\" (UID: \"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\") " pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.384888 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a50974e-f938-40f7-ace5-2a3b4cb1f3e7-host\") pod \"node-ca-zwlc8\" (UID: \"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\") " pod="openshift-image-registry/node-ca-zwlc8" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.387473 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-multus-conf-dir\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.387485 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-run-ovn-kubernetes\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.387508 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d77b18a7-7ad9-4bf5-bff5-da45878af7f4-proxy-tls\") pod \"machine-config-daemon-zkscb\" (UID: \"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\") " pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.387585 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-run-netns\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.387704 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-multus-conf-dir\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.387825 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-run-k8s-cni-cncf-io\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.387612 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-os-release\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.387871 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-hostroot\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.387930 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-ovn\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.388144 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-etc-openvswitch\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.388210 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-systemd\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.388248 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-cni-bin\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.388507 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b0c542d5-8c38-4243-8af7-cfc0d8e22773-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kbtt2\" (UID: \"b0c542d5-8c38-4243-8af7-cfc0d8e22773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.388570 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-run-multus-certs\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.388605 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgn9f\" (UniqueName: \"kubernetes.io/projected/b29b9ad7-8cc9-434f-8731-a86265c383fd-kube-api-access-pgn9f\") pod \"node-resolver-n55jt\" (UID: 
\"b29b9ad7-8cc9-434f-8731-a86265c383fd\") " pod="openshift-dns/node-resolver-n55jt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.388659 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-slash\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.388719 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-env-overrides\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.388811 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-os-release\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.388850 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-var-lib-kubelet\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.388879 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-run-netns\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.388744 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs\") pod \"network-metrics-daemon-z2vlz\" (UID: \"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\") " pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.388899 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-slash\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.388912 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-hostroot\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.388974 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-etc-kubernetes\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.389020 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b054ca08-1d09-4eca-a608-eb5b9323959a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.389046 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.389071 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-cnibin\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.389124 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-run-netns\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.389148 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d77b18a7-7ad9-4bf5-bff5-da45878af7f4-rootfs\") pod \"machine-config-daemon-zkscb\" (UID: \"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\") " pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.389173 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnh6w\" (UniqueName: \"kubernetes.io/projected/d77b18a7-7ad9-4bf5-bff5-da45878af7f4-kube-api-access-vnh6w\") pod \"machine-config-daemon-zkscb\" (UID: \"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\") " pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.389195 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b054ca08-1d09-4eca-a608-eb5b9323959a-cni-binary-copy\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.389217 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b054ca08-1d09-4eca-a608-eb5b9323959a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.389243 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.389718 4632 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-env-overrides\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.389723 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-ovnkube-script-lib\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.389801 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-host-run-netns\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.389806 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b0c542d5-8c38-4243-8af7-cfc0d8e22773-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kbtt2\" (UID: \"b0c542d5-8c38-4243-8af7-cfc0d8e22773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.389843 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b40c6b3-0061-4224-82d5-3ccf67998722-ovn-node-metrics-cert\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.389869 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-openvswitch\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.389894 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-ovnkube-config\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.390790 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b0c542d5-8c38-4243-8af7-cfc0d8e22773-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kbtt2\" (UID: \"b0c542d5-8c38-4243-8af7-cfc0d8e22773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.390867 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.390893 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-ovnkube-config\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.390952 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-cnibin\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.391671 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b054ca08-1d09-4eca-a608-eb5b9323959a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.391700 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-openvswitch\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.391934 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b054ca08-1d09-4eca-a608-eb5b9323959a-cni-binary-copy\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.397671 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.397979 4632 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.398097 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs podName:ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad nodeName:}" failed. No retries permitted until 2026-03-13 10:05:10.898062949 +0000 UTC m=+84.920593082 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs") pod "network-metrics-daemon-z2vlz" (UID: "ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.398547 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d77b18a7-7ad9-4bf5-bff5-da45878af7f4-rootfs\") pod \"machine-config-daemon-zkscb\" (UID: \"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\") " pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.399029 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b0c542d5-8c38-4243-8af7-cfc0d8e22773-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kbtt2\" (UID: \"b0c542d5-8c38-4243-8af7-cfc0d8e22773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.399100 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ec8e301-3037-4de0-94d2-32c49709660e-etc-kubernetes\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.399691 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b054ca08-1d09-4eca-a608-eb5b9323959a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400137 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400170 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400185 4632 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400200 4632 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400212 4632 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400230 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node 
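The paired E0313 entries above record the first failed mount of the metrics-certs secret (the Secret object is not yet registered in the kubelet's cache this early in the boot) and the retry gate that follows: nestedpendingoperations.go refuses to retry the operation for durationBeforeRetry = 500ms, and on consecutive failures that delay grows exponentially, which is why later retries of the same volume appear at widening intervals. A minimal Go sketch of such a doubling retry gate follows; the 500ms initial delay matches the log line, but the cap and all names here are illustrative assumptions, not kubelet's actual code.

    package main

    import (
        "fmt"
        "time"
    )

    // Illustrative constants: the initial delay matches the
    // "durationBeforeRetry 500ms" seen above; the cap is an assumption.
    const (
        initialBackoff = 500 * time.Millisecond
        maxBackoff     = 2 * time.Minute
    )

    // backoffGate mimics the "No retries permitted until ..." check: an
    // operation may run only after lastErrorTime + delay has passed, and
    // each consecutive error doubles the delay up to the cap.
    type backoffGate struct {
        delay         time.Duration
        lastErrorTime time.Time
    }

    func (g *backoffGate) allowed(now time.Time) bool {
        if g.delay == 0 {
            return true // never failed yet
        }
        return now.After(g.lastErrorTime.Add(g.delay))
    }

    func (g *backoffGate) recordError(now time.Time) {
        if g.delay == 0 {
            g.delay = initialBackoff
        } else {
            g.delay *= 2
            if g.delay > maxBackoff {
                g.delay = maxBackoff
            }
        }
        g.lastErrorTime = now
    }

    func main() {
        g := &backoffGate{}
        now := time.Now()
        for i := 0; i < 5; i++ {
            g.recordError(now)
            fmt.Printf("attempt %d failed; no retries permitted for %v\n", i+1, g.delay)
        }
        fmt.Println("retry allowed now?", g.allowed(now)) // false until the delay elapses
    }

Under this policy the retry schedule for the metrics-certs volume would be 500ms, 1s, 2s, 4s, ... after successive failures, which is consistent with the single 500ms gate visible at this point in the log.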
\"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400256 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400352 4632 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400370 4632 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400389 4632 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400433 4632 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400617 4632 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400638 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400654 4632 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400698 4632 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400883 4632 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400907 4632 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400962 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.400978 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.401224 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.401278 4632 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.401297 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.401311 4632 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.401328 4632 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.401553 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.401574 4632 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.401588 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402098 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402140 4632 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402165 4632 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402179 4632 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402198 4632 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402211 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402224 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402238 4632 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402316 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d77b18a7-7ad9-4bf5-bff5-da45878af7f4-proxy-tls\") pod \"machine-config-daemon-zkscb\" (UID: \"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\") " pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402249 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402358 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402390 4632 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402412 4632 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402613 4632 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402629 4632 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402644 4632 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402660 4632 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402680 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402694 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402708 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402727 4632 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402742 4632 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402818 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402835 4632 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402853 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402867 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.402987 4632 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403005 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403026 4632 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403040 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403129 4632 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403143 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403160 4632 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403174 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403190 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403208 4632 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403221 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403235 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403247 4632 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403268 4632 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403281 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403294 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403310 4632 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403328 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403340 4632 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403352 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403370 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403384 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403398 4632 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403410 4632 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403429 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403443 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403563 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403578 4632 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403596 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403608 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403621 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403634 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403654 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403667 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403680 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403697 4632 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403709 4632 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403722 4632 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403737 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403755 4632 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403768 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403780 4632 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403792 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403809 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403823 4632 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403836 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403850 4632 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403868 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403884 4632 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403896 4632 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403915 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403929 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403962 4632 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403976 4632 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.403997 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404011 4632 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404025 4632 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404038 4632 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404057 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404069 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404082 4632 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404099 4632 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404111 4632 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404123 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404135 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404151 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404159 4632 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404169 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404182 4632 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404198 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404211 4632 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404222 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404235 4632 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404252 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404266 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404279 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404295 4632 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404306 4632 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404318 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404328 4632 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404345 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404358 4632 reconciler_common.go:293] "Volume detached for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404371 4632 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404382 4632 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404397 4632 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404410 4632 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404421 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404436 4632 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404448 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404462 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404474 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404490 4632 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404503 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404518 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404532 4632 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404551 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404563 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404576 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404588 4632 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404605 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404618 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404631 4632 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404646 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404660 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404674 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404687 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404705 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404717 4632 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404729 4632 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404743 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404761 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404774 4632 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404787 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404802 4632 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.404818 4632 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.407051 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.407518 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj6cl\" (UniqueName: \"kubernetes.io/projected/3b40c6b3-0061-4224-82d5-3ccf67998722-kube-api-access-dj6cl\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.408265 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7bcc\" (UniqueName: \"kubernetes.io/projected/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-kube-api-access-q7bcc\") pod \"network-metrics-daemon-z2vlz\" (UID: \"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\") " pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.412211 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b40c6b3-0061-4224-82d5-3ccf67998722-ovn-node-metrics-cert\") pod \"ovnkube-node-qb725\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.412733 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq5zl\" (UniqueName: \"kubernetes.io/projected/9a50974e-f938-40f7-ace5-2a3b4cb1f3e7-kube-api-access-mq5zl\") pod \"node-ca-zwlc8\" (UID: \"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\") " pod="openshift-image-registry/node-ca-zwlc8" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.414195 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d5c4\" (UniqueName: \"kubernetes.io/projected/4ec8e301-3037-4de0-94d2-32c49709660e-kube-api-access-8d5c4\") pod \"multus-gqf22\" (UID: \"4ec8e301-3037-4de0-94d2-32c49709660e\") " pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.414425 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9s5s\" (UniqueName: \"kubernetes.io/projected/b054ca08-1d09-4eca-a608-eb5b9323959a-kube-api-access-l9s5s\") pod \"multus-additional-cni-plugins-qlc8m\" (UID: \"b054ca08-1d09-4eca-a608-eb5b9323959a\") " pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.416653 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffbwk\" (UniqueName: \"kubernetes.io/projected/b0c542d5-8c38-4243-8af7-cfc0d8e22773-kube-api-access-ffbwk\") pod \"ovnkube-control-plane-749d76644c-kbtt2\" (UID: \"b0c542d5-8c38-4243-8af7-cfc0d8e22773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.418931 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnh6w\" (UniqueName: \"kubernetes.io/projected/d77b18a7-7ad9-4bf5-bff5-da45878af7f4-kube-api-access-vnh6w\") pod \"machine-config-daemon-zkscb\" (UID: \"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\") " pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.419426 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgn9f\" (UniqueName: \"kubernetes.io/projected/b29b9ad7-8cc9-434f-8731-a86265c383fd-kube-api-access-pgn9f\") pod 
\"node-resolver-n55jt\" (UID: \"b29b9ad7-8cc9-434f-8731-a86265c383fd\") " pod="openshift-dns/node-resolver-n55jt" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.420713 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.425649 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:10 crc kubenswrapper[4632]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 13 10:05:10 crc kubenswrapper[4632]: set -o allexport Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: source /etc/kubernetes/apiserver-url.env Mar 13 10:05:10 crc kubenswrapper[4632]: else Mar 13 10:05:10 crc kubenswrapper[4632]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 13 10:05:10 crc kubenswrapper[4632]: exit 1 Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 13 10:05:10 crc kubenswrapper[4632]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,V
alueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:05:10 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.426958 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 13 10:05:10 crc 
kubenswrapper[4632]: I0313 10:05:10.431137 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.432730 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:10 crc kubenswrapper[4632]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ -f "/env/_master" ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: set -o allexport Mar 13 10:05:10 crc kubenswrapper[4632]: source "/env/_master" Mar 13 10:05:10 crc kubenswrapper[4632]: set +o allexport Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 13 10:05:10 crc kubenswrapper[4632]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 13 10:05:10 crc kubenswrapper[4632]: ho_enable="--enable-hybrid-overlay" Mar 13 10:05:10 crc kubenswrapper[4632]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 13 10:05:10 crc kubenswrapper[4632]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 13 10:05:10 crc kubenswrapper[4632]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 13 10:05:10 crc kubenswrapper[4632]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 13 10:05:10 crc kubenswrapper[4632]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 13 10:05:10 crc kubenswrapper[4632]: --webhook-host=127.0.0.1 \ Mar 13 10:05:10 crc kubenswrapper[4632]: --webhook-port=9743 \ Mar 13 10:05:10 crc kubenswrapper[4632]: ${ho_enable} \ Mar 13 10:05:10 crc kubenswrapper[4632]: --enable-interconnect \ Mar 13 10:05:10 crc kubenswrapper[4632]: --disable-approver \ Mar 13 10:05:10 crc kubenswrapper[4632]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 13 10:05:10 crc kubenswrapper[4632]: --wait-for-kubernetes-api=200s \ Mar 13 10:05:10 crc kubenswrapper[4632]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 13 10:05:10 crc kubenswrapper[4632]: --loglevel="${LOGLEVEL}" Mar 13 10:05:10 crc kubenswrapper[4632]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:05:10 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.437817 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:10 crc kubenswrapper[4632]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ -f "/env/_master" ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: set -o allexport Mar 13 10:05:10 crc kubenswrapper[4632]: source "/env/_master" Mar 13 10:05:10 crc kubenswrapper[4632]: set +o allexport Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 13 10:05:10 crc kubenswrapper[4632]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 13 10:05:10 crc kubenswrapper[4632]: --disable-webhook \ Mar 13 10:05:10 crc kubenswrapper[4632]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 13 10:05:10 crc kubenswrapper[4632]: --loglevel="${LOGLEVEL}" Mar 13 10:05:10 crc kubenswrapper[4632]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:05:10 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.438959 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.439420 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gqf22" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.440646 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.440701 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.440713 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.440772 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.440793 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:10Z","lastTransitionTime":"2026-03-13T10:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:10 crc kubenswrapper[4632]: W0313 10:05:10.443738 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-8b7cb438273be81daa37dee852b13428c2abbaceb49c8c8e2ae84fffcf8cf261 WatchSource:0}: Error finding container 8b7cb438273be81daa37dee852b13428c2abbaceb49c8c8e2ae84fffcf8cf261: Status 404 returned error can't find the container with id 8b7cb438273be81daa37dee852b13428c2abbaceb49c8c8e2ae84fffcf8cf261 Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.447834 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-n55jt" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.448373 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.450929 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.459727 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.460254 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:10 crc kubenswrapper[4632]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 13 10:05:10 crc kubenswrapper[4632]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 13 10:05:10 crc kubenswrapper[4632]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagatio
n:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d5c4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-gqf22_openshift-multus(4ec8e301-3037-4de0-94d2-32c49709660e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:05:10 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.461547 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-gqf22" podUID="4ec8e301-3037-4de0-94d2-32c49709660e" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.465821 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" Mar 13 10:05:10 crc kubenswrapper[4632]: W0313 10:05:10.466442 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb29b9ad7_8cc9_434f_8731_a86265c383fd.slice/crio-12fc45120502c7a3991aa07e36d421f9b60ca97adfa86accdf7d5f45f50ccde5 WatchSource:0}: Error finding container 12fc45120502c7a3991aa07e36d421f9b60ca97adfa86accdf7d5f45f50ccde5: Status 404 returned error can't find the container with id 12fc45120502c7a3991aa07e36d421f9b60ca97adfa86accdf7d5f45f50ccde5 Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.469128 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:10 crc kubenswrapper[4632]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 13 10:05:10 crc kubenswrapper[4632]: set -uo pipefail Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 13 10:05:10 crc kubenswrapper[4632]: HOSTS_FILE="/etc/hosts" Mar 13 10:05:10 crc kubenswrapper[4632]: TEMP_FILE="/etc/hosts.tmp" Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: # Make a temporary file with the old hosts file's attributes. Mar 13 10:05:10 crc kubenswrapper[4632]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 13 10:05:10 crc kubenswrapper[4632]: echo "Failed to preserve hosts file. Exiting." Mar 13 10:05:10 crc kubenswrapper[4632]: exit 1 Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: while true; do Mar 13 10:05:10 crc kubenswrapper[4632]: declare -A svc_ips Mar 13 10:05:10 crc kubenswrapper[4632]: for svc in "${services[@]}"; do Mar 13 10:05:10 crc kubenswrapper[4632]: # Fetch service IP from cluster dns if present. We make several tries Mar 13 10:05:10 crc kubenswrapper[4632]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 13 10:05:10 crc kubenswrapper[4632]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 13 10:05:10 crc kubenswrapper[4632]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 13 10:05:10 crc kubenswrapper[4632]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 13 10:05:10 crc kubenswrapper[4632]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 13 10:05:10 crc kubenswrapper[4632]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 13 10:05:10 crc kubenswrapper[4632]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 13 10:05:10 crc kubenswrapper[4632]: for i in ${!cmds[*]} Mar 13 10:05:10 crc kubenswrapper[4632]: do Mar 13 10:05:10 crc kubenswrapper[4632]: ips=($(eval "${cmds[i]}")) Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: svc_ips["${svc}"]="${ips[@]}" Mar 13 10:05:10 crc kubenswrapper[4632]: break Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: # Update /etc/hosts only if we get valid service IPs Mar 13 10:05:10 crc kubenswrapper[4632]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 13 10:05:10 crc kubenswrapper[4632]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 13 10:05:10 crc kubenswrapper[4632]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 13 10:05:10 crc kubenswrapper[4632]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 13 10:05:10 crc kubenswrapper[4632]: sleep 60 & wait Mar 13 10:05:10 crc kubenswrapper[4632]: continue Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: # Append resolver entries for services Mar 13 10:05:10 crc kubenswrapper[4632]: rc=0 Mar 13 10:05:10 crc kubenswrapper[4632]: for svc in "${!svc_ips[@]}"; do Mar 13 10:05:10 crc kubenswrapper[4632]: for ip in ${svc_ips[${svc}]}; do Mar 13 10:05:10 crc kubenswrapper[4632]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ $rc -ne 0 ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: sleep 60 & wait Mar 13 10:05:10 crc kubenswrapper[4632]: continue Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 13 10:05:10 crc kubenswrapper[4632]: # Replace /etc/hosts with our modified version if needed Mar 13 10:05:10 crc kubenswrapper[4632]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 13 10:05:10 crc kubenswrapper[4632]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: sleep 60 & wait Mar 13 10:05:10 crc kubenswrapper[4632]: unset svc_ips Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgn9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-n55jt_openshift-dns(b29b9ad7-8cc9-434f-8731-a86265c383fd): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:05:10 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.470380 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-n55jt" podUID="b29b9ad7-8cc9-434f-8731-a86265c383fd" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.472895 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-zwlc8" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.480574 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.482773 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnh6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.489378 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.490122 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnh6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.491886 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.499327 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9s5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-qlc8m_openshift-multus(b054ca08-1d09-4eca-a608-eb5b9323959a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.500489 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:10 crc kubenswrapper[4632]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 13 10:05:10 crc kubenswrapper[4632]: while [ true ]; Mar 13 10:05:10 crc kubenswrapper[4632]: do Mar 13 10:05:10 crc kubenswrapper[4632]: for f in $(ls /tmp/serviceca); do Mar 13 10:05:10 crc kubenswrapper[4632]: echo $f Mar 13 10:05:10 crc kubenswrapper[4632]: ca_file_path="/tmp/serviceca/${f}" Mar 13 10:05:10 crc kubenswrapper[4632]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 13 10:05:10 crc kubenswrapper[4632]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 13 10:05:10 crc kubenswrapper[4632]: if [ -e "${reg_dir_path}" ]; then Mar 13 10:05:10 crc kubenswrapper[4632]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 13 10:05:10 crc kubenswrapper[4632]: else Mar 13 10:05:10 crc kubenswrapper[4632]: mkdir $reg_dir_path Mar 13 10:05:10 crc kubenswrapper[4632]: cp $ca_file_path $reg_dir_path/ca.crt Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: for d in $(ls /etc/docker/certs.d); do Mar 13 10:05:10 crc kubenswrapper[4632]: echo $d Mar 13 10:05:10 crc kubenswrapper[4632]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 13 10:05:10 crc kubenswrapper[4632]: reg_conf_path="/tmp/serviceca/${dp}" Mar 13 10:05:10 crc kubenswrapper[4632]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 13 10:05:10 crc kubenswrapper[4632]: rm -rf /etc/docker/certs.d/$d Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: sleep 60 & wait ${!} Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq5zl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-zwlc8_openshift-image-registry(9a50974e-f938-40f7-ace5-2a3b4cb1f3e7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:05:10 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.500581 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" podUID="b054ca08-1d09-4eca-a608-eb5b9323959a" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.501651 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-zwlc8" podUID="9a50974e-f938-40f7-ace5-2a3b4cb1f3e7" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.504197 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"b6c337d9d2e58ca0167b79bc2a82693282b889a5339b46a5201132b60dec013f"} Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.506390 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-n55jt" event={"ID":"b29b9ad7-8cc9-434f-8731-a86265c383fd","Type":"ContainerStarted","Data":"12fc45120502c7a3991aa07e36d421f9b60ca97adfa86accdf7d5f45f50ccde5"} Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.506510 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnh6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.508610 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:10 crc kubenswrapper[4632]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 13 10:05:10 crc kubenswrapper[4632]: set -uo pipefail Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 13 10:05:10 crc kubenswrapper[4632]: HOSTS_FILE="/etc/hosts" Mar 13 10:05:10 crc kubenswrapper[4632]: TEMP_FILE="/etc/hosts.tmp" Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: # Make a temporary file with the old hosts file's attributes. Mar 13 10:05:10 crc kubenswrapper[4632]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 13 10:05:10 crc kubenswrapper[4632]: echo "Failed to preserve hosts file. Exiting." 
Mar 13 10:05:10 crc kubenswrapper[4632]: exit 1 Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: while true; do Mar 13 10:05:10 crc kubenswrapper[4632]: declare -A svc_ips Mar 13 10:05:10 crc kubenswrapper[4632]: for svc in "${services[@]}"; do Mar 13 10:05:10 crc kubenswrapper[4632]: # Fetch service IP from cluster dns if present. We make several tries Mar 13 10:05:10 crc kubenswrapper[4632]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 13 10:05:10 crc kubenswrapper[4632]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 13 10:05:10 crc kubenswrapper[4632]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 13 10:05:10 crc kubenswrapper[4632]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 13 10:05:10 crc kubenswrapper[4632]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 13 10:05:10 crc kubenswrapper[4632]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 13 10:05:10 crc kubenswrapper[4632]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 13 10:05:10 crc kubenswrapper[4632]: for i in ${!cmds[*]} Mar 13 10:05:10 crc kubenswrapper[4632]: do Mar 13 10:05:10 crc kubenswrapper[4632]: ips=($(eval "${cmds[i]}")) Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: svc_ips["${svc}"]="${ips[@]}" Mar 13 10:05:10 crc kubenswrapper[4632]: break Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: # Update /etc/hosts only if we get valid service IPs Mar 13 10:05:10 crc kubenswrapper[4632]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 13 10:05:10 crc kubenswrapper[4632]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 13 10:05:10 crc kubenswrapper[4632]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 13 10:05:10 crc kubenswrapper[4632]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 13 10:05:10 crc kubenswrapper[4632]: sleep 60 & wait Mar 13 10:05:10 crc kubenswrapper[4632]: continue Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: # Append resolver entries for services Mar 13 10:05:10 crc kubenswrapper[4632]: rc=0 Mar 13 10:05:10 crc kubenswrapper[4632]: for svc in "${!svc_ips[@]}"; do Mar 13 10:05:10 crc kubenswrapper[4632]: for ip in ${svc_ips[${svc}]}; do Mar 13 10:05:10 crc kubenswrapper[4632]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ $rc -ne 0 ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: sleep 60 & wait Mar 13 10:05:10 crc kubenswrapper[4632]: continue Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 13 10:05:10 crc kubenswrapper[4632]: # Replace /etc/hosts with our modified version if needed Mar 13 10:05:10 crc kubenswrapper[4632]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 13 10:05:10 crc kubenswrapper[4632]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: sleep 60 & wait Mar 13 10:05:10 crc kubenswrapper[4632]: unset svc_ips Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgn9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-n55jt_openshift-dns(b29b9ad7-8cc9-434f-8731-a86265c383fd): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:05:10 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.509700 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-n55jt" podUID="b29b9ad7-8cc9-434f-8731-a86265c383fd" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.516563 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
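The dns-node-resolver entry above embeds the full hosts-file updater: every 60 s it resolves each name in $SERVICES four ways (A and AAAA, first over UDP, then over TCP for Kuryr on older OpenStack without UDP load balancers) and rebuilds /etc/hosts around a marker comment so only its own lines are ever touched. A condensed, standalone sketch of that rebuild, with a single hard-coded service and a placeholder IP, assuming GNU sed and coreutils as in the logged script:

OPENSHIFT_MARKER="openshift-generated-node-resolver"
HOSTS_FILE="/etc/hosts"
TEMP_FILE="/etc/hosts.tmp"

# Preserve ownership/mode/context of the old file without copying content.
cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"

# --silent plus "d; w": drop previously generated lines, write the rest to TEMP_FILE.
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"

# Append one tagged line per resolved IP (198.51.100.7 is a placeholder).
svc="image-registry.openshift-image-registry.svc"
echo "198.51.100.7 ${svc} ${svc}.cluster.local # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}"

# Replace the live file only when the content actually changed.
cmp -s "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"

Because the marker filter runs before the append, each pass converges instead of accumulating duplicate entries, and stale entries survive only while resolution is failing, exactly as the script's comments warn.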
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnh6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.516629 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:10 crc kubenswrapper[4632]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 13 10:05:10 crc kubenswrapper[4632]: set -euo pipefail Mar 13 10:05:10 crc kubenswrapper[4632]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 13 10:05:10 crc kubenswrapper[4632]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 13 10:05:10 crc kubenswrapper[4632]: # As the secret mount is optional we must wait for the files to be present. Mar 13 10:05:10 crc kubenswrapper[4632]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 13 10:05:10 crc kubenswrapper[4632]: TS=$(date +%s) Mar 13 10:05:10 crc kubenswrapper[4632]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 13 10:05:10 crc kubenswrapper[4632]: HAS_LOGGED_INFO=0 Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: log_missing_certs(){ Mar 13 10:05:10 crc kubenswrapper[4632]: CUR_TS=$(date +%s) Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 13 10:05:10 crc kubenswrapper[4632]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 13 10:05:10 crc kubenswrapper[4632]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. 
Mar 13 10:05:10 crc kubenswrapper[4632]: HAS_LOGGED_INFO=1 Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: } Mar 13 10:05:10 crc kubenswrapper[4632]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Mar 13 10:05:10 crc kubenswrapper[4632]: log_missing_certs Mar 13 10:05:10 crc kubenswrapper[4632]: sleep 5 Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 13 10:05:10 crc kubenswrapper[4632]: exec /usr/bin/kube-rbac-proxy \ Mar 13 10:05:10 crc kubenswrapper[4632]: --logtostderr \ Mar 13 10:05:10 crc kubenswrapper[4632]: --secure-listen-address=:9108 \ Mar 13 10:05:10 crc kubenswrapper[4632]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 13 10:05:10 crc kubenswrapper[4632]: --upstream=http://127.0.0.1:29108/ \ Mar 13 10:05:10 crc kubenswrapper[4632]: --tls-private-key-file=${TLS_PK} \ Mar 13 10:05:10 crc kubenswrapper[4632]: --tls-cert-file=${TLS_CERT} Mar 13 10:05:10 crc kubenswrapper[4632]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffbwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-kbtt2_openshift-ovn-kubernetes(b0c542d5-8c38-4243-8af7-cfc0d8e22773): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:05:10 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.517430 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
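The ovnkube-control-plane kube-rbac-proxy wrapper that just failed is a wait-for-optional-secret loop: the metrics cert mount is optional, so the container polls for the key pair before exec'ing the proxy, logging INFO once and escalating to WARN after 20 minutes. One detail worth noting in the logged script: the comparison [[ "${CUR_TS}" -gt "WARN_TS" ]] omits the $ on the right-hand side, but inside [[ ]] the -gt operator arithmetic-evaluates both operands, so the bare name WARN_TS is still dereferenced and the check behaves as intended. A minimal sketch of the same pattern (same paths and flags as in the log; the INFO rate-limiting is omitted):

TLS_PK=/etc/pki/tls/metrics-cert/tls.key
TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt
warn_after=$(( $(date +%s) + 20 * 60 ))

while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]]; do
  # Bare "warn_after" works here for the same reason "WARN_TS" does above.
  if [[ $(date +%s) -gt warn_after ]]; then
    echo "$(date -Iseconds) WARN: metrics cert still not mounted"
  fi
  sleep 5
done
exec /usr/bin/kube-rbac-proxy --logtostderr \
  --secure-listen-address=:9108 \
  --upstream=http://127.0.0.1:29108/ \
  --tls-private-key-file="${TLS_PK}" --tls-cert-file="${TLS_CERT}"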
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.518181 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.518867 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gqf22" event={"ID":"4ec8e301-3037-4de0-94d2-32c49709660e","Type":"ContainerStarted","Data":"59b4c4e96c104a54f846e291518c93a1b3e3a63ef11982c82cf3ed1b26b885f0"} Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.520826 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:10 crc kubenswrapper[4632]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ -f "/env/_master" ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: set -o allexport Mar 13 10:05:10 crc kubenswrapper[4632]: source "/env/_master" Mar 13 10:05:10 crc kubenswrapper[4632]: set +o allexport Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: ovn_v4_join_subnet_opt= Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ "" != "" ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: ovn_v6_join_subnet_opt= Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ "" != "" ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: 
ovn_v4_transit_switch_subnet_opt= Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ "" != "" ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: ovn_v6_transit_switch_subnet_opt= Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ "" != "" ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: dns_name_resolver_enabled_flag= Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ "false" == "true" ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: persistent_ips_enabled_flag= Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ "true" == "true" ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: # This is needed so that converting clusters from GA to TP Mar 13 10:05:10 crc kubenswrapper[4632]: # will rollout control plane pods as well Mar 13 10:05:10 crc kubenswrapper[4632]: network_segmentation_enabled_flag= Mar 13 10:05:10 crc kubenswrapper[4632]: multi_network_enabled_flag= Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ "true" == "true" ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: multi_network_enabled_flag="--enable-multi-network" Mar 13 10:05:10 crc kubenswrapper[4632]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 13 10:05:10 crc kubenswrapper[4632]: exec /usr/bin/ovnkube \ Mar 13 10:05:10 crc kubenswrapper[4632]: --enable-interconnect \ Mar 13 10:05:10 crc kubenswrapper[4632]: --init-cluster-manager "${K8S_NODE}" \ Mar 13 10:05:10 crc kubenswrapper[4632]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 13 10:05:10 crc kubenswrapper[4632]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 13 10:05:10 crc kubenswrapper[4632]: --metrics-bind-address "127.0.0.1:29108" \ Mar 13 10:05:10 crc kubenswrapper[4632]: --metrics-enable-pprof \ Mar 13 10:05:10 crc kubenswrapper[4632]: --metrics-enable-config-duration \ Mar 13 10:05:10 crc kubenswrapper[4632]: ${ovn_v4_join_subnet_opt} \ Mar 13 10:05:10 crc kubenswrapper[4632]: ${ovn_v6_join_subnet_opt} \ Mar 13 10:05:10 crc kubenswrapper[4632]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 13 10:05:10 crc kubenswrapper[4632]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 13 10:05:10 crc kubenswrapper[4632]: ${dns_name_resolver_enabled_flag} \ Mar 13 10:05:10 crc kubenswrapper[4632]: ${persistent_ips_enabled_flag} \ Mar 13 10:05:10 crc kubenswrapper[4632]: ${multi_network_enabled_flag} \ Mar 13 10:05:10 crc kubenswrapper[4632]: ${network_segmentation_enabled_flag} Mar 13 10:05:10 crc kubenswrapper[4632]: 
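The ovnkube-cluster-manager command above is manifest-templated: each literal if [[ "" != "" ]] block is a render-time substitution that came out empty, so the corresponding --gateway-*-join-subnet and transit-switch flags are simply left off, while the "true" == "true" blocks enable persistent IPs, multi-network, and network segmentation. A sketch of the same optional-flag assembly done with a bash array instead of string concatenation; the environment variable names here are hypothetical stand-ins for the values the operator templates in:

extra_args=()
if [[ -n "${OVN_V4_JOIN_SUBNET:-}" ]]; then
  extra_args+=(--gateway-v4-join-subnet "${OVN_V4_JOIN_SUBNET}")
fi
if [[ "${ENABLE_MULTI_NETWORK:-false}" == "true" ]]; then
  extra_args+=(--enable-multi-network --enable-network-segmentation)
fi
# Array expansion keeps flag/value pairs intact even if a value contains spaces.
exec /usr/bin/ovnkube --enable-interconnect \
  --init-cluster-manager "${K8S_NODE}" \
  --config-file=/run/ovnkube-config/ovnkube.conf \
  "${extra_args[@]}"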
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffbwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-kbtt2_openshift-ovn-kubernetes(b0c542d5-8c38-4243-8af7-cfc0d8e22773): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:05:10 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.521188 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:10 crc kubenswrapper[4632]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 13 10:05:10 crc kubenswrapper[4632]: apiVersion: v1 Mar 13 10:05:10 crc kubenswrapper[4632]: clusters: Mar 13 10:05:10 crc kubenswrapper[4632]: - cluster: Mar 13 10:05:10 crc kubenswrapper[4632]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 13 10:05:10 crc kubenswrapper[4632]: server: https://api-int.crc.testing:6443 Mar 13 10:05:10 crc kubenswrapper[4632]: name: default-cluster Mar 13 10:05:10 crc kubenswrapper[4632]: contexts: Mar 13 10:05:10 crc kubenswrapper[4632]: - context: Mar 13 10:05:10 crc kubenswrapper[4632]: cluster: default-cluster Mar 13 10:05:10 crc kubenswrapper[4632]: namespace: default Mar 13 10:05:10 crc kubenswrapper[4632]: user: default-auth Mar 13 10:05:10 crc kubenswrapper[4632]: name: default-context Mar 13 10:05:10 crc kubenswrapper[4632]: current-context: default-context Mar 13 10:05:10 crc kubenswrapper[4632]: kind: Config Mar 13 10:05:10 crc kubenswrapper[4632]: preferences: {} Mar 13 10:05:10 crc kubenswrapper[4632]: users: Mar 13 10:05:10 crc kubenswrapper[4632]: - name: default-auth Mar 13 10:05:10 crc kubenswrapper[4632]: user: Mar 13 10:05:10 crc kubenswrapper[4632]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 13 10:05:10 crc kubenswrapper[4632]: client-key: 
/etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 13 10:05:10 crc kubenswrapper[4632]: EOF Mar 13 10:05:10 crc kubenswrapper[4632]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dj6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-qb725_openshift-ovn-kubernetes(3b40c6b3-0061-4224-82d5-3ccf67998722): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:05:10 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.521502 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8b7cb438273be81daa37dee852b13428c2abbaceb49c8c8e2ae84fffcf8cf261"} Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.521813 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:10 crc kubenswrapper[4632]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 13 10:05:10 crc kubenswrapper[4632]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 13 10:05:10 crc kubenswrapper[4632]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d5c4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-gqf22_openshift-multus(4ec8e301-3037-4de0-94d2-32c49709660e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:05:10 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.521986 4632 
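The kubecfg-setup init container a few entries back does nothing but write a static kubeconfig via heredoc; note that client-certificate and client-key point at the same ovnkube-client-current.pem, a combined PEM that the approver container keeps rotated. The generated file, reordered slightly for readability but with the same fields and paths as in the log:

cat << EOF > /etc/ovn/kubeconfig
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    server: https://api-int.crc.testing:6443
users:
- name: default-auth
  user:
    # One rotated PEM file carries both the certificate and the key.
    client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem
    client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem
contexts:
- name: default-context
  context:
    cluster: default-cluster
    namespace: default
    user: default-auth
current-context: default-context
preferences: {}
EOF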
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" podUID="b0c542d5-8c38-4243-8af7-cfc0d8e22773" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.522604 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.522733 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.523813 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 
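Every CreateContainerConfigError in this burst carries the same message: "services have not yet been read at least once, cannot construct envvars". The kubelet builds the legacy *_SERVICE_HOST/*_SERVICE_PORT environment variables from its Service informer cache, and it refuses to create any container until that cache has completed one initial list against the apiserver; immediately after a kubelet restart, as here (roughly 90 s after startup), the error is expected and self-clearing. A quick triage sketch, assuming shell access on the node:

# Count recent occurrences; the stream should dry up within a minute or two.
journalctl -u kubelet --since "10 min ago" | grep -c "services have not yet been read"
# If it keeps repeating, check apiserver reachability from the node instead;
# even a 403 here proves the endpoint is reachable.
curl -sk https://api-int.crc.testing:6443/healthz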
10:05:10.523846 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-gqf22" podUID="4ec8e301-3037-4de0-94d2-32c49709660e" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.525747 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zwlc8" event={"ID":"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7","Type":"ContainerStarted","Data":"4c92132358db0c70647bacc66decb6a4c6c62b231a7ed9f697e887c3f84c7787"} Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.527554 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:10 crc kubenswrapper[4632]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 13 10:05:10 crc kubenswrapper[4632]: while [ true ]; Mar 13 10:05:10 crc kubenswrapper[4632]: do Mar 13 10:05:10 crc kubenswrapper[4632]: for f in $(ls /tmp/serviceca); do Mar 13 10:05:10 crc kubenswrapper[4632]: echo $f Mar 13 10:05:10 crc kubenswrapper[4632]: ca_file_path="/tmp/serviceca/${f}" Mar 13 10:05:10 crc kubenswrapper[4632]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 13 10:05:10 crc kubenswrapper[4632]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 13 10:05:10 crc kubenswrapper[4632]: if [ -e "${reg_dir_path}" ]; then Mar 13 10:05:10 crc kubenswrapper[4632]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 13 10:05:10 crc kubenswrapper[4632]: else Mar 13 10:05:10 crc kubenswrapper[4632]: mkdir $reg_dir_path Mar 13 10:05:10 crc kubenswrapper[4632]: cp $ca_file_path $reg_dir_path/ca.crt Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: for d in $(ls /etc/docker/certs.d); do Mar 13 10:05:10 crc kubenswrapper[4632]: echo $d Mar 13 10:05:10 crc kubenswrapper[4632]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 13 10:05:10 crc kubenswrapper[4632]: reg_conf_path="/tmp/serviceca/${dp}" Mar 13 10:05:10 crc kubenswrapper[4632]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 13 10:05:10 crc kubenswrapper[4632]: rm -rf /etc/docker/certs.d/$d Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: sleep 60 & wait ${!} Mar 13 10:05:10 crc kubenswrapper[4632]: done Mar 13 10:05:10 crc kubenswrapper[4632]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq5zl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-zwlc8_openshift-image-registry(9a50974e-f938-40f7-ace5-2a3b4cb1f3e7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:05:10 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.528711 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-zwlc8" podUID="9a50974e-f938-40f7-ace5-2a3b4cb1f3e7" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.529767 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" event={"ID":"b054ca08-1d09-4eca-a608-eb5b9323959a","Type":"ContainerStarted","Data":"b6d89645c2bf89bdd72981c912fdc942c220dbb780f6f99cfc8bc5a2bbbf55cf"} Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.530208 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.531030 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6f22aa92075ec9a6d25de41e0ea229738f486d80dffdd54030cd9441e1bc535a"} Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.533538 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:10 crc kubenswrapper[4632]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ -f "/env/_master" ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: set -o allexport Mar 13 10:05:10 crc kubenswrapper[4632]: source "/env/_master" Mar 13 10:05:10 crc kubenswrapper[4632]: set +o allexport Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 13 10:05:10 crc kubenswrapper[4632]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 13 10:05:10 crc kubenswrapper[4632]: ho_enable="--enable-hybrid-overlay" Mar 13 10:05:10 crc kubenswrapper[4632]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 13 10:05:10 crc kubenswrapper[4632]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 13 10:05:10 crc kubenswrapper[4632]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 13 10:05:10 crc kubenswrapper[4632]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 13 10:05:10 crc kubenswrapper[4632]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 13 10:05:10 crc kubenswrapper[4632]: --webhook-host=127.0.0.1 \ Mar 13 10:05:10 crc kubenswrapper[4632]: --webhook-port=9743 \ Mar 13 10:05:10 crc kubenswrapper[4632]: ${ho_enable} \ Mar 13 10:05:10 crc kubenswrapper[4632]: --enable-interconnect \ Mar 13 10:05:10 crc kubenswrapper[4632]: --disable-approver \ Mar 13 10:05:10 crc kubenswrapper[4632]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 13 10:05:10 crc kubenswrapper[4632]: --wait-for-kubernetes-api=200s \ Mar 13 10:05:10 crc kubenswrapper[4632]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 13 10:05:10 crc kubenswrapper[4632]: --loglevel="${LOGLEVEL}" Mar 13 10:05:10 crc kubenswrapper[4632]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Mar 13 10:05:10 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.533841 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a73720ed844a5d744dc836e0fdd3b9ba936013cb457caf6f23070e9ade4d0cbf"} Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.534741 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9s5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-qlc8m_openshift-multus(b054ca08-1d09-4eca-a608-eb5b9323959a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.535839 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" podUID="b054ca08-1d09-4eca-a608-eb5b9323959a" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.535968 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:10 crc kubenswrapper[4632]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ -f "/env/_master" ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: set -o allexport Mar 13 10:05:10 crc kubenswrapper[4632]: source "/env/_master" Mar 13 10:05:10 crc kubenswrapper[4632]: set +o allexport Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: Mar 13 10:05:10 crc kubenswrapper[4632]: echo 
"I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 13 10:05:10 crc kubenswrapper[4632]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 13 10:05:10 crc kubenswrapper[4632]: --disable-webhook \ Mar 13 10:05:10 crc kubenswrapper[4632]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 13 10:05:10 crc kubenswrapper[4632]: --loglevel="${LOGLEVEL}" Mar 13 10:05:10 crc kubenswrapper[4632]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:05:10 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.536115 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:10 crc kubenswrapper[4632]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 13 10:05:10 crc kubenswrapper[4632]: set -o allexport Mar 13 10:05:10 crc kubenswrapper[4632]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 13 10:05:10 crc kubenswrapper[4632]: source /etc/kubernetes/apiserver-url.env Mar 13 10:05:10 crc kubenswrapper[4632]: else Mar 13 10:05:10 crc kubenswrapper[4632]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 13 10:05:10 crc kubenswrapper[4632]: exit 1 Mar 13 10:05:10 crc kubenswrapper[4632]: fi Mar 13 10:05:10 crc kubenswrapper[4632]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 13 10:05:10 crc kubenswrapper[4632]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:05:10 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.537678 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.537721 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.542523 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.544428 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.544457 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.544515 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.544539 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.544704 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:10Z","lastTransitionTime":"2026-03-13T10:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.551613 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.566545 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.578186 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.586099 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.594639 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.604703 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.616641 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.625626 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc 
kubenswrapper[4632]: I0313 10:05:10.634756 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.644396 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.647155 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.647179 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.647212 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.647232 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.647243 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:10Z","lastTransitionTime":"2026-03-13T10:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.655302 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.667390 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.678131 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.687150 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.695713 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.705758 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.716810 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.725505 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.734470 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.743557 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.749871 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.749952 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.749972 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.749987 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.749996 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:10Z","lastTransitionTime":"2026-03-13T10:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.751178 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.757579 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.787216 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.809096 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.809260 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:05:11.809238229 +0000 UTC m=+85.831768362 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.809378 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.809540 4632 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.809609 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.809769 4632 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.809782 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:11.809764925 +0000 UTC m=+85.832295058 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.809894 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:11.809871859 +0000 UTC m=+85.832402012 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.820209 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.853561 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.853623 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.853636 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.853662 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.853677 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:10Z","lastTransitionTime":"2026-03-13T10:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.859696 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.911020 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.911099 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.911143 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs\") pod \"network-metrics-daemon-z2vlz\" (UID: \"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\") " pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.911314 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.911384 4632 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.911474 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs podName:ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad nodeName:}" failed. No retries permitted until 2026-03-13 10:05:11.911449812 +0000 UTC m=+85.933979985 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs") pod "network-metrics-daemon-z2vlz" (UID: "ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.911389 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.911528 4632 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.911505 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.911615 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.911625 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:11.911596106 +0000 UTC m=+85.934126399 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.911637 4632 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:05:10 crc kubenswrapper[4632]: E0313 10:05:10.911731 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:11.911703559 +0000 UTC m=+85.934233882 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.958079 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.958153 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.958167 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.958195 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:10 crc kubenswrapper[4632]: I0313 10:05:10.958211 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:10Z","lastTransitionTime":"2026-03-13T10:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.065424 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.065798 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.065880 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.065981 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.066075 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:11Z","lastTransitionTime":"2026-03-13T10:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.066157 4632 scope.go:117] "RemoveContainer" containerID="9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae" Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.067493 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.067748 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.169164 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.169225 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.169240 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.169260 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.169273 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:11Z","lastTransitionTime":"2026-03-13T10:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.271956 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.272016 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.272031 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.272057 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.272072 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:11Z","lastTransitionTime":"2026-03-13T10:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.375027 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.375081 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.375096 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.375115 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.375129 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:11Z","lastTransitionTime":"2026-03-13T10:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.478231 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.478305 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.478322 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.478343 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.478358 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:11Z","lastTransitionTime":"2026-03-13T10:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.537824 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" event={"ID":"b0c542d5-8c38-4243-8af7-cfc0d8e22773","Type":"ContainerStarted","Data":"475af06ee8970aef910c45ce81dbd4a1179474c0a41501893d0b8f7aa65229e2"} Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.539476 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:11 crc kubenswrapper[4632]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 13 10:05:11 crc kubenswrapper[4632]: set -euo pipefail Mar 13 10:05:11 crc kubenswrapper[4632]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 13 10:05:11 crc kubenswrapper[4632]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 13 10:05:11 crc kubenswrapper[4632]: # As the secret mount is optional we must wait for the files to be present. 
Mar 13 10:05:11 crc kubenswrapper[4632]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 13 10:05:11 crc kubenswrapper[4632]: TS=$(date +%s) Mar 13 10:05:11 crc kubenswrapper[4632]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 13 10:05:11 crc kubenswrapper[4632]: HAS_LOGGED_INFO=0 Mar 13 10:05:11 crc kubenswrapper[4632]: Mar 13 10:05:11 crc kubenswrapper[4632]: log_missing_certs(){ Mar 13 10:05:11 crc kubenswrapper[4632]: CUR_TS=$(date +%s) Mar 13 10:05:11 crc kubenswrapper[4632]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 13 10:05:11 crc kubenswrapper[4632]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 13 10:05:11 crc kubenswrapper[4632]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 13 10:05:11 crc kubenswrapper[4632]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 13 10:05:11 crc kubenswrapper[4632]: HAS_LOGGED_INFO=1 Mar 13 10:05:11 crc kubenswrapper[4632]: fi Mar 13 10:05:11 crc kubenswrapper[4632]: } Mar 13 10:05:11 crc kubenswrapper[4632]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Mar 13 10:05:11 crc kubenswrapper[4632]: log_missing_certs Mar 13 10:05:11 crc kubenswrapper[4632]: sleep 5 Mar 13 10:05:11 crc kubenswrapper[4632]: done Mar 13 10:05:11 crc kubenswrapper[4632]: Mar 13 10:05:11 crc kubenswrapper[4632]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 13 10:05:11 crc kubenswrapper[4632]: exec /usr/bin/kube-rbac-proxy \ Mar 13 10:05:11 crc kubenswrapper[4632]: --logtostderr \ Mar 13 10:05:11 crc kubenswrapper[4632]: --secure-listen-address=:9108 \ Mar 13 10:05:11 crc kubenswrapper[4632]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 13 10:05:11 crc kubenswrapper[4632]: --upstream=http://127.0.0.1:29108/ \ Mar 13 10:05:11 crc kubenswrapper[4632]: --tls-private-key-file=${TLS_PK} \ Mar 13 10:05:11 crc kubenswrapper[4632]: --tls-cert-file=${TLS_CERT} Mar 13 10:05:11 crc kubenswrapper[4632]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffbwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-kbtt2_openshift-ovn-kubernetes(b0c542d5-8c38-4243-8af7-cfc0d8e22773): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:05:11 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.539838 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
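When a container fails to start, kuberuntime_manager logs the entire Container spec, which is why the kube-rbac-proxy sidecar's full wait-for-cert script appears inline above; the script itself is not the problem here (the start failed with CreateContainerConfigError, discussed further below). One quirk worth noting: [[ "${CUR_TS}" -gt "WARN_TS" ]] is missing a $ before WARN_TS yet still behaves as intended, because bash evaluates both operands of -gt as arithmetic expressions and resolves the bare name as a variable. A minimal standalone sketch of the same wait pattern, with the cert paths copied from the mount above and everything else illustrative:

  #!/bin/bash
  # Wait for an optionally-mounted TLS pair to appear, warning after 20 minutes.
  tls_pk=/etc/pki/tls/metrics-cert/tls.key
  tls_cert=/etc/pki/tls/metrics-cert/tls.crt
  deadline=$(( $(date +%s) + 20 * 60 ))
  until [[ -f "${tls_pk}" && -f "${tls_cert}" ]]; do
    if (( $(date +%s) > deadline )); then
      echo "$(date -Iseconds) WARN: metrics cert still not mounted" >&2
    fi
    sleep 5
  done
  echo "$(date -Iseconds) INFO: metrics cert mounted"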
pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerStarted","Data":"5f99589c0e329dc2bea211f1582fe2ff509c48ed7460521bac851a5b63796f30"} Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.540154 4632 scope.go:117] "RemoveContainer" containerID="9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae" Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.540323 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.543181 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:05:11 crc kubenswrapper[4632]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 13 10:05:11 crc kubenswrapper[4632]: apiVersion: v1 Mar 13 10:05:11 crc kubenswrapper[4632]: clusters: Mar 13 10:05:11 crc kubenswrapper[4632]: - cluster: Mar 13 10:05:11 crc kubenswrapper[4632]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 13 10:05:11 crc kubenswrapper[4632]: server: https://api-int.crc.testing:6443 Mar 13 10:05:11 crc kubenswrapper[4632]: name: default-cluster Mar 13 10:05:11 crc kubenswrapper[4632]: contexts: Mar 13 10:05:11 crc kubenswrapper[4632]: - context: Mar 13 10:05:11 crc kubenswrapper[4632]: cluster: default-cluster Mar 13 10:05:11 crc kubenswrapper[4632]: namespace: default Mar 13 10:05:11 crc kubenswrapper[4632]: user: default-auth Mar 13 10:05:11 crc kubenswrapper[4632]: name: default-context Mar 13 10:05:11 crc kubenswrapper[4632]: current-context: default-context Mar 13 10:05:11 crc kubenswrapper[4632]: kind: Config Mar 13 10:05:11 crc kubenswrapper[4632]: preferences: {} Mar 13 10:05:11 crc kubenswrapper[4632]: users: Mar 13 10:05:11 crc kubenswrapper[4632]: - name: default-auth Mar 13 10:05:11 crc kubenswrapper[4632]: user: Mar 13 10:05:11 crc kubenswrapper[4632]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 13 10:05:11 crc kubenswrapper[4632]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 13 10:05:11 crc kubenswrapper[4632]: EOF Mar 13 10:05:11 crc kubenswrapper[4632]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dj6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.543275 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 13 10:05:11 crc kubenswrapper[4632]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe
Mar 13 10:05:11 crc kubenswrapper[4632]: if [[ -f "/env/_master" ]]; then
Mar 13 10:05:11 crc kubenswrapper[4632]:   set -o allexport
Mar 13 10:05:11 crc kubenswrapper[4632]:   source "/env/_master"
Mar 13 10:05:11 crc kubenswrapper[4632]:   set +o allexport
Mar 13 10:05:11 crc kubenswrapper[4632]: fi
Mar 13 10:05:11 crc kubenswrapper[4632]:
Mar 13 10:05:11 crc kubenswrapper[4632]: ovn_v4_join_subnet_opt=
Mar 13 10:05:11 crc kubenswrapper[4632]: if [[ "" != "" ]]; then
Mar 13 10:05:11 crc kubenswrapper[4632]:   ovn_v4_join_subnet_opt="--gateway-v4-join-subnet "
Mar 13 10:05:11 crc kubenswrapper[4632]: fi
Mar 13 10:05:11 crc kubenswrapper[4632]: ovn_v6_join_subnet_opt=
Mar 13 10:05:11 crc kubenswrapper[4632]: if [[ "" != "" ]]; then
Mar 13 10:05:11 crc kubenswrapper[4632]:   ovn_v6_join_subnet_opt="--gateway-v6-join-subnet "
Mar 13 10:05:11 crc kubenswrapper[4632]: fi
Mar 13 10:05:11 crc kubenswrapper[4632]:
Mar 13 10:05:11 crc kubenswrapper[4632]: ovn_v4_transit_switch_subnet_opt=
Mar 13 10:05:11 crc kubenswrapper[4632]: if [[ "" != "" ]]; then
Mar 13 10:05:11 crc kubenswrapper[4632]:   ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet "
Mar 13 10:05:11 crc kubenswrapper[4632]: fi
Mar 13 10:05:11 crc kubenswrapper[4632]: ovn_v6_transit_switch_subnet_opt=
Mar 13 10:05:11 crc kubenswrapper[4632]: if [[ "" != "" ]]; then
Mar 13 10:05:11 crc kubenswrapper[4632]:   ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet "
Mar 13 10:05:11 crc kubenswrapper[4632]: fi
Mar 13 10:05:11 crc kubenswrapper[4632]:
Mar 13 10:05:11 crc kubenswrapper[4632]: dns_name_resolver_enabled_flag=
Mar 13 10:05:11 crc kubenswrapper[4632]: if [[ "false" == "true" ]]; then
Mar 13 10:05:11 crc kubenswrapper[4632]:   dns_name_resolver_enabled_flag="--enable-dns-name-resolver"
Mar 13 10:05:11 crc kubenswrapper[4632]: fi
Mar 13 10:05:11 crc kubenswrapper[4632]:
Mar 13 10:05:11 crc kubenswrapper[4632]: persistent_ips_enabled_flag=
Mar 13 10:05:11 crc kubenswrapper[4632]: if [[ "true" == "true" ]]; then
Mar 13 10:05:11 crc kubenswrapper[4632]:   persistent_ips_enabled_flag="--enable-persistent-ips"
Mar 13 10:05:11 crc kubenswrapper[4632]: fi
Mar 13 10:05:11 crc kubenswrapper[4632]:
Mar 13 10:05:11 crc kubenswrapper[4632]: # This is needed so that converting clusters from GA to TP
Mar 13 10:05:11 crc kubenswrapper[4632]: # will rollout control plane pods as well
Mar 13 10:05:11 crc kubenswrapper[4632]: network_segmentation_enabled_flag=
Mar 13 10:05:11 crc kubenswrapper[4632]: multi_network_enabled_flag=
Mar 13 10:05:11 crc kubenswrapper[4632]: if [[ "true" == "true" ]]; then
Mar 13 10:05:11 crc kubenswrapper[4632]:   multi_network_enabled_flag="--enable-multi-network"
Mar 13 10:05:11 crc kubenswrapper[4632]:   network_segmentation_enabled_flag="--enable-network-segmentation"
Mar 13 10:05:11 crc kubenswrapper[4632]: fi
Mar 13 10:05:11 crc kubenswrapper[4632]:
Mar 13 10:05:11 crc kubenswrapper[4632]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}"
Mar 13 10:05:11 crc kubenswrapper[4632]: exec /usr/bin/ovnkube \
Mar 13 10:05:11 crc kubenswrapper[4632]:   --enable-interconnect \
Mar 13 10:05:11 crc kubenswrapper[4632]:   --init-cluster-manager "${K8S_NODE}" \
Mar 13 10:05:11 crc kubenswrapper[4632]:   --config-file=/run/ovnkube-config/ovnkube.conf \
Mar 13 10:05:11 crc kubenswrapper[4632]:   --loglevel "${OVN_KUBE_LOG_LEVEL}" \
Mar 13 10:05:11 crc kubenswrapper[4632]:   --metrics-bind-address "127.0.0.1:29108" \
Mar 13 10:05:11 crc kubenswrapper[4632]:   --metrics-enable-pprof \
Mar 13 10:05:11 crc kubenswrapper[4632]:   --metrics-enable-config-duration \
Mar 13 10:05:11 crc kubenswrapper[4632]:   ${ovn_v4_join_subnet_opt} \
Mar 13 10:05:11 crc kubenswrapper[4632]:   ${ovn_v6_join_subnet_opt} \
Mar 13 10:05:11 crc kubenswrapper[4632]:   ${ovn_v4_transit_switch_subnet_opt} \
Mar 13 10:05:11 crc kubenswrapper[4632]:   ${ovn_v6_transit_switch_subnet_opt} \
Mar 13 10:05:11 crc kubenswrapper[4632]:   ${dns_name_resolver_enabled_flag} \
Mar 13 10:05:11 crc kubenswrapper[4632]:   ${persistent_ips_enabled_flag} \
Mar 13 10:05:11 crc kubenswrapper[4632]:   ${multi_network_enabled_flag} \
Mar 13 10:05:11 crc kubenswrapper[4632]:   ${network_segmentation_enabled_flag}
Mar 13 10:05:11 crc kubenswrapper[4632]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffbwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-kbtt2_openshift-ovn-kubernetes(b0c542d5-8c38-4243-8af7-cfc0d8e22773): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 13 10:05:11 crc kubenswrapper[4632]: > logger="UnhandledError"
Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.544337 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722"
pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.544507 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" podUID="b0c542d5-8c38-4243-8af7-cfc0d8e22773" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.552711 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.564793 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.576730 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.580365 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.580401 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.580410 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.580426 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.580435 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:11Z","lastTransitionTime":"2026-03-13T10:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.594118 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.610523 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.622289 4632 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection 
refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.636535 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.650235 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.663657 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.674518 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.685304 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.685346 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.685355 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.685374 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.685383 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:11Z","lastTransitionTime":"2026-03-13T10:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.686622 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.696892 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.707373 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.716191 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 
13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.733604 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.746037 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.756904 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.769302 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.782431 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.788326 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.788381 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.788402 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.788420 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.788431 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:11Z","lastTransitionTime":"2026-03-13T10:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.796455 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.807071 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.820059 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.821362 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.821671 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:05:13.821633341 +0000 UTC m=+87.844163594 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.821767 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.821868 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.822010 4632 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.822070 4632 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.822100 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:13.822079535 +0000 UTC m=+87.844609668 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.822233 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:13.822210009 +0000 UTC m=+87.844740272 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.833651 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.844880 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.876417 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.893396 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.893457 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.893470 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.893494 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.893511 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:11Z","lastTransitionTime":"2026-03-13T10:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.918807 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.923504 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.923626 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs\") pod \"network-metrics-daemon-z2vlz\" (UID: \"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\") " pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.923749 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.923695 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.923832 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.923851 4632 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.923717 4632 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.923907 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:13.923889345 +0000 UTC m=+87.946419478 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.923961 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs podName:ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad nodeName:}" failed. No retries permitted until 2026-03-13 10:05:13.923933066 +0000 UTC m=+87.946463209 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs") pod "network-metrics-daemon-z2vlz" (UID: "ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.923987 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.924007 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.924018 4632 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:05:11 crc kubenswrapper[4632]: E0313 10:05:11.924082 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:13.92407047 +0000 UTC m=+87.946600603 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.960341 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.998240 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.998428 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.998464 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.998477 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.998499 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:11 crc kubenswrapper[4632]: I0313 10:05:11.998514 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:11Z","lastTransitionTime":"2026-03-13T10:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.038230 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.043646 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.043787 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.043833 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:12 crc kubenswrapper[4632]: E0313 10:05:12.043897 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.043921 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:12 crc kubenswrapper[4632]: E0313 10:05:12.044089 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:12 crc kubenswrapper[4632]: E0313 10:05:12.044221 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:12 crc kubenswrapper[4632]: E0313 10:05:12.044304 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.050740 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.051491 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.052458 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.053238 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.054092 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.054713 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.055397 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.055980 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.056683 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.057238 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.060217 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.061012 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.061970 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.062538 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.063113 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.064112 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.064728 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.065688 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.066301 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.066873 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.067806 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.068425 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.068888 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.070218 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.070700 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.071780 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.072478 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.073407 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.074142 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.075026 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.075522 4632 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.075637 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.077728 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.078395 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.078797 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.080351 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.081366 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.082076 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.082092 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.083291 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.083960 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.084794 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.085472 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.086552 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.087602 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.088167 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.088850 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.090133 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.090969 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.091808 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 13 10:05:12 crc 
kubenswrapper[4632]: I0313 10:05:12.092313 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.093258 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.093756 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.094402 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.095277 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.100598 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.100699 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.100713 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.100736 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:12 crc kubenswrapper[4632]: I0313 10:05:12.100752 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:12Z","lastTransitionTime":"2026-03-13T10:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the five-record node-status block above (NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready") repeats at roughly 100 ms intervals through 10:05:16.041, where the capture ends mid-record; only the timestamps differ, so the repeats are omitted below and the interleaved unique records are kept ...]
Has your network provider started?"} Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.137806 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.138016 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.138038 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.138072 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.138090 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:13Z","lastTransitionTime":"2026-03-13T10:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.239877 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.239955 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.239973 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.239989 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.240001 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:13Z","lastTransitionTime":"2026-03-13T10:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.342715 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.342755 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.342773 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.342791 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.342801 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:13Z","lastTransitionTime":"2026-03-13T10:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.445623 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.445698 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.445714 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.445731 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.445750 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:13Z","lastTransitionTime":"2026-03-13T10:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.547739 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.547802 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.547818 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.547840 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.547855 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:13Z","lastTransitionTime":"2026-03-13T10:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.652243 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.652339 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.652353 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.652375 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.652389 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:13Z","lastTransitionTime":"2026-03-13T10:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.754894 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.754965 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.754979 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.754997 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.755010 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:13Z","lastTransitionTime":"2026-03-13T10:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.841713 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.841817 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.841876 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:13 crc kubenswrapper[4632]: E0313 10:05:13.841978 4632 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 10:05:13 crc kubenswrapper[4632]: E0313 10:05:13.842011 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:05:17.841969098 +0000 UTC m=+91.864499241 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:05:13 crc kubenswrapper[4632]: E0313 10:05:13.842069 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:17.84205792 +0000 UTC m=+91.864588103 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 10:05:13 crc kubenswrapper[4632]: E0313 10:05:13.842079 4632 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 10:05:13 crc kubenswrapper[4632]: E0313 10:05:13.842168 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:17.842147643 +0000 UTC m=+91.864677776 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.856766 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.856802 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.856813 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.856826 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.856835 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:13Z","lastTransitionTime":"2026-03-13T10:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
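The TearDownAt failure above says kubevirt.io.hostpath-provisioner is "not found in the list of registered CSI drivers"; the kubelet advertises per-node driver registrations through the node's CSINode object. A hedged sketch that lists what is actually registered for this node, again assuming the kubernetes Python client and cluster access (the driver and node names come from the log; everything else is illustrative):

    # Sketch: list the CSI drivers registered on node "crc" -- the list the
    # Unmounter.TearDownAt error failed to find the hostpath provisioner in.
    from kubernetes import client, config

    config.load_kube_config()
    csinode = client.StorageV1Api().read_csi_node("crc")
    registered = [d.name for d in (csinode.spec.drivers or [])]
    # "kubevirt.io.hostpath-provisioner" should appear here once the plugin
    # re-registers with the kubelet after the restart.
    print(registered)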
Has your network provider started?"} Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.943454 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:13 crc kubenswrapper[4632]: E0313 10:05:13.943753 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:05:13 crc kubenswrapper[4632]: E0313 10:05:13.943911 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 10:05:13 crc kubenswrapper[4632]: E0313 10:05:13.943929 4632 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.943856 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs\") pod \"network-metrics-daemon-z2vlz\" (UID: \"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\") " pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:13 crc kubenswrapper[4632]: E0313 10:05:13.944030 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:17.944004913 +0000 UTC m=+91.966535056 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.944093 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:13 crc kubenswrapper[4632]: E0313 10:05:13.944235 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:05:13 crc kubenswrapper[4632]: E0313 10:05:13.944260 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 10:05:13 crc kubenswrapper[4632]: E0313 10:05:13.944272 4632 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:05:13 crc kubenswrapper[4632]: E0313 10:05:13.944325 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:17.944308564 +0000 UTC m=+91.966838797 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:05:13 crc kubenswrapper[4632]: E0313 10:05:13.944523 4632 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:05:13 crc kubenswrapper[4632]: E0313 10:05:13.944669 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs podName:ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad nodeName:}" failed. No retries permitted until 2026-03-13 10:05:17.944654704 +0000 UTC m=+91.967184927 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs") pod "network-metrics-daemon-z2vlz" (UID: "ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.959498 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.959824 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.959930 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.960081 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:13 crc kubenswrapper[4632]: I0313 10:05:13.960392 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:13Z","lastTransitionTime":"2026-03-13T10:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.043453 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.043460 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:14 crc kubenswrapper[4632]: E0313 10:05:14.043589 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.043474 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:14 crc kubenswrapper[4632]: E0313 10:05:14.043716 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:14 crc kubenswrapper[4632]: E0313 10:05:14.043761 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.044242 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:14 crc kubenswrapper[4632]: E0313 10:05:14.044417 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.064487 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.064544 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.064559 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.064580 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.064593 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:14Z","lastTransitionTime":"2026-03-13T10:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.167729 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.167785 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.167798 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.167816 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.167830 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:14Z","lastTransitionTime":"2026-03-13T10:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.271083 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.271137 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.271149 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.271168 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.271180 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:14Z","lastTransitionTime":"2026-03-13T10:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.374868 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.374927 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.374969 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.374988 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.375001 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:14Z","lastTransitionTime":"2026-03-13T10:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.477647 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.477733 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.477744 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.477765 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.477776 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:14Z","lastTransitionTime":"2026-03-13T10:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.580300 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.580621 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.580747 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.580868 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.581082 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:14Z","lastTransitionTime":"2026-03-13T10:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.684872 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.685320 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.685435 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.685549 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.685658 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:14Z","lastTransitionTime":"2026-03-13T10:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.789023 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.789068 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.789083 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.789103 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.789119 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:14Z","lastTransitionTime":"2026-03-13T10:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.891698 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.891726 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.891735 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.891749 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.891758 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:14Z","lastTransitionTime":"2026-03-13T10:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.994685 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.995030 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.995110 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.995193 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:14 crc kubenswrapper[4632]: I0313 10:05:14.995254 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:14Z","lastTransitionTime":"2026-03-13T10:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.097967 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.098245 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.098325 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.098428 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.098537 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:15Z","lastTransitionTime":"2026-03-13T10:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.201610 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.201650 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.201662 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.201682 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.201694 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:15Z","lastTransitionTime":"2026-03-13T10:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.303650 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.303698 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.303708 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.303720 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.303729 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:15Z","lastTransitionTime":"2026-03-13T10:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.406644 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.406682 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.406691 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.406707 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.406718 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:15Z","lastTransitionTime":"2026-03-13T10:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.509925 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.510003 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.510017 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.510033 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.510044 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:15Z","lastTransitionTime":"2026-03-13T10:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.613825 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.614190 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.614330 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.614435 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.614524 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:15Z","lastTransitionTime":"2026-03-13T10:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.718579 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.718630 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.718643 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.718665 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.718676 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:15Z","lastTransitionTime":"2026-03-13T10:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.835339 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.835404 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.835416 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.835438 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.835451 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:15Z","lastTransitionTime":"2026-03-13T10:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.938914 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.939167 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.939232 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.939296 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:15 crc kubenswrapper[4632]: I0313 10:05:15.939355 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:15Z","lastTransitionTime":"2026-03-13T10:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.041617 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.041656 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.041667 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.041684 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.041694 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:16Z","lastTransitionTime":"2026-03-13T10:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.044112 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.044128 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.044150 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:16 crc kubenswrapper[4632]: E0313 10:05:16.044202 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.044112 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:16 crc kubenswrapper[4632]: E0313 10:05:16.044288 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:16 crc kubenswrapper[4632]: E0313 10:05:16.044344 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:16 crc kubenswrapper[4632]: E0313 10:05:16.044390 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.267928 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.267997 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.268010 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.268028 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.268041 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:16Z","lastTransitionTime":"2026-03-13T10:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.301037 4632 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.373762 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.373826 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.373840 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.373858 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.373872 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:16Z","lastTransitionTime":"2026-03-13T10:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.476209 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.476244 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.476252 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.476265 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.476277 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:16Z","lastTransitionTime":"2026-03-13T10:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.578907 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.578968 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.578984 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.579001 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.579010 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:16Z","lastTransitionTime":"2026-03-13T10:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.682301 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.682351 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.682360 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.682375 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.682386 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:16Z","lastTransitionTime":"2026-03-13T10:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.784422 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.784458 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.784467 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.784479 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.784488 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:16Z","lastTransitionTime":"2026-03-13T10:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.887345 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.887375 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.887383 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.887397 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.887407 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:16Z","lastTransitionTime":"2026-03-13T10:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.989687 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.989723 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.989732 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.989745 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:16 crc kubenswrapper[4632]: I0313 10:05:16.989754 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:16Z","lastTransitionTime":"2026-03-13T10:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.056695 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.093175 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.093249 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.093265 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.093282 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.093325 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:17Z","lastTransitionTime":"2026-03-13T10:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.196712 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.196757 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.196767 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.196786 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.196799 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:17Z","lastTransitionTime":"2026-03-13T10:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.299857 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.300283 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.300395 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.300528 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.300614 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:17Z","lastTransitionTime":"2026-03-13T10:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.403218 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.403273 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.403283 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.403300 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.403309 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:17Z","lastTransitionTime":"2026-03-13T10:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.505594 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.505634 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.505643 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.505657 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.505666 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:17Z","lastTransitionTime":"2026-03-13T10:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.608932 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.608994 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.609006 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.609025 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.609038 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:17Z","lastTransitionTime":"2026-03-13T10:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.712093 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.712140 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.712151 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.712168 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.712180 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:17Z","lastTransitionTime":"2026-03-13T10:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.814579 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.814631 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.814640 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.814659 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.814668 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:17Z","lastTransitionTime":"2026-03-13T10:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
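The setters.go:603 entries above show the exact Ready condition object the kubelet writes into the Node status while the CNI config is missing. As a minimal sketch (a local struct stands in for the real corev1.NodeCondition type, which this program deliberately does not import), the following Go snippet rebuilds that payload and shows where each logged field comes from:

    // Rebuilds the condition={...} payload seen in the "Node became not ready"
    // log entries. The Condition struct here is a stand-in; kubelet uses
    // corev1.NodeCondition from k8s.io/api.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"time"
    )

    type Condition struct {
    	Type               string `json:"type"`
    	Status             string `json:"status"`
    	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
    	LastTransitionTime string `json:"lastTransitionTime"`
    	Reason             string `json:"reason"`
    	Message            string `json:"message"`
    }

    func main() {
    	now := time.Date(2026, 3, 13, 10, 5, 14, 0, time.UTC).Format(time.RFC3339)
    	c := Condition{
    		Type:               "Ready",
    		Status:             "False", // runtime network not ready, so the node is not ready
    		LastHeartbeatTime:  now,     // updated on every status sync, hence the repeating entries
    		LastTransitionTime: now,
    		Reason:             "KubeletNotReady",
    		Message: "container runtime network not ready: NetworkReady=false " +
    			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
    			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
    			"Has your network provider started?",
    	}
    	b, _ := json.Marshal(c)
    	fmt.Println(string(b)) // matches the condition={...} payload in the log
    }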
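The recurring message also names the root cause directly: there is no CNI configuration file in /etc/kubernetes/cni/net.d/, so the container runtime keeps reporting NetworkReady=false and the kubelet keeps the node NotReady. A minimal sketch of that readiness check, assuming the usual libcni convention of accepting .conf, .conflist, and .json files in the conf directory (a simplification of what CRI-O actually does):

    // Scans the CNI conf directory named in the log and reports whether the
    // network plugin would be considered configured. Simplified sketch, not
    // the CRI-O implementation.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	dir := "/etc/kubernetes/cni/net.d" // directory named in the log message
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Println("cannot read conf dir:", err)
    		return
    	}
    	found := false
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json": // extensions libcni conventionally accepts
    			fmt.Println("CNI config present:", e.Name())
    			found = true
    		}
    	}
    	if !found {
    		fmt.Println("no CNI configuration file in", dir, "- network plugin not ready")
    	}
    }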
Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.880113 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.880224 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.880283 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 13 10:05:17 crc kubenswrapper[4632]: E0313 10:05:17.880365 4632 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 13 10:05:17 crc kubenswrapper[4632]: E0313 10:05:17.880428 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:25.880411291 +0000 UTC m=+99.902941424 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 13 10:05:17 crc kubenswrapper[4632]: E0313 10:05:17.880490 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:05:25.880482494 +0000 UTC m=+99.903012627 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:05:17 crc kubenswrapper[4632]: E0313 10:05:17.880568 4632 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 13 10:05:17 crc kubenswrapper[4632]: E0313 10:05:17.880607 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:25.880599637 +0000 UTC m=+99.903129770 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
[... node-status cycle (NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready") repeats at 10:05:17.917 ...]
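The nestedpendingoperations.go entries above show how the kubelet paces these failing volume operations: each failure pushes the next attempt further out ("No retries permitted until ... durationBeforeRetry 8s"). A small sketch of that exponential backoff under assumed parameters (0.5 s initial delay, factor 2, a cap on the order of two minutes; the real constants live in kubelet's volume manager and may differ): with those numbers, an 8 s delay corresponds to the fifth consecutive failure of the same operation.

    // Prints the retry ladder behind "durationBeforeRetry". Assumed parameters,
    // not kubelet's actual constants.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 500 * time.Millisecond              // assumed initial delay
    	const maxDelay = 2*time.Minute + 2*time.Second // assumed cap, order of magnitude
    	for attempt := 1; attempt <= 8; attempt++ {
    		fmt.Printf("failure %d -> next retry in %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    	// failure 5 prints "8s", matching the durationBeforeRetry seen above.
    }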
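The repeated "object ... not registered" failures come from the kubelet's cache-based Secret/ConfigMap managers (pkg/kubelet/util/manager in kubernetes/kubernetes): a volume plugin may only fetch an object after some registered pod reference has enabled it, and right after a kubelet restart those registrations have not caught up yet, so every lookup fails and the mount is requeued with the backoff shown above. A toy Go sketch of that register-before-get contract (hypothetical types, illustrative only):

    // Toy model of kubelet's register-before-get object managers.
    package main

    import "fmt"

    type objectManager struct {
    	refs map[string]int    // "namespace/name" -> pod reference count
    	data map[string]string // object payloads, once watchable
    }

    func newObjectManager() *objectManager {
    	return &objectManager{refs: map[string]int{}, data: map[string]string{}}
    }

    // RegisterPod records that a pod references the object, enabling lookups.
    func (m *objectManager) RegisterPod(ns, name string) { m.refs[ns+"/"+name]++ }

    // Get fails fast for unknown references, mirroring the logged error text.
    func (m *objectManager) Get(ns, name string) (string, error) {
    	key := ns + "/" + name
    	if m.refs[key] == 0 {
    		return "", fmt.Errorf("object %q/%q not registered", ns, name)
    	}
    	return m.data[key], nil
    }

    func main() {
    	m := newObjectManager()
    	_, err := m.Get("openshift-network-console", "networking-console-plugin")
    	fmt.Println(err) // object "openshift-network-console"/"networking-console-plugin" not registered
    	m.RegisterPod("openshift-network-console", "networking-console-plugin")
    	m.data["openshift-network-console/networking-console-plugin"] = "nginx.conf: ..."
    	v, _ := m.Get("openshift-network-console", "networking-console-plugin")
    	fmt.Println("after registration:", v)
    }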
Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.981665 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.981711 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs\") pod \"network-metrics-daemon-z2vlz\" (UID: \"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\") " pod="openshift-multus/network-metrics-daemon-z2vlz"
Mar 13 10:05:17 crc kubenswrapper[4632]: I0313 10:05:17.981730 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 13 10:05:17 crc kubenswrapper[4632]: E0313 10:05:17.982023 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 10:05:17 crc kubenswrapper[4632]: E0313 10:05:17.982090 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 10:05:17 crc kubenswrapper[4632]: E0313 10:05:17.982107 4632 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 10:05:17 crc kubenswrapper[4632]: E0313 10:05:17.982184 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:25.98216139 +0000 UTC m=+100.004691593 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 10:05:17 crc kubenswrapper[4632]: E0313 10:05:17.982202 4632 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 13 10:05:17 crc kubenswrapper[4632]: E0313 10:05:17.982300 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs podName:ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad nodeName:}" failed. No retries permitted until 2026-03-13 10:05:25.982282533 +0000 UTC m=+100.004812666 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs") pod "network-metrics-daemon-z2vlz" (UID: "ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 13 10:05:17 crc kubenswrapper[4632]: E0313 10:05:17.982806 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 10:05:17 crc kubenswrapper[4632]: E0313 10:05:17.982834 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 10:05:17 crc kubenswrapper[4632]: E0313 10:05:17.982849 4632 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 10:05:17 crc kubenswrapper[4632]: E0313 10:05:17.982890 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:25.982879331 +0000 UTC m=+100.005409464 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
[... node-status cycle repeats at 10:05:18.020 ...]
Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.044045 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.044092 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.044068 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz"
Mar 13 10:05:18 crc kubenswrapper[4632]: E0313 10:05:18.044175 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.044045 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 13 10:05:18 crc kubenswrapper[4632]: E0313 10:05:18.044302 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 13 10:05:18 crc kubenswrapper[4632]: E0313 10:05:18.044370 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 13 10:05:18 crc kubenswrapper[4632]: E0313 10:05:18.044431 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.056477 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.065039 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.073550 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.081043 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.090584 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.099747 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.106726 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.112353 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.124318 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.124358 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.124369 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.124397 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.124410 4632 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:18Z","lastTransitionTime":"2026-03-13T10:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.126817 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.134819 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.150792 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.161604 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.174050 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.188418 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.198968 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.209883 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.226783 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.226834 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.226854 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.226878 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.226895 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:18Z","lastTransitionTime":"2026-03-13T10:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.326815 4632 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.329829 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.329878 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.329893 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.329913 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.329925 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:18Z","lastTransitionTime":"2026-03-13T10:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.387653 4632 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.433265 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.433309 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.433321 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.433338 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.433358 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:18Z","lastTransitionTime":"2026-03-13T10:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.536099 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.536150 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.536167 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.536185 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.536200 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:18Z","lastTransitionTime":"2026-03-13T10:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.639814 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.639855 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.639865 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.639885 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.639895 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:18Z","lastTransitionTime":"2026-03-13T10:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.742623 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.742684 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.742703 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.742724 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.742735 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:18Z","lastTransitionTime":"2026-03-13T10:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.851127 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.851164 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.851176 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.851193 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.851205 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:18Z","lastTransitionTime":"2026-03-13T10:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.954314 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.954361 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.954372 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.954388 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:18 crc kubenswrapper[4632]: I0313 10:05:18.954400 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:18Z","lastTransitionTime":"2026-03-13T10:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.056907 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.056974 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.056987 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.057003 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.057018 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:19Z","lastTransitionTime":"2026-03-13T10:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.159117 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.159149 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.159161 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.159177 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.159188 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:19Z","lastTransitionTime":"2026-03-13T10:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.262350 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.262629 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.262759 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.262848 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.262966 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:19Z","lastTransitionTime":"2026-03-13T10:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.365610 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.365845 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.365964 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.366091 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.366160 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:19Z","lastTransitionTime":"2026-03-13T10:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.468309 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.468982 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.469062 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.469127 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.469264 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:19Z","lastTransitionTime":"2026-03-13T10:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.571576 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.571615 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.571626 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.571641 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.571653 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:19Z","lastTransitionTime":"2026-03-13T10:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.674524 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.674815 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.675063 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.675239 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.675312 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:19Z","lastTransitionTime":"2026-03-13T10:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.777656 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.777708 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.777722 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.777745 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.777763 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:19Z","lastTransitionTime":"2026-03-13T10:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.881075 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.881458 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.881553 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.881643 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.881719 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:19Z","lastTransitionTime":"2026-03-13T10:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.984971 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.985017 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.985028 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.985043 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:19 crc kubenswrapper[4632]: I0313 10:05:19.985054 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:19Z","lastTransitionTime":"2026-03-13T10:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.043995 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:20 crc kubenswrapper[4632]: E0313 10:05:20.044180 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.044200 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.044464 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:20 crc kubenswrapper[4632]: E0313 10:05:20.044493 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.044634 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:20 crc kubenswrapper[4632]: E0313 10:05:20.044695 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:20 crc kubenswrapper[4632]: E0313 10:05:20.044884 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.087647 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.087694 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.087715 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.087731 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.087741 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:20Z","lastTransitionTime":"2026-03-13T10:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.191521 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.191618 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.191637 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.191661 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.191678 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:20Z","lastTransitionTime":"2026-03-13T10:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.294328 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.294371 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.294381 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.294397 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.294407 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:20Z","lastTransitionTime":"2026-03-13T10:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.343026 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.343368 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.343461 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.343573 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.343667 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:20Z","lastTransitionTime":"2026-03-13T10:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:20 crc kubenswrapper[4632]: E0313 10:05:20.355358 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.360552 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.360816 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.360882 4632 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.360963 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.361037 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:20Z","lastTransitionTime":"2026-03-13T10:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:20 crc kubenswrapper[4632]: E0313 10:05:20.374795 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.379321 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.379381 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.379586 4632 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.379613 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.379631 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:20Z","lastTransitionTime":"2026-03-13T10:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:20 crc kubenswrapper[4632]: E0313 10:05:20.392073 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.396647 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.396709 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.396719 4632 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.396741 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.396755 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:20Z","lastTransitionTime":"2026-03-13T10:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:20 crc kubenswrapper[4632]: E0313 10:05:20.409181 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.415479 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.415594 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.415613 4632 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.415641 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.415678 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:20Z","lastTransitionTime":"2026-03-13T10:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:20 crc kubenswrapper[4632]: E0313 10:05:20.429728 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:20 crc kubenswrapper[4632]: E0313 10:05:20.429867 4632 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.432459 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.432520 4632 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.432533 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.432550 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.432561 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:20Z","lastTransitionTime":"2026-03-13T10:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.536282 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.536331 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.536341 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.536360 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.536372 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:20Z","lastTransitionTime":"2026-03-13T10:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.639620 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.639996 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.640299 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.640462 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.640601 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:20Z","lastTransitionTime":"2026-03-13T10:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.742864 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.743167 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.743406 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.743604 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.743744 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:20Z","lastTransitionTime":"2026-03-13T10:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.846213 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.846540 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.846784 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.846990 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.847208 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:20Z","lastTransitionTime":"2026-03-13T10:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.950825 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.950887 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.950900 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.950921 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:20 crc kubenswrapper[4632]: I0313 10:05:20.950968 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:20Z","lastTransitionTime":"2026-03-13T10:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.054423 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.054489 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.054500 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.054518 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.054528 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:21Z","lastTransitionTime":"2026-03-13T10:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.157196 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.157511 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.157524 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.157543 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.157555 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:21Z","lastTransitionTime":"2026-03-13T10:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.260893 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.260972 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.261017 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.261037 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.261048 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:21Z","lastTransitionTime":"2026-03-13T10:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.363359 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.363397 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.363408 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.363432 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.363447 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:21Z","lastTransitionTime":"2026-03-13T10:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.466766 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.466837 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.466851 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.466871 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.466885 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:21Z","lastTransitionTime":"2026-03-13T10:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.569642 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.569704 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.569716 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.569740 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.569751 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:21Z","lastTransitionTime":"2026-03-13T10:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.571259 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9"} Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.571320 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7"} Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.589434 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.608010 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.622058 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.638106 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.659215 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.671547 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f51
7b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.673737 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.673810 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.673840 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.673861 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.673873 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:21Z","lastTransitionTime":"2026-03-13T10:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.744702 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.758056 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.771659 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.776510 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.776571 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.776586 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.776605 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.776617 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:21Z","lastTransitionTime":"2026-03-13T10:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.786222 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.796903 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.808166 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.832131 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.846920 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.865075 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.874100 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.879502 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.879566 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.879580 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.879600 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.879613 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:21Z","lastTransitionTime":"2026-03-13T10:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.982639 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.982687 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.982698 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.982715 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:21 crc kubenswrapper[4632]: I0313 10:05:21.982727 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:21Z","lastTransitionTime":"2026-03-13T10:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.043452 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.043592 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.043663 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:22 crc kubenswrapper[4632]: E0313 10:05:22.043760 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.043595 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:22 crc kubenswrapper[4632]: E0313 10:05:22.043623 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:22 crc kubenswrapper[4632]: E0313 10:05:22.043888 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:22 crc kubenswrapper[4632]: E0313 10:05:22.043950 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.044711 4632 scope.go:117] "RemoveContainer" containerID="9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae" Mar 13 10:05:22 crc kubenswrapper[4632]: E0313 10:05:22.044995 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.085474 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.085513 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.085525 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.085541 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.085553 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:22Z","lastTransitionTime":"2026-03-13T10:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.189262 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.189327 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.189341 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.189361 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.189374 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:22Z","lastTransitionTime":"2026-03-13T10:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.292046 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.292081 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.292090 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.292103 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.292123 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:22Z","lastTransitionTime":"2026-03-13T10:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.395277 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.395334 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.395352 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.395632 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.395668 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:22Z","lastTransitionTime":"2026-03-13T10:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
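Each "Node became not ready" entry above carries the node's Ready condition as inline JSON after condition=. A small sketch that pulls that payload out of such a line and reads the reason and message (the line below is abbreviated from this log; the regex and helper are illustrative, not part of the kubelet):

    import json
    import re

    # One "Node became not ready" entry from this log, with the
    # condition message shortened for readability.
    line = ('I0313 10:05:22.085553 4632 setters.go:603] "Node became not ready" '
            'node="crc" condition={"type":"Ready","status":"False",'
            '"lastHeartbeatTime":"2026-03-13T10:05:22Z",'
            '"lastTransitionTime":"2026-03-13T10:05:22Z",'
            '"reason":"KubeletNotReady","message":"container runtime network '
            'not ready: NetworkReady=false reason:NetworkPluginNotReady ..."}')

    match = re.search(r'condition=(\{.*\})', line)
    if match:
        cond = json.loads(match.group(1))
        print(cond["reason"], "-", cond["message"])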
Has your network provider started?"} Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.499461 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.499503 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.499512 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.499531 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.499542 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:22Z","lastTransitionTime":"2026-03-13T10:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.603809 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.603873 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.603885 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.603908 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.603922 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:22Z","lastTransitionTime":"2026-03-13T10:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.707293 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.707342 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.707361 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.707379 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.707390 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:22Z","lastTransitionTime":"2026-03-13T10:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.810540 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.810599 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.810610 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.810629 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.810640 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:22Z","lastTransitionTime":"2026-03-13T10:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.914819 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.914898 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.914913 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.914959 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:22 crc kubenswrapper[4632]: I0313 10:05:22.914979 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:22Z","lastTransitionTime":"2026-03-13T10:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.018370 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.018420 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.018436 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.018462 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.018525 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:23Z","lastTransitionTime":"2026-03-13T10:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.121988 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.122032 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.122044 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.122062 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.122073 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:23Z","lastTransitionTime":"2026-03-13T10:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.284331 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.284398 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.284424 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.284450 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.284465 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:23Z","lastTransitionTime":"2026-03-13T10:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.401283 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.401818 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.401835 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.401861 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.401878 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:23Z","lastTransitionTime":"2026-03-13T10:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
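The recurring NetworkPluginNotReady message reports that /etc/kubernetes/cni/net.d/ holds no CNI configuration file. A sketch of the equivalent presence check, assuming the usual CNI config extensions (.conf, .conflist, .json):

    import os

    # Directory named in the kubelet message above; the accepted
    # extensions are an assumption based on common CNI conventions.
    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"
    CNI_EXTENSIONS = (".conf", ".conflist", ".json")

    def cni_configs(path: str = CNI_CONF_DIR) -> list[str]:
        """Return CNI config files found in the directory, if any."""
        try:
            return [f for f in sorted(os.listdir(path))
                    if f.endswith(CNI_EXTENSIONS)]
        except FileNotFoundError:
            return []

    if not cni_configs():
        print("no CNI configuration file found; network plugin not ready")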
Has your network provider started?"} Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.536135 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.536187 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.536199 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.536223 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.536237 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:23Z","lastTransitionTime":"2026-03-13T10:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.580804 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3"} Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.658484 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.658529 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.658541 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.658558 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.658570 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:23Z","lastTransitionTime":"2026-03-13T10:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.760761 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.761207 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.761314 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.761420 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.761519 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:23Z","lastTransitionTime":"2026-03-13T10:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.864761 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.865231 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.865349 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.865473 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.865563 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:23Z","lastTransitionTime":"2026-03-13T10:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.969584 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.969641 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.969654 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.969671 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:23 crc kubenswrapper[4632]: I0313 10:05:23.969684 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:23Z","lastTransitionTime":"2026-03-13T10:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.043407 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.043476 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.043505 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.043601 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:24 crc kubenswrapper[4632]: E0313 10:05:24.043928 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:24 crc kubenswrapper[4632]: E0313 10:05:24.045039 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:24 crc kubenswrapper[4632]: E0313 10:05:24.045137 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:24 crc kubenswrapper[4632]: E0313 10:05:24.045413 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.073859 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.073914 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.074126 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.074440 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.074455 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:24Z","lastTransitionTime":"2026-03-13T10:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.181581 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.181614 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.181624 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.181639 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.181649 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:24Z","lastTransitionTime":"2026-03-13T10:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.284242 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.284738 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.284749 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.284765 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.284780 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:24Z","lastTransitionTime":"2026-03-13T10:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.387516 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.387556 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.387567 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.387583 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.387593 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:24Z","lastTransitionTime":"2026-03-13T10:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.491649 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.491693 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.491704 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.491720 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.491731 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:24Z","lastTransitionTime":"2026-03-13T10:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
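From 10:05:21 onward the same five node events (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, plus the NotReady condition) repeat roughly every 100ms. When reading such a stretch, collapsing consecutive duplicates makes the actual transitions visible; a sketch of that log-analysis helper, matching the event="..." field format seen here:

    import re
    from itertools import groupby

    # Collapse consecutive repeats of the same kubelet event, keeping a
    # count, so a stretch like the NodeNotReady loop above reads as one
    # line per distinct event.
    EVENT_RE = re.compile(r'event="([^"]+)"')

    def collapse(lines: list[str]) -> list[str]:
        events = [m.group(1) for l in lines if (m := EVENT_RE.search(l))]
        return [f"{name} x{len(list(group))}"
                for name, group in groupby(events)]

    sample = [
        '"Recording event message for node" node="crc" event="NodeHasSufficientMemory"',
        '"Recording event message for node" node="crc" event="NodeHasSufficientMemory"',
        '"Recording event message for node" node="crc" event="NodeNotReady"',
    ]
    print(collapse(sample))  # ['NodeHasSufficientMemory x2', 'NodeNotReady x1']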
Has your network provider started?"} Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.594843 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.594899 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.594912 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.594930 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.595101 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:24Z","lastTransitionTime":"2026-03-13T10:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.698533 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.698592 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.698602 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.698615 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.698627 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:24Z","lastTransitionTime":"2026-03-13T10:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.801790 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.801829 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.801838 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.801852 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.801865 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:24Z","lastTransitionTime":"2026-03-13T10:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.904438 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.904487 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.904499 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.904518 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:24 crc kubenswrapper[4632]: I0313 10:05:24.904530 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:24Z","lastTransitionTime":"2026-03-13T10:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.006886 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.006925 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.006949 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.006965 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.006976 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:25Z","lastTransitionTime":"2026-03-13T10:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.111104 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.111278 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.111372 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.111473 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.111620 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:25Z","lastTransitionTime":"2026-03-13T10:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.214731 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.214791 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.214810 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.214839 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.214856 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:25Z","lastTransitionTime":"2026-03-13T10:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.318276 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.318305 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.318316 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.318333 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.318346 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:25Z","lastTransitionTime":"2026-03-13T10:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.423919 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.424381 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.424394 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.424412 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.424424 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:25Z","lastTransitionTime":"2026-03-13T10:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
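Just below, the kubelet starts emitting SyncLoop (PLEG) events as containers come up (ContainerStarted) or exit (ContainerDied). A sketch that extracts pod, event type, and container ID from those lines, with the regex derived from the field layout shown in this log:

    import re

    # Field layout taken from the SyncLoop (PLEG) lines below: a pod="..."
    # field followed by an event={"ID":...,"Type":...,"Data":...} payload.
    PLEG_RE = re.compile(
        r'pod="(?P<pod>[^"]+)" event=\{"ID":"(?P<uid>[^"]+)",'
        r'"Type":"(?P<type>[^"]+)","Data":"(?P<data>[^"]+)"\}')

    line = ('I0313 10:05:25.600522 4632 kubelet.go:2453] "SyncLoop (PLEG): '
            'event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" '
            'event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722",'
            '"Type":"ContainerDied",'
            '"Data":"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50"}')

    if (m := PLEG_RE.search(line)):
        print(m.group("pod"), m.group("type"), m.group("data")[:12])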
Has your network provider started?"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.526639 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.526681 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.526693 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.526708 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.526716 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:25Z","lastTransitionTime":"2026-03-13T10:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.593639 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zwlc8" event={"ID":"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7","Type":"ContainerStarted","Data":"a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.596249 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gqf22" event={"ID":"4ec8e301-3037-4de0-94d2-32c49709660e","Type":"ContainerStarted","Data":"0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.598955 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.600469 4632 generic.go:334] "Generic (PLEG): container finished" podID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerID="fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50" exitCode=0 Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.600522 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerDied","Data":"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.602931 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-n55jt" event={"ID":"b29b9ad7-8cc9-434f-8731-a86265c383fd","Type":"ContainerStarted","Data":"6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.607142 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" event={"ID":"b054ca08-1d09-4eca-a608-eb5b9323959a","Type":"ContainerStarted","Data":"a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.609438 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.611594 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" event={"ID":"b0c542d5-8c38-4243-8af7-cfc0d8e22773","Type":"ContainerStarted","Data":"a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.611627 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" event={"ID":"b0c542d5-8c38-4243-8af7-cfc0d8e22773","Type":"ContainerStarted","Data":"55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.618846 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.629690 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.629724 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.629733 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.629746 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.629755 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:25Z","lastTransitionTime":"2026-03-13T10:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
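The webhook failures above have shifted from "connection refused" to a TLS error: the serving certificate behind pod.network-node-identity.openshift.io expired 2025-08-24T17:21:41Z, while the node clock reads 2026-03-13. A sketch of the same validity comparison, assuming the third-party cryptography package (recent versions, which expose not_valid_after_utc) and a PEM file on disk; the path is hypothetical, loosely based on the /etc/webhook-cert/ mount shown earlier:

    from datetime import datetime, timezone
    # Assumes the third-party "cryptography" package is installed.
    from cryptography import x509

    # Hypothetical path; the log only shows the /etc/webhook-cert/ mount,
    # not the certificate filename.
    PEM_PATH = "/etc/webhook-cert/tls.crt"

    with open(PEM_PATH, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    now = datetime.now(timezone.utc)
    # Mirrors the "current time ... is after ..." comparison in the
    # x509 error above.
    if now > cert.not_valid_after_utc:
        print(f"certificate expired: current time {now:%Y-%m-%dT%H:%M:%SZ} "
              f"is after {cert.not_valid_after_utc:%Y-%m-%dT%H:%M:%SZ}")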
Has your network provider started?"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.645482 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.667082 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.682396 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.695814 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.708330 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.722185 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.733317 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.733380 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.733394 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.733414 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.733426 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:25Z","lastTransitionTime":"2026-03-13T10:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.739563 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.773916 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.835743 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.835781 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.835791 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.835805 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.835814 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:25Z","lastTransitionTime":"2026-03-13T10:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.839198 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.866533 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.879564 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.894813 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.919231 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.943853 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.944969 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.945119 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.945150 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.945182 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.945195 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:25Z","lastTransitionTime":"2026-03-13T10:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.961349 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.978723 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:05:25 crc kubenswrapper[4632]: E0313 10:05:25.978864 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-13 10:05:41.978839785 +0000 UTC m=+116.001369918 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.979138 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.979292 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:25 crc kubenswrapper[4632]: E0313 10:05:25.979336 4632 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 10:05:25 crc kubenswrapper[4632]: E0313 10:05:25.979555 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:41.979542896 +0000 UTC m=+116.002073029 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 10:05:25 crc kubenswrapper[4632]: E0313 10:05:25.979373 4632 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 10:05:25 crc kubenswrapper[4632]: E0313 10:05:25.979779 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:41.979769024 +0000 UTC m=+116.002299157 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 10:05:25 crc kubenswrapper[4632]: I0313 10:05:25.981748 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:25Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.016821 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.041266 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\
"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.043310 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.043452 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.043535 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.043466 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:26 crc kubenswrapper[4632]: E0313 10:05:26.043458 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:26 crc kubenswrapper[4632]: E0313 10:05:26.043752 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:26 crc kubenswrapper[4632]: E0313 10:05:26.043627 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:26 crc kubenswrapper[4632]: E0313 10:05:26.044993 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.047428 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.047471 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.047481 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.047497 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.047507 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:26Z","lastTransitionTime":"2026-03-13T10:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.068578 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.080578 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.080713 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs\") pod \"network-metrics-daemon-z2vlz\" (UID: \"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\") " pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.080771 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:26 crc kubenswrapper[4632]: E0313 10:05:26.081186 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:05:26 crc kubenswrapper[4632]: E0313 10:05:26.081231 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 10:05:26 crc kubenswrapper[4632]: E0313 10:05:26.081246 4632 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:05:26 crc kubenswrapper[4632]: E0313 10:05:26.081244 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:05:26 crc kubenswrapper[4632]: E0313 10:05:26.081294 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 10:05:26 crc kubenswrapper[4632]: E0313 10:05:26.081310 4632 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:05:26 crc kubenswrapper[4632]: E0313 10:05:26.081331 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-03-13 10:05:42.081312065 +0000 UTC m=+116.103842198 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:05:26 crc kubenswrapper[4632]: E0313 10:05:26.081399 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-13 10:05:42.081372367 +0000 UTC m=+116.103902500 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:05:26 crc kubenswrapper[4632]: E0313 10:05:26.081481 4632 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:05:26 crc kubenswrapper[4632]: E0313 10:05:26.081521 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs podName:ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad nodeName:}" failed. No retries permitted until 2026-03-13 10:05:42.081510841 +0000 UTC m=+116.104041174 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs") pod "network-metrics-daemon-z2vlz" (UID: "ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.088806 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"
,\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.107787 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.125753 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.137410 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.150322 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.150350 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.150358 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.150372 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.150380 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:26Z","lastTransitionTime":"2026-03-13T10:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.176100 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-
lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.194877 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.210300 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.220250 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.249413 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.252217 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.252239 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.252248 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.252262 4632 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.252290 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:26Z","lastTransitionTime":"2026-03-13T10:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.271607 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z 
is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.297323 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.343792 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.354843 4632 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.354870 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.354878 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.354906 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.354915 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:26Z","lastTransitionTime":"2026-03-13T10:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.459399 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.459445 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.459461 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.459479 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.459491 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:26Z","lastTransitionTime":"2026-03-13T10:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.562492 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.562534 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.562545 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.562561 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.562574 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:26Z","lastTransitionTime":"2026-03-13T10:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.618848 4632 generic.go:334] "Generic (PLEG): container finished" podID="b054ca08-1d09-4eca-a608-eb5b9323959a" containerID="a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35" exitCode=0 Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.618956 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" event={"ID":"b054ca08-1d09-4eca-a608-eb5b9323959a","Type":"ContainerDied","Data":"a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35"} Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.637683 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerStarted","Data":"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da"} Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.637730 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerStarted","Data":"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5"} Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.637742 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerStarted","Data":"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b"} Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.637754 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerStarted","Data":"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719"} Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.637765 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerStarted","Data":"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705"} Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.662467 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.664409 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.664443 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.664458 4632 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.664475 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.664487 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:26Z","lastTransitionTime":"2026-03-13T10:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.680696 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.700593 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.731420 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.752175 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.767991 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.768019 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.768027 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.768057 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.768067 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:26Z","lastTransitionTime":"2026-03-13T10:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.775015 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z 
is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.802825 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.831997 4632 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.851260 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.864817 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.971047 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.971273 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.971363 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.971447 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.971524 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:26Z","lastTransitionTime":"2026-03-13T10:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:26 crc kubenswrapper[4632]: I0313 10:05:26.984215 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.005965 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:27Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.027488 4632 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:27Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.047337 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:27Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.069880 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:27Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.093503 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:27Z is after 2025-08-24T17:21:41Z" Mar 13 
10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.106565 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.106604 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.106618 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.106649 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.106663 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:27Z","lastTransitionTime":"2026-03-13T10:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.210500 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.210536 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.210547 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.210563 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.210572 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:27Z","lastTransitionTime":"2026-03-13T10:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.313825 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.313882 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.313892 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.313907 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.313918 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:27Z","lastTransitionTime":"2026-03-13T10:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.417449 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.417500 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.417510 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.417529 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.417538 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:27Z","lastTransitionTime":"2026-03-13T10:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.519529 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.519578 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.519588 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.519605 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.519614 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:27Z","lastTransitionTime":"2026-03-13T10:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.621529 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.621557 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.621565 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.621578 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.621586 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:27Z","lastTransitionTime":"2026-03-13T10:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.687313 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerStarted","Data":"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f"} Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.689071 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" event={"ID":"b054ca08-1d09-4eca-a608-eb5b9323959a","Type":"ContainerStarted","Data":"c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d"} Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.691087 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c"} Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.782985 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.783021 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.783031 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.783045 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.783054 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:27Z","lastTransitionTime":"2026-03-13T10:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.787406 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:27Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.810859 4632 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:27Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.842972 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:27Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.871363 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:27Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.885544 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.885596 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.885611 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.885631 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.885639 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:27Z","lastTransitionTime":"2026-03-13T10:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.897038 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:27Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.938849 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:27Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.961505 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-13T10:05:27Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.977298 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:27Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.988491 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.988541 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.988553 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.988570 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.988584 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:27Z","lastTransitionTime":"2026-03-13T10:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:27 crc kubenswrapper[4632]: I0313 10:05:27.996022 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:27Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.015688 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\
\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.039278 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.051799 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.051816 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.051909 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.051930 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:28 crc kubenswrapper[4632]: E0313 10:05:28.052055 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:28 crc kubenswrapper[4632]: E0313 10:05:28.052466 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:28 crc kubenswrapper[4632]: E0313 10:05:28.052527 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:28 crc kubenswrapper[4632]: E0313 10:05:28.052592 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.070107 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.085752 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.091568 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.091618 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.091634 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.091653 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 
13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.091665 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:28Z","lastTransitionTime":"2026-03-13T10:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.118630 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.138428 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.153147 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.181228 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.195221 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.195325 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.195340 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.195359 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.195402 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:28Z","lastTransitionTime":"2026-03-13T10:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.195440 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.227570 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.263850 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.295305 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.297567 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.297610 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.297622 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.297640 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 
13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.297653 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:28Z","lastTransitionTime":"2026-03-13T10:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.333874 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z"
Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.352472 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z"
Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.374063 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.383925 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.394930 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.401041 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.401103 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.401176 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.401207 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.401229 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:28Z","lastTransitionTime":"2026-03-13T10:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.412201 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.424310 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.436616 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.462695 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z"
Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.476099 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z"
Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.485026 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.494228 4632 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.575185 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:28 crc kubenswrapper[4632]: 
I0313 10:05:28.575225 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.575237 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.575254 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.575265 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:28Z","lastTransitionTime":"2026-03-13T10:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.588158 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.600646 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.627140 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.638031 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.654966 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.666840 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.677902 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.677876 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.678007 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.678022 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.678038 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.678047 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:28Z","lastTransitionTime":"2026-03-13T10:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.687818 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.701221 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.715722 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.734817 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.749919 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.767702 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.780812 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.780849 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.780858 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.780876 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.780886 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:28Z","lastTransitionTime":"2026-03-13T10:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.785479 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.800246 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.888456 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.888504 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.888516 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.888554 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.888580 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:28Z","lastTransitionTime":"2026-03-13T10:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.991388 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.991410 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.991417 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.991431 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:28 crc kubenswrapper[4632]: I0313 10:05:28.991443 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:28Z","lastTransitionTime":"2026-03-13T10:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.096707 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.096841 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.097104 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.097373 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.097635 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:29Z","lastTransitionTime":"2026-03-13T10:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.200578 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.200613 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.200622 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.200636 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.200644 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:29Z","lastTransitionTime":"2026-03-13T10:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.303231 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.303270 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.303284 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.303300 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.303310 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:29Z","lastTransitionTime":"2026-03-13T10:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.406389 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.406442 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.406458 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.406479 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.406495 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:29Z","lastTransitionTime":"2026-03-13T10:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[the status sequence repeats again at 10:05:29.511341 and 10:05:29.615767]
Mar 13 10:05:29 crc kubenswrapper[4632]: I0313 10:05:29.704613 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerStarted","Data":"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d"}
[the status sequence repeats again at 10:05:29.720062 and 10:05:29.823078]
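Every one of these NotReady conditions names the same cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal sketch of how one might confirm that on the node (the path is quoted verbatim from the messages; running this on the CRC VM with access to that directory is an assumption):

```python
import os

# Directory the kubelet says it is watching for a CNI config
# (quoted verbatim from the log messages above).
cni_dir = "/etc/kubernetes/cni/net.d/"

try:
    confs = sorted(os.listdir(cni_dir))
except FileNotFoundError:
    confs = []

# While ovnkube-node is still starting, this is expected to be empty,
# which is exactly what the KubeletNotReady condition reports.
print(confs if confs else "no CNI configuration file -- matches the log")
```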
[the status sequence repeats again at 10:05:29.926093 and 10:05:30.029058]
Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.043501 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.043622 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.043642 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz"
Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.043550 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 13 10:05:30 crc kubenswrapper[4632]: E0313 10:05:30.043763 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 13 10:05:30 crc kubenswrapper[4632]: E0313 10:05:30.044042 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 13 10:05:30 crc kubenswrapper[4632]: E0313 10:05:30.044179 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad"
Mar 13 10:05:30 crc kubenswrapper[4632]: E0313 10:05:30.044293 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[the status sequence repeats again at 10:05:30.132476]
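From this point on, the kubelet's pod and node status patches start failing (entries below) because the network-node-identity webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-03-13. A minimal sketch for reading that certificate's validity window from the node; the endpoint and dates come from the log, while the use of the third-party cryptography package is an assumption:

```python
import ssl
from cryptography import x509

# Webhook endpoint taken from the "failed calling webhook" errors below.
HOST, PORT = "127.0.0.1", 9743

# get_server_certificate() fetches the PEM without verifying the chain,
# which is what we want here -- verification is precisely what is failing.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

# The log reports: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z.
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)
```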
[the status sequence repeats again at 10:05:30.235530, 10:05:30.338422, 10:05:30.442151, and 10:05:30.556929]
Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.659716 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.659800 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.659814 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.659833 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.659844 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:30Z","lastTransitionTime":"2026-03-13T10:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.710185 4632 generic.go:334] "Generic (PLEG): container finished" podID="b054ca08-1d09-4eca-a608-eb5b9323959a" containerID="c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d" exitCode=0 Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.710218 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" event={"ID":"b054ca08-1d09-4eca-a608-eb5b9323959a","Type":"ContainerDied","Data":"c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d"} Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.726237 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.740174 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.755191 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.762112 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.762343 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.762483 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.762596 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.762709 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:30Z","lastTransitionTime":"2026-03-13T10:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.769555 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.781331 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.781367 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.781378 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.781393 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.781403 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:30Z","lastTransitionTime":"2026-03-13T10:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.783438 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: E0313 10:05:30.793828 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.794372 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.798650 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.798871 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.799008 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.799122 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.799199 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:30Z","lastTransitionTime":"2026-03-13T10:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.807620 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: E0313 10:05:30.812445 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.818761 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.819069 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.819113 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.819128 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.819144 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.819178 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:30Z","lastTransitionTime":"2026-03-13T10:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.831858 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: E0313 10:05:30.834860 4632 
kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redha
t/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c98711
7ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba71
7\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.838562 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.838594 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.838605 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.838623 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.838633 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:30Z","lastTransitionTime":"2026-03-13T10:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.848428 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: E0313 10:05:30.851891 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e
8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.856394 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.856548 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.856645 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.856735 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.856866 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:30Z","lastTransitionTime":"2026-03-13T10:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.859831 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.869623 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: E0313 10:05:30.869695 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: E0313 10:05:30.869985 4632 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.875076 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.875109 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.875119 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.875135 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.875146 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:30Z","lastTransitionTime":"2026-03-13T10:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.879099 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.897879 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPa
th\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"
192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.910395 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.922332 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:30Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.978064 4632 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.978101 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.978146 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.978163 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:30 crc kubenswrapper[4632]: I0313 10:05:30.978173 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:30Z","lastTransitionTime":"2026-03-13T10:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.081120 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.081385 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.081466 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.081600 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.081719 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:31Z","lastTransitionTime":"2026-03-13T10:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.184345 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.184667 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.184850 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.184955 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.185045 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:31Z","lastTransitionTime":"2026-03-13T10:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.287076 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.287434 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.287505 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.287610 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.287683 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:31Z","lastTransitionTime":"2026-03-13T10:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.390210 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.390264 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.390280 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.390300 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.390313 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:31Z","lastTransitionTime":"2026-03-13T10:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.493185 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.493232 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.493241 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.493257 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.493266 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:31Z","lastTransitionTime":"2026-03-13T10:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.595654 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.595693 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.595701 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.595715 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.595724 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:31Z","lastTransitionTime":"2026-03-13T10:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.697829 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.697882 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.697893 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.697907 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.697917 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:31Z","lastTransitionTime":"2026-03-13T10:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.716984 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" event={"ID":"b054ca08-1d09-4eca-a608-eb5b9323959a","Type":"ContainerStarted","Data":"b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e"} Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.800224 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.800263 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.800279 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.800299 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.800309 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:31Z","lastTransitionTime":"2026-03-13T10:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.903042 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.903096 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.903109 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.903127 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:31 crc kubenswrapper[4632]: I0313 10:05:31.903141 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:31Z","lastTransitionTime":"2026-03-13T10:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.004789 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.004822 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.004830 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.004844 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.004852 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:32Z","lastTransitionTime":"2026-03-13T10:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.044185 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.044225 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.044232 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:32 crc kubenswrapper[4632]: E0313 10:05:32.044350 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.044370 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:32 crc kubenswrapper[4632]: E0313 10:05:32.044480 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:32 crc kubenswrapper[4632]: E0313 10:05:32.044546 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:32 crc kubenswrapper[4632]: E0313 10:05:32.044587 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.107049 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.107132 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.107149 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.107165 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.107176 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:32Z","lastTransitionTime":"2026-03-13T10:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.210073 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.210121 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.210132 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.210152 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.210164 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:32Z","lastTransitionTime":"2026-03-13T10:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.314825 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.314862 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.314873 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.314892 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.314904 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:32Z","lastTransitionTime":"2026-03-13T10:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.418340 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.418382 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.418394 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.418411 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.418422 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:32Z","lastTransitionTime":"2026-03-13T10:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.521685 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.521721 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.521730 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.521744 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.521753 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:32Z","lastTransitionTime":"2026-03-13T10:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.624597 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.624648 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.624657 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.624674 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.624689 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:32Z","lastTransitionTime":"2026-03-13T10:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.727882 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerStarted","Data":"0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9"} Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.730414 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.730521 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.730642 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.734369 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.734431 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.734457 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.734496 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.734511 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:32Z","lastTransitionTime":"2026-03-13T10:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.738721 4632 generic.go:334] "Generic (PLEG): container finished" podID="b054ca08-1d09-4eca-a608-eb5b9323959a" containerID="b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e" exitCode=0 Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.738774 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" event={"ID":"b054ca08-1d09-4eca-a608-eb5b9323959a","Type":"ContainerDied","Data":"b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e"} Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.749869 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.765522 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.766195 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.768458 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.782882 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.801136 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.816357 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.831302 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.838929 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.839228 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.839241 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.839258 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.839270 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:32Z","lastTransitionTime":"2026-03-13T10:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.855509 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.869048 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.911223 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.925411 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.939630 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.941791 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.941834 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.941845 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.941862 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.941872 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:32Z","lastTransitionTime":"2026-03-13T10:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.950541 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.974253 4632 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:32 crc kubenswrapper[4632]: I0313 10:05:32.994048 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.030930 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.045310 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.045403 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.045418 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.045442 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.045458 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:33Z","lastTransitionTime":"2026-03-13T10:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.053605 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-
13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.072604 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.098670 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.121385 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.137032 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.148763 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.148798 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.148807 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.148822 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.148833 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:33Z","lastTransitionTime":"2026-03-13T10:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.151595 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-
lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.170642 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.186711 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.201261 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.221270 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.242668 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.250628 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.250660 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.250669 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.250683 4632 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.250693 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:33Z","lastTransitionTime":"2026-03-13T10:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.258907 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.276545 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.299330 4632 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.324552 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.353611 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.353649 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.353658 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.353679 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.353693 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:33Z","lastTransitionTime":"2026-03-13T10:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.355973 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.372154 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.456581 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.456615 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.456623 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.456639 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.456650 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:33Z","lastTransitionTime":"2026-03-13T10:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.559140 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.559184 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.559193 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.559210 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.559222 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:33Z","lastTransitionTime":"2026-03-13T10:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.662584 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.662658 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.662669 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.662688 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.662701 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:33Z","lastTransitionTime":"2026-03-13T10:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.746804 4632 generic.go:334] "Generic (PLEG): container finished" podID="b054ca08-1d09-4eca-a608-eb5b9323959a" containerID="4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9" exitCode=0 Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.748153 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" event={"ID":"b054ca08-1d09-4eca-a608-eb5b9323959a","Type":"ContainerDied","Data":"4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9"} Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.764333 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.765742 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.765765 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.765840 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.766107 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.766125 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:33Z","lastTransitionTime":"2026-03-13T10:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.806723 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.825421 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.841503 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.862956 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b158df3a1ade30707dc3ca7240a0945ad79d93f
45888229313da4ded182c7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.870779 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.870850 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.870866 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.870887 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.870899 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:33Z","lastTransitionTime":"2026-03-13T10:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.877981 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.892048 4632 
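[editor's note] Every "Failed to update status for pod" record in this span fails identically: the kubelet's status PATCH must pass the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2026-03-13. The error text is produced by Go's crypto/x509 validity check. Below is a minimal, self-contained sketch that reproduces the same error class; the certificate, key, and dates are fabricated to mirror the journal messages and are not the cluster's real webhook material.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Fabricated self-signed cert whose validity window matches the dates in
	// the journal (NotAfter 2025-08-24T17:21:41Z); illustration only.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "network-node-identity.openshift.io"},
		NotBefore:             time.Date(2024, 8, 24, 17, 21, 41, 0, time.UTC),
		NotAfter:              time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC),
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}

	roots := x509.NewCertPool()
	roots.AddCert(cert)

	// crypto/x509 checks the validity window against VerifyOptions.CurrentTime
	// (the wall clock by default). Verifying at the node's clock time from the
	// log reproduces the exact error class seen in these records.
	_, err = cert.Verify(x509.VerifyOptions{
		Roots:       roots,
		CurrentTime: time.Date(2026, 3, 13, 10, 5, 33, 0, time.UTC),
	})
	fmt.Println(err)
	// x509: certificate has expired or is not yet valid:
	// current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z
}
```

Because the kubelet retries status updates indefinitely, the same x509 failure repeats below for every pod on the node, with only the pod name and timestamp changing.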
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.906151 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.920385 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.938815 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.960657 4632 status_manager.go:875] "Failed to update status for pod" 
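[editor's note] The err= payloads quoted in these records are Kubernetes strategic-merge patches: ordinary JSON that has been string-escaped on its way into the journal, which is why every quote surfaces as \\\". The $setElementOrder/conditions key is strategic-merge-patch metadata telling the API server how to order the merged conditions list. A small sketch of peeling off one escaping layer and inspecting the status keys, using a heavily trimmed version of the network-node-identity-vrzqb patch above (the keys are taken from the log; the values are abbreviated for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// As it appears inside err="...": the patch is a quoted string, so every
	// embedded quote is escaped. strconv.Unquote reverses one escaping layer.
	logged := `"{\"metadata\":{\"uid\":\"ef543e1b-8068-4ea3-b32a-61027b32e95d\"},` +
		`\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"Ready\"}],` +
		`\"conditions\":[{\"type\":\"Ready\",\"status\":\"True\"}]}}"`

	patch, err := strconv.Unquote(logged)
	if err != nil {
		panic(err)
	}

	var doc map[string]any
	if err := json.Unmarshal([]byte(patch), &doc); err != nil {
		panic(err)
	}
	// $setElementOrder/conditions rides alongside the conditions themselves;
	// both show up as plain keys once the JSON is decoded.
	for key := range doc["status"].(map[string]any) {
		fmt.Println("status key:", key)
	}
}
```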
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.973752 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.973810 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.973831 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.973847 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.973858 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:33Z","lastTransitionTime":"2026-03-13T10:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.976507 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:33 crc kubenswrapper[4632]: I0313 10:05:33.992881 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:33Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.009009 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:34Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.026287 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:34Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.040872 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:34Z is after 2025-08-24T17:21:41Z" Mar 13 
10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.044182 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.044249 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.044306 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:34 crc kubenswrapper[4632]: E0313 10:05:34.044322 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.044343 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:34 crc kubenswrapper[4632]: E0313 10:05:34.044494 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:34 crc kubenswrapper[4632]: E0313 10:05:34.044561 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:34 crc kubenswrapper[4632]: E0313 10:05:34.044658 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.077219 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.077283 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.077306 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.077329 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.077343 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:34Z","lastTransitionTime":"2026-03-13T10:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.180700 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.180753 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.180766 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.180789 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.180805 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:34Z","lastTransitionTime":"2026-03-13T10:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.284506 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.284571 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.284588 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.284611 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.284628 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:34Z","lastTransitionTime":"2026-03-13T10:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.388180 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.388263 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.388275 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.388318 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.388331 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:34Z","lastTransitionTime":"2026-03-13T10:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.493462 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.493516 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.493525 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.493540 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.493550 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:34Z","lastTransitionTime":"2026-03-13T10:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.596547 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.596617 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.596629 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.596650 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.596665 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:34Z","lastTransitionTime":"2026-03-13T10:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.699532 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.699590 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.699602 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.699621 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.699634 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:34Z","lastTransitionTime":"2026-03-13T10:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.864432 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" event={"ID":"b054ca08-1d09-4eca-a608-eb5b9323959a","Type":"ContainerStarted","Data":"8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09"} Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.882158 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:34Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.896684 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:34Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.908767 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.908802 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.908810 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.908824 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.908833 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:34Z","lastTransitionTime":"2026-03-13T10:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.913997 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:34Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.934916 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:34Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.949109 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:34Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.966054 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:34Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:34 crc kubenswrapper[4632]: I0313 10:05:34.979080 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:34Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.010722 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.010801 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.010812 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.010825 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.010833 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:35Z","lastTransitionTime":"2026-03-13T10:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.045563 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:35Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.057707 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:35Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.071633 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:35Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.081361 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:35Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.109356 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:35Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.115283 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.115319 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.115330 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.115369 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.115383 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:35Z","lastTransitionTime":"2026-03-13T10:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.123838 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:35Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.138764 4632 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:35Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.153204 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:35Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.165715 4632 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:35Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.218715 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.218760 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.218769 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.218787 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.218800 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:35Z","lastTransitionTime":"2026-03-13T10:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.321737 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.321789 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.321803 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.321823 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.321836 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:35Z","lastTransitionTime":"2026-03-13T10:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.424069 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.424098 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.424107 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.424120 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.424128 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:35Z","lastTransitionTime":"2026-03-13T10:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.526295 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.526324 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.526332 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.526344 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.526352 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:35Z","lastTransitionTime":"2026-03-13T10:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.628201 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.628425 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.628516 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.628676 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.628776 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:35Z","lastTransitionTime":"2026-03-13T10:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.731393 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.731677 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.731772 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.731868 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.732019 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:35Z","lastTransitionTime":"2026-03-13T10:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.960429 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.961204 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.961325 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.961442 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:35 crc kubenswrapper[4632]: I0313 10:05:35.961558 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:35Z","lastTransitionTime":"2026-03-13T10:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.048195 4632 scope.go:117] "RemoveContainer" containerID="9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.049139 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:36 crc kubenswrapper[4632]: E0313 10:05:36.049336 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.049990 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.050080 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:36 crc kubenswrapper[4632]: E0313 10:05:36.051348 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.051460 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:36 crc kubenswrapper[4632]: E0313 10:05:36.051638 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:36 crc kubenswrapper[4632]: E0313 10:05:36.051777 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.068058 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.068590 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.068687 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.068772 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.068843 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:36Z","lastTransitionTime":"2026-03-13T10:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.172679 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.172721 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.172733 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.172752 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.172763 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:36Z","lastTransitionTime":"2026-03-13T10:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.306126 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.306178 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.306197 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.306216 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.306226 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:36Z","lastTransitionTime":"2026-03-13T10:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.409989 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.410030 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.410039 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.410052 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.410069 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:36Z","lastTransitionTime":"2026-03-13T10:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.512561 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.512615 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.512628 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.512648 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.512661 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:36Z","lastTransitionTime":"2026-03-13T10:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.616278 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.616335 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.616353 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.616371 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.616381 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:36Z","lastTransitionTime":"2026-03-13T10:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.719617 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.719959 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.720090 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.720200 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.720304 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:36Z","lastTransitionTime":"2026-03-13T10:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.922395 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.922731 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.922834 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.923022 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.923131 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:36Z","lastTransitionTime":"2026-03-13T10:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.967344 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 13 10:05:36 crc kubenswrapper[4632]: I0313 10:05:36.969717 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94"} Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.026771 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.026851 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.026868 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.026893 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.026920 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:37Z","lastTransitionTime":"2026-03-13T10:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.237195 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.237231 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.237243 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.237261 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.237273 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:37Z","lastTransitionTime":"2026-03-13T10:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.340312 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.340354 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.340366 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.340383 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.340394 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:37Z","lastTransitionTime":"2026-03-13T10:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.444768 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.444815 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.444829 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.444848 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.444862 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:37Z","lastTransitionTime":"2026-03-13T10:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.548398 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.548454 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.548464 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.548482 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.548494 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:37Z","lastTransitionTime":"2026-03-13T10:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.651647 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.651706 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.651719 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.651738 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.652147 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:37Z","lastTransitionTime":"2026-03-13T10:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.755346 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.755424 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.755438 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.755460 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.755478 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:37Z","lastTransitionTime":"2026-03-13T10:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.858373 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.858431 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.858440 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.858458 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.858466 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:37Z","lastTransitionTime":"2026-03-13T10:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.964903 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.964972 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.964983 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.965002 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.965017 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:37Z","lastTransitionTime":"2026-03-13T10:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.976749 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovnkube-controller/0.log" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.980725 4632 generic.go:334] "Generic (PLEG): container finished" podID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerID="0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9" exitCode=1 Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.980824 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerDied","Data":"0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9"} Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.981695 4632 scope.go:117] "RemoveContainer" containerID="0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.985399 4632 generic.go:334] "Generic (PLEG): container finished" podID="b054ca08-1d09-4eca-a608-eb5b9323959a" containerID="8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09" exitCode=0 Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.985447 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" event={"ID":"b054ca08-1d09-4eca-a608-eb5b9323959a","Type":"ContainerDied","Data":"8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09"} Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.985804 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:05:37 crc kubenswrapper[4632]: I0313 10:05:37.999249 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:37Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.013993 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.033309 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.045018 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.045091 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.045118 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:38 crc kubenswrapper[4632]: E0313 10:05:38.045174 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.045016 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:38 crc kubenswrapper[4632]: E0313 10:05:38.045322 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:38 crc kubenswrapper[4632]: E0313 10:05:38.045462 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:38 crc kubenswrapper[4632]: E0313 10:05:38.045547 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.048674 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.067352 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.068927 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.069002 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.069015 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.069033 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.069045 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:38Z","lastTransitionTime":"2026-03-13T10:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.087276 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.104623 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.132997 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.155469 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b158df3a1ade30707dc3ca7240a0945ad79d93f
45888229313da4ded182c7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"message\\\":\\\"s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577081 6495 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577192 6495 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.577561 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.578699 6495 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0313 10:05:37.578717 6495 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0313 10:05:37.578749 6495 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0313 10:05:37.578851 6495 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0313 10:05:37.578884 6495 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0313 10:05:37.578917 6495 factory.go:656] Stopping watch factory\\\\nI0313 10:05:37.578961 6495 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.171519 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.171843 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.172232 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.172247 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.172265 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.172277 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:38Z","lastTransitionTime":"2026-03-13T10:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
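
The NodeNotReady condition being recorded here comes from the container runtime: cri-o reports NetworkReady=false until a CNI network configuration appears in /etc/kubernetes/cni/net.d/, and on this node ovnkube-node, which is responsible for writing that file, has not come up cleanly. As a rough illustration only — this is not the runtime's actual libcni code, just a sketch of the directory check the message implies — detecting the missing configuration could look like:

    // Illustrative sketch only: approximates the kind of check behind
    // "no CNI configuration file in /etc/kubernetes/cni/net.d/".
    // The real logic lives in the runtime's libcni integration.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        const cniConfDir = "/etc/kubernetes/cni/net.d" // directory named in the log
        entries, err := os.ReadDir(cniConfDir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("found CNI config:", e.Name())
                found = true
            }
        }
        if !found {
            // With no config present the runtime reports NetworkReady=false,
            // and the kubelet sets the node Ready condition to False.
            fmt.Println("no CNI configuration file; node stays NotReady")
        }
    }

Until a .conf or .conflist file lands in that directory, the kubelet keeps republishing the same Ready=False condition, which is why the NodeNotReady events repeat throughout this journal.
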
Has your network provider started?"} Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.189059 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.203288 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.215690 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.229192 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.243519 4632 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.276985 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.277032 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.277044 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.277067 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.277081 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:38Z","lastTransitionTime":"2026-03-13T10:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.279640 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.304084 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.324275 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.346252 4632 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.368610 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountP
ath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.383881 4632 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.383932 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.383961 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.383984 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.384003 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:38Z","lastTransitionTime":"2026-03-13T10:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.385787 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.402915 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.436645 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.474040 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.496447 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.496506 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.496515 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.496547 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.496559 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:38Z","lastTransitionTime":"2026-03-13T10:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.515053 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.541458 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.561297 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.583899 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.599359 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.599423 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.599439 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.599481 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.599496 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:38Z","lastTransitionTime":"2026-03-13T10:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.601833 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.622290 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.654096 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"message\\\":\\\"s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577081 6495 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577192 6495 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.577561 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.578699 6495 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0313 10:05:37.578717 6495 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0313 10:05:37.578749 6495 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0313 10:05:37.578851 6495 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0313 10:05:37.578884 6495 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0313 10:05:37.578917 6495 factory.go:656] Stopping watch factory\\\\nI0313 10:05:37.578961 6495 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.667214 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.679455 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.693804 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.702590 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.702646 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.702658 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.702691 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.702701 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:38Z","lastTransitionTime":"2026-03-13T10:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.707992 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.720128 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.731212 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.753219 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"message\\\":\\\"s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577081 6495 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577192 6495 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.577561 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.578699 6495 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0313 10:05:37.578717 6495 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0313 10:05:37.578749 6495 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0313 10:05:37.578851 6495 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0313 10:05:37.578884 6495 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0313 10:05:37.578917 6495 factory.go:656] Stopping watch factory\\\\nI0313 10:05:37.578961 6495 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.766559 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.777534 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.798209 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.805698 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.805773 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.805792 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.805818 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.805836 4632 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:38Z","lastTransitionTime":"2026-03-13T10:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.814679 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.831464 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.848269 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 
10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.866801 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.884156 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.909610 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.909665 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.909687 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.909711 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.909737 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:38Z","lastTransitionTime":"2026-03-13T10:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.912919 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.928030 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:38Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.996139 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovnkube-controller/0.log" Mar 13 10:05:38 crc kubenswrapper[4632]: I0313 10:05:38.999504 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" 
event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerStarted","Data":"934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e"} Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.000105 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.005147 4632 generic.go:334] "Generic (PLEG): container finished" podID="b054ca08-1d09-4eca-a608-eb5b9323959a" containerID="be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41" exitCode=0 Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.005213 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" event={"ID":"b054ca08-1d09-4eca-a608-eb5b9323959a","Type":"ContainerDied","Data":"be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41"} Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.012442 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.012479 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.012489 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.012508 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.012519 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:39Z","lastTransitionTime":"2026-03-13T10:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.022214 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.050070 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.068925 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.086959 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.105064 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.116615 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.116669 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.116681 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.116701 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.116715 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:39Z","lastTransitionTime":"2026-03-13T10:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.128131 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.189925 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.215594 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.221234 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.221275 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.221288 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.221307 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.221322 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:39Z","lastTransitionTime":"2026-03-13T10:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.251556 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"message\\\":\\\"s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577081 6495 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577192 6495 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.577561 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.578699 6495 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0313 10:05:37.578717 6495 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0313 10:05:37.578749 6495 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0313 10:05:37.578851 6495 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0313 10:05:37.578884 6495 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0313 10:05:37.578917 6495 factory.go:656] Stopping watch factory\\\\nI0313 10:05:37.578961 6495 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.268766 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.289138 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.310587 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.324862 4632 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.324918 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.324930 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.324967 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.324982 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:39Z","lastTransitionTime":"2026-03-13T10:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.330773 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.350321 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.373335 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 
10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.394448 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.424670 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a538bc834077a7381421f37a69e4d28792692
b3d6686b4c56e39c6561d79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"message\\\":\\\"s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577081 6495 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577192 6495 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.577561 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.578699 6495 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0313 10:05:37.578717 6495 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0313 10:05:37.578749 6495 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0313 10:05:37.578851 6495 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0313 10:05:37.578884 6495 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0313 10:05:37.578917 6495 factory.go:656] Stopping watch factory\\\\nI0313 10:05:37.578961 6495 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.428056 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.428156 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.428167 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.428187 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.428199 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:39Z","lastTransitionTime":"2026-03-13T10:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.444014 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.462696 4632 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.482370 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.501033 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.513470 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.529822 4632 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.531872 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.532066 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.532168 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.532268 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.532399 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:39Z","lastTransitionTime":"2026-03-13T10:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.554041 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.567546 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.588238 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.605074 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e492
46643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.619850 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.635646 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.635707 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.635715 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.635734 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.635749 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:39Z","lastTransitionTime":"2026-03-13T10:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.636133 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.657712 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.679318 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.696019 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:39Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.739814 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.739864 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.739874 4632 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.739893 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.739904 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:39Z","lastTransitionTime":"2026-03-13T10:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.842711 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.842752 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.842764 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.842785 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.842797 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:39Z","lastTransitionTime":"2026-03-13T10:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.946154 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.946311 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.946327 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.946346 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:39 crc kubenswrapper[4632]: I0313 10:05:39.946360 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:39Z","lastTransitionTime":"2026-03-13T10:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.035957 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" event={"ID":"b054ca08-1d09-4eca-a608-eb5b9323959a","Type":"ContainerStarted","Data":"1443078dbf73aa095c42c9554ca6cfd14a0b7b3b58fdf9f9ca610ea8ae4faaf3"} Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.044074 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:40 crc kubenswrapper[4632]: E0313 10:05:40.044175 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.044451 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:40 crc kubenswrapper[4632]: E0313 10:05:40.044520 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.044579 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:40 crc kubenswrapper[4632]: E0313 10:05:40.044628 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.044678 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:40 crc kubenswrapper[4632]: E0313 10:05:40.044738 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.052313 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.052362 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.052396 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.052419 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.052434 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:40Z","lastTransitionTime":"2026-03-13T10:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.053691 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:40Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.066443 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:40Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.082349 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:40Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.096043 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:40Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.111505 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:40Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.127836 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:40Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.145448 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:40Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.157004 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.157071 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.157088 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.157114 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.157129 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:40Z","lastTransitionTime":"2026-03-13T10:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.161573 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:40Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.176612 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:40Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.198816 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"message\\\":\\\"s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577081 6495 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577192 6495 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.577561 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.578699 6495 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0313 10:05:37.578717 6495 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0313 10:05:37.578749 6495 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0313 10:05:37.578851 6495 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0313 10:05:37.578884 6495 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0313 10:05:37.578917 6495 factory.go:656] Stopping watch factory\\\\nI0313 10:05:37.578961 6495 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:40Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.216371 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:40Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.230722 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.
11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:40Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.244253 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:40Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.260530 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:40Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.260983 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.261016 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.261031 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.261051 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.261065 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:40Z","lastTransitionTime":"2026-03-13T10:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.278349 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1443078dbf73aa095c42c9554ca6cfd14a0b7b3b58fdf9f9ca610ea8ae4faaf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:
05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:40Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.299867 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:40Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.365668 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.365746 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.365764 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.365788 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.365805 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:40Z","lastTransitionTime":"2026-03-13T10:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.469236 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.469307 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.469319 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.469347 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.469361 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:40Z","lastTransitionTime":"2026-03-13T10:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.573053 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.573131 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.573146 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.573172 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.573188 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:40Z","lastTransitionTime":"2026-03-13T10:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.677155 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.677213 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.677226 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.677253 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.677271 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:40Z","lastTransitionTime":"2026-03-13T10:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.780233 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.780296 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.780311 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.780330 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.780343 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:40Z","lastTransitionTime":"2026-03-13T10:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.883007 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.883054 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.883066 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.883086 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.883100 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:40Z","lastTransitionTime":"2026-03-13T10:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.986328 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.986401 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.986412 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.986432 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.986445 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:40Z","lastTransitionTime":"2026-03-13T10:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.988070 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.988142 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.988157 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.988181 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:40 crc kubenswrapper[4632]: I0313 10:05:40.988195 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:40Z","lastTransitionTime":"2026-03-13T10:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:41 crc kubenswrapper[4632]: E0313 10:05:41.003885 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.008418 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.008465 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.008480 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.008505 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.008521 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:41Z","lastTransitionTime":"2026-03-13T10:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:41 crc kubenswrapper[4632]: E0313 10:05:41.023643 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.028358 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.028403 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.028413 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.028435 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.028448 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:41Z","lastTransitionTime":"2026-03-13T10:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.044347 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovnkube-controller/1.log" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.044982 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovnkube-controller/0.log" Mar 13 10:05:41 crc kubenswrapper[4632]: E0313 10:05:41.048178 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.049154 4632 generic.go:334] "Generic (PLEG): container finished" podID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerID="934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e" exitCode=1 Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.049220 4632 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerDied","Data":"934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e"} Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.049332 4632 scope.go:117] "RemoveContainer" containerID="0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.050341 4632 scope.go:117] "RemoveContainer" containerID="934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e" Mar 13 10:05:41 crc kubenswrapper[4632]: E0313 10:05:41.050629 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qb725_openshift-ovn-kubernetes(3b40c6b3-0061-4224-82d5-3ccf67998722)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.054380 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.054433 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.054444 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.054462 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.054474 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:41Z","lastTransitionTime":"2026-03-13T10:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.073067 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: E0313 10:05:41.074799 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e
8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.080358 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.080402 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.080416 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.080439 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.080454 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:41Z","lastTransitionTime":"2026-03-13T10:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.090690 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: E0313 10:05:41.098442 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: E0313 10:05:41.098631 4632 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.100927 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.100989 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.101004 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.101026 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.101037 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:41Z","lastTransitionTime":"2026-03-13T10:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.105282 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.124062 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.138658 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.153177 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.178847 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"message\\\":\\\"s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577081 6495 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577192 6495 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.577561 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.578699 6495 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0313 10:05:37.578717 6495 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0313 10:05:37.578749 6495 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0313 10:05:37.578851 6495 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0313 10:05:37.578884 6495 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0313 10:05:37.578917 6495 factory.go:656] Stopping watch factory\\\\nI0313 10:05:37.578961 6495 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"message\\\":\\\"{c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.681766 6736 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0313 10:05:39.682059 6736 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.682124 6736 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.682142 6736 ovnkube.go:599] Stopped ovnkube\\\\nI0313 10:05:39.682173 6736 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0313 10:05:39.682232 6736 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"
/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.193919 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.204903 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.204989 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.205003 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.205026 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.205039 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:41Z","lastTransitionTime":"2026-03-13T10:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.213511 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.232975 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.253170 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.271850 4632 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 
10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.289064 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.307424 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.308082 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.308122 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.308137 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.308161 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.308174 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:41Z","lastTransitionTime":"2026-03-13T10:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.323595 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.341233 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1443078dbf73aa095c42c9554ca6cfd14a0b7b3b58fdf9f9ca610ea8ae4faaf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed8145
1ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:41Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.413502 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.413571 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.413588 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.413610 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.413629 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:41Z","lastTransitionTime":"2026-03-13T10:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.517510 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.517555 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.517567 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.517585 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.517596 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:41Z","lastTransitionTime":"2026-03-13T10:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.620809 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.620864 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.620878 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.620897 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.620911 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:41Z","lastTransitionTime":"2026-03-13T10:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.724025 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.724070 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.724082 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.724098 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.724108 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:41Z","lastTransitionTime":"2026-03-13T10:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.827981 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.828030 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.828045 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.828065 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.828078 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:41Z","lastTransitionTime":"2026-03-13T10:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.932410 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.932474 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.932488 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.932512 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:41 crc kubenswrapper[4632]: I0313 10:05:41.932525 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:41Z","lastTransitionTime":"2026-03-13T10:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.012609 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.012812 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.012900 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.013053 4632 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.013159 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:06:14.013127415 +0000 UTC m=+148.035657548 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.013466 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:06:14.013453475 +0000 UTC m=+148.035983608 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.013568 4632 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.013618 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:06:14.01360781 +0000 UTC m=+148.036137943 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.036289 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.036367 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.036381 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.036400 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.036416 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:42Z","lastTransitionTime":"2026-03-13T10:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.043795 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.043870 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.043929 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz"
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.044031 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.043795 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.044198 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad"
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.044332 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.044457 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.058705 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovnkube-controller/1.log" Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.113707 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.113788 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.113818 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs\") pod \"network-metrics-daemon-z2vlz\" (UID: \"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\") " pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.114029 4632 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.114114 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs podName:ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad nodeName:}" failed. No retries permitted until 2026-03-13 10:06:14.114088429 +0000 UTC m=+148.136618562 (durationBeforeRetry 32s). 
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.114400 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.114422 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.114437 4632 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.114480 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-13 10:06:14.11446572 +0000 UTC m=+148.136995853 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.114542 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.114555 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.114566 4632 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 10:05:42 crc kubenswrapper[4632]: E0313 10:05:42.114595 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-13 10:06:14.114584094 +0000 UTC m=+148.137114227 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.139418 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.139871 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.139986 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.140080 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.140144 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:42Z","lastTransitionTime":"2026-03-13T10:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.243830 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.243894 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.244208 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.244274 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.244299 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:42Z","lastTransitionTime":"2026-03-13T10:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.349366 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.349698 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.349778 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.349869 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.349984 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:42Z","lastTransitionTime":"2026-03-13T10:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.452908 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.452990 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.453004 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.453022 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.453035 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:42Z","lastTransitionTime":"2026-03-13T10:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.556544 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.556602 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.556640 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.556662 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.556676 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:42Z","lastTransitionTime":"2026-03-13T10:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.660189 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.660250 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.660263 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.660283 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.660298 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:42Z","lastTransitionTime":"2026-03-13T10:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.763691 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.763760 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.763769 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.763785 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.763795 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:42Z","lastTransitionTime":"2026-03-13T10:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.866210 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.866270 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.866283 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.866348 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.866405 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:42Z","lastTransitionTime":"2026-03-13T10:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.969533 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.969611 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.969628 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.969646 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:42 crc kubenswrapper[4632]: I0313 10:05:42.969706 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:42Z","lastTransitionTime":"2026-03-13T10:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.071637 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.071700 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.071712 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.071731 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.071745 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:43Z","lastTransitionTime":"2026-03-13T10:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.174571 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.174627 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.174641 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.174660 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.174678 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:43Z","lastTransitionTime":"2026-03-13T10:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.278108 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.278171 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.278186 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.278212 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.278229 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:43Z","lastTransitionTime":"2026-03-13T10:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.381507 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.381573 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.381585 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.381605 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.381615 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:43Z","lastTransitionTime":"2026-03-13T10:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.484400 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.484446 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.484456 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.484469 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.484478 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:43Z","lastTransitionTime":"2026-03-13T10:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.587488 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.587531 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.587540 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.587552 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.587561 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:43Z","lastTransitionTime":"2026-03-13T10:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.690602 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.690678 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.690694 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.690714 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.690726 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:43Z","lastTransitionTime":"2026-03-13T10:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.793596 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.793662 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.793683 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.793708 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.793726 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:43Z","lastTransitionTime":"2026-03-13T10:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.897041 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.897086 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.897095 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.897111 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.897125 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:43Z","lastTransitionTime":"2026-03-13T10:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.999830 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.999902 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.999911 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:43 crc kubenswrapper[4632]: I0313 10:05:43.999925 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:43.999963 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:43Z","lastTransitionTime":"2026-03-13T10:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.043747 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.043811 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.043811 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 13 10:05:44 crc kubenswrapper[4632]: E0313 10:05:44.043916 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:44 crc kubenswrapper[4632]: E0313 10:05:44.044046 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.044107 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:44 crc kubenswrapper[4632]: E0313 10:05:44.044178 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:44 crc kubenswrapper[4632]: E0313 10:05:44.044245 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.103469 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.104062 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.104146 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.104226 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.104301 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:44Z","lastTransitionTime":"2026-03-13T10:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.207285 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.207354 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.207368 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.207389 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.207406 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:44Z","lastTransitionTime":"2026-03-13T10:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.311122 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.311197 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.311219 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.311245 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.311260 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:44Z","lastTransitionTime":"2026-03-13T10:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.414126 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.414183 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.414195 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.414214 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.414226 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:44Z","lastTransitionTime":"2026-03-13T10:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.517232 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.517274 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.517283 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.517297 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.517307 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:44Z","lastTransitionTime":"2026-03-13T10:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.619670 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.619777 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.619786 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.619800 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.619808 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:44Z","lastTransitionTime":"2026-03-13T10:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.721706 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.721812 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.721831 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.723191 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.723250 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:44Z","lastTransitionTime":"2026-03-13T10:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.826359 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.826406 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.826419 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.826435 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.826449 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:44Z","lastTransitionTime":"2026-03-13T10:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.929322 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.929375 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.929387 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.929402 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:44 crc kubenswrapper[4632]: I0313 10:05:44.929412 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:44Z","lastTransitionTime":"2026-03-13T10:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.031695 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.031732 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.031742 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.031756 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.032012 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:45Z","lastTransitionTime":"2026-03-13T10:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.134699 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.134747 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.134755 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.134774 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.134784 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:45Z","lastTransitionTime":"2026-03-13T10:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.237678 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.237730 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.237738 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.237753 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.237763 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:45Z","lastTransitionTime":"2026-03-13T10:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.340606 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.340652 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.340664 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.340682 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.340694 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:45Z","lastTransitionTime":"2026-03-13T10:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.443287 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.443356 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.443372 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.443395 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.443414 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:45Z","lastTransitionTime":"2026-03-13T10:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.546435 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.546485 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.546498 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.546517 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.546531 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:45Z","lastTransitionTime":"2026-03-13T10:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.648810 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.648870 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.648885 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.648904 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.648921 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:45Z","lastTransitionTime":"2026-03-13T10:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.750961 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.751010 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.751022 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.751039 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.751050 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:45Z","lastTransitionTime":"2026-03-13T10:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.854450 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.854491 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.854499 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.854518 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.854527 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:45Z","lastTransitionTime":"2026-03-13T10:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.958014 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.958084 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.958109 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.958140 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:45 crc kubenswrapper[4632]: I0313 10:05:45.958166 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:45Z","lastTransitionTime":"2026-03-13T10:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.044208 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.044216 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.044380 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:46 crc kubenswrapper[4632]: E0313 10:05:46.044406 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.044430 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:46 crc kubenswrapper[4632]: E0313 10:05:46.044517 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:46 crc kubenswrapper[4632]: E0313 10:05:46.044670 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:46 crc kubenswrapper[4632]: E0313 10:05:46.044740 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.060679 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.060724 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.060756 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.060772 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.060782 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:46Z","lastTransitionTime":"2026-03-13T10:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.093597 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.105281 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:46Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.119829 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:46Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.135729 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:46Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.153491 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:46Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.164439 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.164486 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.164499 4632 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.164518 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.164527 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:46Z","lastTransitionTime":"2026-03-13T10:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.177273 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a538bc834077a7381421f37a69e4d28792692
b3d6686b4c56e39c6561d79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"message\\\":\\\"s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577081 6495 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577192 6495 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.577561 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.578699 6495 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0313 10:05:37.578717 6495 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0313 10:05:37.578749 6495 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0313 10:05:37.578851 6495 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0313 10:05:37.578884 6495 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0313 10:05:37.578917 6495 factory.go:656] Stopping watch factory\\\\nI0313 10:05:37.578961 6495 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"message\\\":\\\"{c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.681766 6736 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0313 10:05:39.682059 6736 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.682124 6736 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 
10:05:39.682142 6736 ovnkube.go:599] Stopped ovnkube\\\\nI0313 10:05:39.682173 6736 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0313 10:05:39.682232 6736 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:46Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.189201 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:46Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.210474 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:46Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.224322 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:46Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.238513 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:46Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.251313 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:46Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.264840 4632 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:46Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.267837 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.267908 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.267924 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.268308 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.268350 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:46Z","lastTransitionTime":"2026-03-13T10:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.279729 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:46Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.294027 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:46Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.308430 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:46Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.322529 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1443078dbf73aa095c42c9554ca6cfd14a0b7b3b58fdf9f9ca610ea8ae4faaf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T10:05:46Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.334557 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:46Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.370715 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.370750 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.370760 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.370776 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.370787 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:46Z","lastTransitionTime":"2026-03-13T10:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.473376 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.473480 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.473496 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.473520 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.473535 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:46Z","lastTransitionTime":"2026-03-13T10:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.576247 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.576297 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.576306 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.576319 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.576329 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:46Z","lastTransitionTime":"2026-03-13T10:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.679386 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.679450 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.679471 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.679495 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.679513 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:46Z","lastTransitionTime":"2026-03-13T10:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.782487 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.782524 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.782533 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.782549 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.782560 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:46Z","lastTransitionTime":"2026-03-13T10:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.886153 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.886213 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.886226 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.886246 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.886260 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:46Z","lastTransitionTime":"2026-03-13T10:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.990172 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.990245 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.990259 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.990281 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:46 crc kubenswrapper[4632]: I0313 10:05:46.990295 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:46Z","lastTransitionTime":"2026-03-13T10:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.062437 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.093778 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.093827 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.093841 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.093866 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.093879 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:47Z","lastTransitionTime":"2026-03-13T10:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.196968 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.197018 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.197029 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.197047 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.197060 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:47Z","lastTransitionTime":"2026-03-13T10:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.300432 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.300493 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.300504 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.300523 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.300536 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:47Z","lastTransitionTime":"2026-03-13T10:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.404749 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.404802 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.404817 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.404840 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.404903 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:47Z","lastTransitionTime":"2026-03-13T10:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.508308 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.508347 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.508357 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.508373 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.508384 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:47Z","lastTransitionTime":"2026-03-13T10:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.612227 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.612691 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.612777 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.612911 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.613026 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:47Z","lastTransitionTime":"2026-03-13T10:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.715616 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.716020 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.716104 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.716183 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.716301 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:47Z","lastTransitionTime":"2026-03-13T10:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.820118 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.820260 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.820275 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.820300 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.820317 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:47Z","lastTransitionTime":"2026-03-13T10:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.924090 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.924151 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.924163 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.924181 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:47 crc kubenswrapper[4632]: I0313 10:05:47.924193 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:47Z","lastTransitionTime":"2026-03-13T10:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:48 crc kubenswrapper[4632]: E0313 10:05:48.025525 4632 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.044174 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.044266 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:48 crc kubenswrapper[4632]: E0313 10:05:48.044341 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.044441 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:48 crc kubenswrapper[4632]: E0313 10:05:48.044449 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.044490 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:48 crc kubenswrapper[4632]: E0313 10:05:48.044770 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:48 crc kubenswrapper[4632]: E0313 10:05:48.044647 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.064064 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.081932 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.094806 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.109121 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.134166 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b158df3a1ade30707dc3ca7240a0945ad79d93f45888229313da4ded182c7d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"message\\\":\\\"s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577081 6495 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0313 10:05:37.577192 6495 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.577561 6495 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0313 10:05:37.578699 6495 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0313 10:05:37.578717 6495 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0313 10:05:37.578749 6495 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0313 10:05:37.578783 6495 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0313 10:05:37.578851 6495 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0313 10:05:37.578884 6495 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0313 10:05:37.578917 6495 factory.go:656] Stopping watch factory\\\\nI0313 10:05:37.578961 6495 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"message\\\":\\\"{c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.681766 6736 
default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0313 10:05:39.682059 6736 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.682124 6736 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.682142 6736 ovnkube.go:599] Stopped ovnkube\\\\nI0313 10:05:39.682173 6736 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0313 10:05:39.682232 6736 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e
5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.149172 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:48 crc kubenswrapper[4632]: E0313 10:05:48.156347 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.166002 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.179730 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.197082 4632 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caab849a-f2dd-453b-85cc-768f57800789\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://951a71f1358e164fefc6fbe9404cf1d0e387d58b0bc1d060eeca64a85e0fc08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e800b40d3f99e6d38fa7e28f06be6260e6ebe4bb5e7c73de6734d0617092ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0313 10:03:50.495764 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0313 10:03:50.504436 1 observer_polling.go:159] Starting file observer\\\\nI0313 10:03:50.582400 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 10:03:50.592133 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0313 10:04:18.170496 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0313 10:04:18.170641 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8207ad7aa524df5af853ad8235c24a6addbc04e168248391629f00124901672d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc37f87081e3682692ee20f12c80aa65fbcb8604b381f313b8073f8019b96dbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2666e43569f4f3fb7d9deec4b70dc873d86a3a7c4ac2fac7eea45198e35ecf3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.211666 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/ku
belet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.226294 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1443078dbf73aa095c42c9554ca6cfd14a0b7b3b58fdf9f9ca610ea8ae4faaf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca
7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.240925 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b15
4edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.252991 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.265766 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.277305 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.290179 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:48 crc kubenswrapper[4632]: I0313 10:05:48.299647 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:48Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:50 crc kubenswrapper[4632]: I0313 10:05:50.044277 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:50 crc kubenswrapper[4632]: I0313 10:05:50.044362 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:50 crc kubenswrapper[4632]: E0313 10:05:50.044819 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:50 crc kubenswrapper[4632]: I0313 10:05:50.044421 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:50 crc kubenswrapper[4632]: E0313 10:05:50.044932 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:50 crc kubenswrapper[4632]: I0313 10:05:50.044370 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:50 crc kubenswrapper[4632]: E0313 10:05:50.045086 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:50 crc kubenswrapper[4632]: E0313 10:05:50.045153 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.274472 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.274535 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.274555 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.274577 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.274593 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:51Z","lastTransitionTime":"2026-03-13T10:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:05:51 crc kubenswrapper[4632]: E0313 10:05:51.288079 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:51Z is after 
2025-08-24T17:21:41Z" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.292430 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.292465 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.292476 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.292492 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.292502 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:51Z","lastTransitionTime":"2026-03-13T10:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:51 crc kubenswrapper[4632]: E0313 10:05:51.303563 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:51Z is after 
2025-08-24T17:21:41Z" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.308025 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.308072 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.308082 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.308099 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.308108 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:51Z","lastTransitionTime":"2026-03-13T10:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:51 crc kubenswrapper[4632]: E0313 10:05:51.321830 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:51Z is after 
2025-08-24T17:21:41Z" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.325311 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.325359 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.325369 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.325386 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.325397 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:51Z","lastTransitionTime":"2026-03-13T10:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:51 crc kubenswrapper[4632]: E0313 10:05:51.338048 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:51Z is after 
2025-08-24T17:21:41Z" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.341869 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.341916 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.341927 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.341972 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:05:51 crc kubenswrapper[4632]: I0313 10:05:51.341985 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:05:51Z","lastTransitionTime":"2026-03-13T10:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:05:51 crc kubenswrapper[4632]: E0313 10:05:51.357083 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:51Z is after 
2025-08-24T17:21:41Z" Mar 13 10:05:51 crc kubenswrapper[4632]: E0313 10:05:51.357248 4632 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 10:05:52 crc kubenswrapper[4632]: I0313 10:05:52.043996 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:52 crc kubenswrapper[4632]: E0313 10:05:52.044157 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:52 crc kubenswrapper[4632]: I0313 10:05:52.044209 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:52 crc kubenswrapper[4632]: I0313 10:05:52.044253 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:52 crc kubenswrapper[4632]: E0313 10:05:52.044315 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:52 crc kubenswrapper[4632]: I0313 10:05:52.044351 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:52 crc kubenswrapper[4632]: E0313 10:05:52.044395 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:52 crc kubenswrapper[4632]: E0313 10:05:52.044434 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:53 crc kubenswrapper[4632]: I0313 10:05:53.054692 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 13 10:05:53 crc kubenswrapper[4632]: E0313 10:05:53.157697 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:05:54 crc kubenswrapper[4632]: I0313 10:05:54.043279 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:54 crc kubenswrapper[4632]: I0313 10:05:54.043342 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:54 crc kubenswrapper[4632]: E0313 10:05:54.043460 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:54 crc kubenswrapper[4632]: I0313 10:05:54.043480 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:54 crc kubenswrapper[4632]: I0313 10:05:54.043289 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:54 crc kubenswrapper[4632]: E0313 10:05:54.043577 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:54 crc kubenswrapper[4632]: E0313 10:05:54.043675 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:54 crc kubenswrapper[4632]: E0313 10:05:54.043721 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.044171 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:56 crc kubenswrapper[4632]: E0313 10:05:56.044349 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.044345 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:56 crc kubenswrapper[4632]: E0313 10:05:56.044427 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.044516 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.044524 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:56 crc kubenswrapper[4632]: E0313 10:05:56.044806 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:56 crc kubenswrapper[4632]: E0313 10:05:56.045468 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.045530 4632 scope.go:117] "RemoveContainer" containerID="934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.065877 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.086020 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.101872 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.127512 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.140896 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3570521a-40ff-48d4-a6c2-ef53f64eca38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba32b01674f111328387620c028407e11c9d44b3154c3dce8f415f79e1db54c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd8f71a5e4bfa40758e7d51545f1b2eff43f11071060201770a574f89d391bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc4f274e9e301fdd8acbd720c998a2aa1c5e00df4fc254bcacab4acb539b8ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.153910 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.165756 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.185468 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"message\\\":\\\"{c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.681766 6736 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0313 10:05:39.682059 6736 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.682124 6736 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.682142 6736 ovnkube.go:599] Stopped ovnkube\\\\nI0313 10:05:39.682173 6736 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0313 10:05:39.682232 6736 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qb725_openshift-ovn-kubernetes(3b40c6b3-0061-4224-82d5-3ccf67998722)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.204374 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf
2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.220274 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.235195 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.253299 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caab849a-f2dd-453b-85cc-768f57800789\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://951a71f1358e164fefc6fbe9404cf1d0e387d58b0bc1d060eeca64a85e0fc08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e800b40d3f99e6d38fa7e28f06be6260e6ebe4bb5e7c73de6734d0617092ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0313 10:03:50.495764 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0313 10:03:50.504436 1 observer_polling.go:159] Starting file observer\\\\nI0313 10:03:50.582400 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 10:03:50.592133 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0313 10:04:18.170496 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0313 10:04:18.170641 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8207ad7aa524df5af853ad8235c24a6addbc04e168248391629f00124901672d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc37f87081e3682692ee20f12c80aa65fbcb8604b381f313b8073f8019b96dbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2666e43569f4f3fb7d9deec4b70dc873d86a3a7c4ac2fac7eea45198e35ecf3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.277007 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.296288 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.319377 4632 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.356711 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.376230 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:56 crc kubenswrapper[4632]: I0313 10:05:56.395273 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1443078dbf73aa095c42c9554ca6cfd14a0b7b3b58fdf9f9ca610ea8ae4faaf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T10:05:56Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.122790 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovnkube-controller/2.log" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.123485 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovnkube-controller/1.log" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.125978 4632 generic.go:334] "Generic (PLEG): container finished" podID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerID="85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe" exitCode=1 Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.126014 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerDied","Data":"85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe"} Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.126047 4632 scope.go:117] "RemoveContainer" containerID="934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.126724 4632 scope.go:117] "RemoveContainer" containerID="85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe" Mar 13 10:05:57 crc kubenswrapper[4632]: E0313 10:05:57.126881 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qb725_openshift-ovn-kubernetes(3b40c6b3-0061-4224-82d5-3ccf67998722)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.142529 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1443078dbf73aa095c42c9554ca6cfd14a0b7b3b58fdf9f9ca610ea8ae4faaf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.156176 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 
10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.171090 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.186576 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.199594 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3570521a-40ff-48d4-a6c2-ef53f64eca38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba32b01674f111328387620c028407e11c9d44b3154c3dce8f415f79e1db54c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd8f71a5e4bfa40758e7d51545f1b2eff43f11071060201770a574f89d391bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc4f274e9e301fdd8acbd720c998a2aa1c5e00df4fc254bcacab4acb539b8ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.212036 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.225009 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.237028 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.252734 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.267446 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.279690 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.291671 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.317489 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"message\\\":\\\"{c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.681766 6736 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0313 10:05:39.682059 6736 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.682124 6736 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.682142 6736 ovnkube.go:599] Stopped ovnkube\\\\nI0313 10:05:39.682173 6736 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0313 10:05:39.682232 6736 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:57Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0313 10:05:57.066140 6990 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0313 10:05:57.066148 6990 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066155 6990 services_controller.go:453] Built service openshift-marketplace/marketplace-operator-metrics template LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066161 6990 services_controller.go:454] Service openshift-marketplace/marketplace-operator-metrics for 
network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0313 10:05:57.066185 6990 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0313 10:05:57.066204 6990 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 1.29706ms\\\\nF0313 10:05:57.066212 6990 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\
"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.329142 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.343194 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.354854 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.367442 4632 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caab849a-f2dd-453b-85cc-768f57800789\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://951a71f1358e164fefc6fbe9404cf1d0e387d58b0bc1d060eeca64a85e0fc08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e800b40d3f99e6d38fa7e28f06be6260e6ebe4bb5e7c73de6734d0617092ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0313 10:03:50.495764 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0313 10:03:50.504436 1 observer_polling.go:159] Starting file observer\\\\nI0313 10:03:50.582400 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 10:03:50.592133 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0313 10:04:18.170496 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0313 10:04:18.170641 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8207ad7aa524df5af853ad8235c24a6addbc04e168248391629f00124901672d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc37f87081e3682692ee20f12c80aa65fbcb8604b381f313b8073f8019b96dbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2666e43569f4f3fb7d9deec4b70dc873d86a3a7c4ac2fac7eea45198e35ecf3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:57 crc kubenswrapper[4632]: I0313 10:05:57.383421 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:57Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.043580 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.043641 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.043695 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.043768 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:05:58 crc kubenswrapper[4632]: E0313 10:05:58.043764 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:05:58 crc kubenswrapper[4632]: E0313 10:05:58.043856 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:05:58 crc kubenswrapper[4632]: E0313 10:05:58.044112 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:05:58 crc kubenswrapper[4632]: E0313 10:05:58.044165 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.060553 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.071570 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.088424 4632 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caab849a-f2dd-453b-85cc-768f57800789\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://951a71f1358e164fefc6fbe9404cf1d0e387d58b0bc1d060eeca64a85e0fc08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e800b40d3f99e6d38fa7e28f06be6260e6ebe4bb5e7c73de6734d0617092ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0313 10:03:50.495764 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0313 10:03:50.504436 1 observer_polling.go:159] Starting file observer\\\\nI0313 10:03:50.582400 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 10:03:50.592133 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0313 10:04:18.170496 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0313 10:04:18.170641 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8207ad7aa524df5af853ad8235c24a6addbc04e168248391629f00124901672d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc37f87081e3682692ee20f12c80aa65fbcb8604b381f313b8073f8019b96dbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2666e43569f4f3fb7d9deec4b70dc873d86a3a7c4ac2fac7eea45198e35ecf3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.103601 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/ku
belet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.119695 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1443078dbf73aa095c42c9554ca6cfd14a0b7b3b58fdf9f9ca610ea8ae4faaf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca
7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.130346 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovnkube-controller/2.log" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.136764 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.149703 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: E0313 10:05:58.158482 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.163268 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.174711 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3570521a-40ff-48d4-a6c2-ef53f64eca38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba32b01674f111328387620c028407e11c9d44b3154c3dce8f415f79e1db54c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd8f71a5e4bfa40758e7d51545f1b2eff43f11071060201770a574f89d391bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc4f274e9e301fdd8acbd720c998a2aa1c5e00df4fc254bcacab4acb539b8ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.197555 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.209484 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.219047 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.231209 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.243928 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.253336 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.263025 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.279442 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"message\\\":\\\"{c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.681766 6736 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0313 10:05:39.682059 6736 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.682124 6736 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.682142 6736 ovnkube.go:599] Stopped ovnkube\\\\nI0313 10:05:39.682173 6736 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0313 10:05:39.682232 6736 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:57Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0313 10:05:57.066140 6990 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0313 10:05:57.066148 6990 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066155 6990 services_controller.go:453] Built service openshift-marketplace/marketplace-operator-metrics template LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066161 6990 services_controller.go:454] Service openshift-marketplace/marketplace-operator-metrics for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0313 10:05:57.066185 6990 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0313 10:05:57.066204 6990 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 1.29706ms\\\\nF0313 10:05:57.066212 6990 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:05:58 crc kubenswrapper[4632]: I0313 10:05:58.289231 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:05:58Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:00 crc kubenswrapper[4632]: I0313 10:06:00.043661 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:00 crc kubenswrapper[4632]: I0313 10:06:00.043782 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:00 crc kubenswrapper[4632]: I0313 10:06:00.043692 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:00 crc kubenswrapper[4632]: E0313 10:06:00.043870 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:00 crc kubenswrapper[4632]: E0313 10:06:00.043976 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:00 crc kubenswrapper[4632]: E0313 10:06:00.044069 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:00 crc kubenswrapper[4632]: I0313 10:06:00.044305 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:00 crc kubenswrapper[4632]: E0313 10:06:00.044399 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.373268 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.373313 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.373324 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.373340 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.373350 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:01Z","lastTransitionTime":"2026-03-13T10:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:01 crc kubenswrapper[4632]: E0313 10:06:01.387202 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:01Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.390925 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.390966 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.390977 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.390990 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.390999 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:01Z","lastTransitionTime":"2026-03-13T10:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:01 crc kubenswrapper[4632]: E0313 10:06:01.404552 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:01Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.408502 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.408528 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.408536 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.408548 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.408556 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:01Z","lastTransitionTime":"2026-03-13T10:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:01 crc kubenswrapper[4632]: E0313 10:06:01.422692 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:01Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.427602 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.427630 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.427639 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.427651 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.427660 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:01Z","lastTransitionTime":"2026-03-13T10:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:01 crc kubenswrapper[4632]: E0313 10:06:01.439692 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:01Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.444427 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.444469 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.444481 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.444498 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:01 crc kubenswrapper[4632]: I0313 10:06:01.444512 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:01Z","lastTransitionTime":"2026-03-13T10:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:01 crc kubenswrapper[4632]: E0313 10:06:01.463523 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:01Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:01 crc kubenswrapper[4632]: E0313 10:06:01.463744 4632 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 10:06:02 crc kubenswrapper[4632]: I0313 10:06:02.043461 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:02 crc kubenswrapper[4632]: I0313 10:06:02.043461 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:02 crc kubenswrapper[4632]: I0313 10:06:02.043567 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:02 crc kubenswrapper[4632]: E0313 10:06:02.043636 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:02 crc kubenswrapper[4632]: I0313 10:06:02.043714 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:02 crc kubenswrapper[4632]: E0313 10:06:02.043829 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:02 crc kubenswrapper[4632]: E0313 10:06:02.043896 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:02 crc kubenswrapper[4632]: E0313 10:06:02.044024 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:03 crc kubenswrapper[4632]: E0313 10:06:03.159982 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:06:04 crc kubenswrapper[4632]: I0313 10:06:04.043130 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:04 crc kubenswrapper[4632]: I0313 10:06:04.043252 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:04 crc kubenswrapper[4632]: E0313 10:06:04.043603 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:04 crc kubenswrapper[4632]: I0313 10:06:04.043380 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:04 crc kubenswrapper[4632]: E0313 10:06:04.043930 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:04 crc kubenswrapper[4632]: I0313 10:06:04.043314 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:04 crc kubenswrapper[4632]: E0313 10:06:04.044254 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:04 crc kubenswrapper[4632]: E0313 10:06:04.043790 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:06 crc kubenswrapper[4632]: I0313 10:06:06.043851 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:06 crc kubenswrapper[4632]: I0313 10:06:06.043932 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:06 crc kubenswrapper[4632]: I0313 10:06:06.043893 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:06 crc kubenswrapper[4632]: E0313 10:06:06.044060 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:06 crc kubenswrapper[4632]: I0313 10:06:06.043856 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:06 crc kubenswrapper[4632]: E0313 10:06:06.044181 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:06 crc kubenswrapper[4632]: E0313 10:06:06.044261 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:06 crc kubenswrapper[4632]: E0313 10:06:06.044320 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.043639 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.043681 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.043733 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.043827 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:08 crc kubenswrapper[4632]: E0313 10:06:08.043928 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:08 crc kubenswrapper[4632]: E0313 10:06:08.044062 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:08 crc kubenswrapper[4632]: E0313 10:06:08.044160 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:08 crc kubenswrapper[4632]: E0313 10:06:08.044228 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.063904 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\
\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.076059 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.091362 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.104879 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3570521a-40ff-48d4-a6c2-ef53f64eca38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba32b01674f111328387620c028407e11c9d44b3154c3dce8f415f79e1db54c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd8f71a5e4bfa40758e7d51545f1b2eff43f11071060201770a574f89d391bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc4f274e9e301fdd8acbd720c998a2aa1c5e00df4fc254bcacab4acb539b8ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.119359 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.131004 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.151022 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://934a538bc834077a7381421f37a69e4d28792692b3d6686b4c56e39c6561d79e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"message\\\":\\\"{c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.681766 6736 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0313 10:05:39.682059 6736 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.682124 6736 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0313 10:05:39.682142 6736 ovnkube.go:599] Stopped ovnkube\\\\nI0313 10:05:39.682173 6736 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0313 10:05:39.682232 6736 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:57Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0313 10:05:57.066140 6990 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0313 10:05:57.066148 6990 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066155 6990 services_controller.go:453] Built service openshift-marketplace/marketplace-operator-metrics template LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066161 6990 services_controller.go:454] Service openshift-marketplace/marketplace-operator-metrics for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0313 10:05:57.066185 6990 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0313 10:05:57.066204 6990 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 1.29706ms\\\\nF0313 10:05:57.066212 6990 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: E0313 10:06:08.161181 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.168346 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.193575 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.210264 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.226477 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.246288 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caab849a-f2dd-453b-85cc-768f57800789\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://951a71f1358e164fefc6fbe9404cf1d0e387d58b0bc1d060eeca64a85e0fc08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e800b40d3f99e6d38fa7e28f06be6260e6ebe4bb5e7c73de6734d0617092ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0313 10:03:50.495764 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0313 10:03:50.504436 1 observer_polling.go:159] Starting file observer\\\\nI0313 10:03:50.582400 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 10:03:50.592133 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0313 10:04:18.170496 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0313 10:04:18.170641 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8207ad7aa524df5af853ad8235c24a6addbc04e168248391629f00124901672d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc37f87081e3682692ee20f12c80aa65fbcb8604b381f313b8073f8019b96dbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2666e43569f4f3fb7d9deec4b70dc873d86a3a7c4ac2fac7eea45198e35ecf3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.262417 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.275898 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.290430 4632 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.303849 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.318334 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1443078dbf73aa095c42c9554ca6cfd14a0b7b3b58fdf9f9ca610ea8ae4faaf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:08 crc kubenswrapper[4632]: I0313 10:06:08.330839 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:08Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.043929 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:10 crc kubenswrapper[4632]: E0313 10:06:10.044145 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.043980 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:10 crc kubenswrapper[4632]: E0313 10:06:10.044236 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.043981 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.043835 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:10 crc kubenswrapper[4632]: E0313 10:06:10.045206 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:10 crc kubenswrapper[4632]: E0313 10:06:10.045303 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.045748 4632 scope.go:117] "RemoveContainer" containerID="85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe" Mar 13 10:06:10 crc kubenswrapper[4632]: E0313 10:06:10.046103 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qb725_openshift-ovn-kubernetes(3b40c6b3-0061-4224-82d5-3ccf67998722)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.063833 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.
d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.084784 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1443078dbf73aa095c42c9554ca6cfd14a0b7b3b58fdf9f9ca610ea8ae4faaf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05
:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.100632 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.117403 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc 
kubenswrapper[4632]: I0313 10:06:10.136285 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.152288 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3570521a-40ff-48d4-a6c2-ef53f64eca38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba32b01674f111328387620c028407e11c9d44b3154c3dce8f415f79e1db54c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd8f71a5e4bfa40758e7d51545f1b2eff43f11071060201770a574f89d391bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc4f274e9e301fdd8acbd720c998a2aa1c5e00df4fc254bcacab4acb539b8ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.168346 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.182288 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.194973 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.207806 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.221545 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.232644 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.242462 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.259918 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:57Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0313 10:05:57.066140 6990 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0313 10:05:57.066148 6990 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066155 6990 services_controller.go:453] Built service openshift-marketplace/marketplace-operator-metrics template LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066161 6990 services_controller.go:454] Service openshift-marketplace/marketplace-operator-metrics for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0313 10:05:57.066185 6990 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0313 10:05:57.066204 6990 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 1.29706ms\\\\nF0313 10:05:57.066212 6990 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qb725_openshift-ovn-kubernetes(3b40c6b3-0061-4224-82d5-3ccf67998722)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.272692 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf
2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.289397 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.301245 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.316984 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caab849a-f2dd-453b-85cc-768f57800789\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://951a71f1358e164fefc6fbe9404cf1d0e387d58b0bc1d060eeca64a85e0fc08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e800b40d3f99e6d38fa7e28f06be6260e6ebe4bb5e7c73de6734d0617092ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0313 10:03:50.495764 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0313 10:03:50.504436 1 observer_polling.go:159] Starting file observer\\\\nI0313 10:03:50.582400 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 10:03:50.592133 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0313 10:04:18.170496 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0313 10:04:18.170641 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8207ad7aa524df5af853ad8235c24a6addbc04e168248391629f00124901672d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc37f87081e3682692ee20f12c80aa65fbcb8604b381f313b8073f8019b96dbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2666e43569f4f3fb7d9deec4b70dc873d86a3a7c4ac2fac7eea45198e35ecf3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:10Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.490596 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:06:10 crc kubenswrapper[4632]: I0313 10:06:10.498047 4632 scope.go:117] "RemoveContainer" containerID="85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe" Mar 13 10:06:10 crc kubenswrapper[4632]: E0313 10:06:10.498300 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qb725_openshift-ovn-kubernetes(3b40c6b3-0061-4224-82d5-3ccf67998722)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.525918 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.526352 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.526367 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.526386 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.526399 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:11Z","lastTransitionTime":"2026-03-13T10:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 10:06:11 crc kubenswrapper[4632]: E0313 10:06:11.541020 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:11Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.551500 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.551557 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.551869 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.551893 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.551914 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:11Z","lastTransitionTime":"2026-03-13T10:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:11 crc kubenswrapper[4632]: E0313 10:06:11.567353 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:11Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.571281 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.571343 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.571356 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.571371 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.571381 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:11Z","lastTransitionTime":"2026-03-13T10:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:11 crc kubenswrapper[4632]: E0313 10:06:11.582417 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:11Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.586478 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.586557 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.586573 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.586591 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.586621 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:11Z","lastTransitionTime":"2026-03-13T10:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:11 crc kubenswrapper[4632]: E0313 10:06:11.598206 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:11Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.602917 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.603003 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.603016 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.603035 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:11 crc kubenswrapper[4632]: I0313 10:06:11.603049 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:11Z","lastTransitionTime":"2026-03-13T10:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:11 crc kubenswrapper[4632]: E0313 10:06:11.616405 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:11Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:11 crc kubenswrapper[4632]: E0313 10:06:11.616526 4632 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 10:06:12 crc kubenswrapper[4632]: I0313 10:06:12.044306 4632 util.go:30] "No sandbox for pod can be found. 
Mar 13 10:06:12 crc kubenswrapper[4632]: I0313 10:06:12.044306 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 13 10:06:12 crc kubenswrapper[4632]: I0313 10:06:12.044416 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 13 10:06:12 crc kubenswrapper[4632]: I0313 10:06:12.044451 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz"
Mar 13 10:06:12 crc kubenswrapper[4632]: E0313 10:06:12.044529 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 13 10:06:12 crc kubenswrapper[4632]: I0313 10:06:12.044663 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 13 10:06:12 crc kubenswrapper[4632]: E0313 10:06:12.044751 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad"
Mar 13 10:06:12 crc kubenswrapper[4632]: E0313 10:06:12.044890 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 13 10:06:12 crc kubenswrapper[4632]: E0313 10:06:12.045029 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 13 10:06:13 crc kubenswrapper[4632]: E0313 10:06:13.163488 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 13 10:06:14 crc kubenswrapper[4632]: I0313 10:06:14.043772 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:14 crc kubenswrapper[4632]: I0313 10:06:14.043804 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:14 crc kubenswrapper[4632]: I0313 10:06:14.044009 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.044026 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:14 crc kubenswrapper[4632]: I0313 10:06:14.043776 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.044086 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.044304 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:14 crc kubenswrapper[4632]: I0313 10:06:14.050150 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:06:14 crc kubenswrapper[4632]: I0313 10:06:14.050379 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:14 crc kubenswrapper[4632]: I0313 10:06:14.050434 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.050500 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:18.050442728 +0000 UTC m=+212.072972901 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.050524 4632 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.050589 4632 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.050675 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:07:18.050654944 +0000 UTC m=+212.073185107 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.050906 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:07:18.050884781 +0000 UTC m=+212.073415024 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 10:06:14 crc kubenswrapper[4632]: I0313 10:06:14.151926 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:14 crc kubenswrapper[4632]: I0313 10:06:14.152008 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs\") pod \"network-metrics-daemon-z2vlz\" (UID: \"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\") " pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:14 crc kubenswrapper[4632]: I0313 10:06:14.152033 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.152225 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.152246 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.152259 4632 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.152295 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.152348 4632 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.152366 4632 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.152319 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-13 10:07:18.152298344 +0000 UTC m=+212.174828477 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.152476 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-13 10:07:18.152448099 +0000 UTC m=+212.174978232 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.152550 4632 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:06:14 crc kubenswrapper[4632]: E0313 10:06:14.152588 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs podName:ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad nodeName:}" failed. No retries permitted until 2026-03-13 10:07:18.152575682 +0000 UTC m=+212.175106025 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs") pod "network-metrics-daemon-z2vlz" (UID: "ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.044189 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:16 crc kubenswrapper[4632]: E0313 10:06:16.044390 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.044513 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.044599 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:16 crc kubenswrapper[4632]: E0313 10:06:16.044728 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:16 crc kubenswrapper[4632]: E0313 10:06:16.044806 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.044406 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:16 crc kubenswrapper[4632]: E0313 10:06:16.044933 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.195606 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gqf22_4ec8e301-3037-4de0-94d2-32c49709660e/kube-multus/0.log" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.195658 4632 generic.go:334] "Generic (PLEG): container finished" podID="4ec8e301-3037-4de0-94d2-32c49709660e" containerID="0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d" exitCode=1 Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.195696 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gqf22" event={"ID":"4ec8e301-3037-4de0-94d2-32c49709660e","Type":"ContainerDied","Data":"0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d"} Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.199277 4632 scope.go:117] "RemoveContainer" containerID="0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.216533 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.237981 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.253598 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.288071 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:57Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0313 10:05:57.066140 6990 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0313 10:05:57.066148 6990 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066155 6990 services_controller.go:453] Built service openshift-marketplace/marketplace-operator-metrics template LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066161 6990 services_controller.go:454] Service openshift-marketplace/marketplace-operator-metrics for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0313 10:05:57.066185 6990 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0313 10:05:57.066204 6990 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 1.29706ms\\\\nF0313 10:05:57.066212 6990 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qb725_openshift-ovn-kubernetes(3b40c6b3-0061-4224-82d5-3ccf67998722)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.305529 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf
2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.324104 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.339203 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.353712 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caab849a-f2dd-453b-85cc-768f57800789\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://951a71f1358e164fefc6fbe9404cf1d0e387d58b0bc1d060eeca64a85e0fc08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e800b40d3f99e6d38fa7e28f06be6260e6ebe4bb5e7c73de6734d0617092ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0313 10:03:50.495764 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0313 10:03:50.504436 1 observer_polling.go:159] Starting file observer\\\\nI0313 10:03:50.582400 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 10:03:50.592133 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0313 10:04:18.170496 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0313 10:04:18.170641 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8207ad7aa524df5af853ad8235c24a6addbc04e168248391629f00124901672d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc37f87081e3682692ee20f12c80aa65fbcb8604b381f313b8073f8019b96dbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2666e43569f4f3fb7d9deec4b70dc873d86a3a7c4ac2fac7eea45198e35ecf3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.372173 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.389415 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1443078dbf73aa095c42c9554ca6cfd14a0b7b3b58fdf9f9ca610ea8ae4faaf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.406166 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 
10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.419473 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.433571 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:06:15Z\\\",\\\"message\\\":\\\"2026-03-13T10:05:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68d567cc-883d-42b1-94b2-7210d6f53151\\\\n2026-03-13T10:05:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68d567cc-883d-42b1-94b2-7210d6f53151 to /host/opt/cni/bin/\\\\n2026-03-13T10:05:30Z [verbose] multus-daemon started\\\\n2026-03-13T10:05:30Z [verbose] Readiness Indicator file check\\\\n2026-03-13T10:06:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.447536 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3570521a-40ff-48d4-a6c2-ef53f64eca38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba32b01674f111328387620c028407e11c9d44b3154c3dce8f415f79e1db54c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd8f71a5e4bfa40758e7d51545f1b2eff43f11071060201770a574f89d391bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc4f274e9e301fdd8acbd720c998a2aa1c5e00df4fc254bcacab4acb539b8ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.460316 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.474889 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.486334 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:16 crc kubenswrapper[4632]: I0313 10:06:16.499989 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:16Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.200185 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gqf22_4ec8e301-3037-4de0-94d2-32c49709660e/kube-multus/0.log" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.200260 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gqf22" event={"ID":"4ec8e301-3037-4de0-94d2-32c49709660e","Type":"ContainerStarted","Data":"e48bcc5861bda7a15e45c892fa67ba73299d99e896f36f2cb68274a659ec5d34"} Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.216655 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.230038 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.241780 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.252033 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.269507 4632 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:57Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0313 10:05:57.066140 6990 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0313 10:05:57.066148 6990 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066155 6990 services_controller.go:453] Built service openshift-marketplace/marketplace-operator-metrics template LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066161 6990 services_controller.go:454] Service openshift-marketplace/marketplace-operator-metrics for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0313 10:05:57.066185 6990 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0313 10:05:57.066204 6990 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 1.29706ms\\\\nF0313 10:05:57.066212 6990 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qb725_openshift-ovn-kubernetes(3b40c6b3-0061-4224-82d5-3ccf67998722)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.279607 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf
2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.292336 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.303858 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.315524 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caab849a-f2dd-453b-85cc-768f57800789\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://951a71f1358e164fefc6fbe9404cf1d0e387d58b0bc1d060eeca64a85e0fc08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e800b40d3f99e6d38fa7e28f06be6260e6ebe4bb5e7c73de6734d0617092ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0313 10:03:50.495764 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0313 10:03:50.504436 1 observer_polling.go:159] Starting file observer\\\\nI0313 10:03:50.582400 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 10:03:50.592133 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0313 10:04:18.170496 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0313 10:04:18.170641 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8207ad7aa524df5af853ad8235c24a6addbc04e168248391629f00124901672d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc37f87081e3682692ee20f12c80aa65fbcb8604b381f313b8073f8019b96dbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2666e43569f4f3fb7d9deec4b70dc873d86a3a7c4ac2fac7eea45198e35ecf3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.328924 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48bcc5861bda7a15e45c892fa67ba73299d99e896f36f2cb68274a659ec5d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:06:15Z\\\",\\\"message\\\":\\\"2026-03-13T10:05:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68d567cc-883d-42b1-94b2-7210d6f53151\\\\n2026-03-13T10:05:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68d567cc-883d-42b1-94b2-7210d6f53151 to /host/opt/cni/bin/\\\\n2026-03-13T10:05:30Z [verbose] multus-daemon started\\\\n2026-03-13T10:05:30Z [verbose] Readiness Indicator file check\\\\n2026-03-13T10:06:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.343186 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1443078dbf73aa095c42c9554ca6cfd14a0b7b3b58fdf9f9ca610ea8ae4faaf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.355017 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 
10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.365999 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.377840 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.388163 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3570521a-40ff-48d4-a6c2-ef53f64eca38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba32b01674f111328387620c028407e11c9d44b3154c3dce8f415f79e1db54c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd8f71a5e4bfa40758e7d51545f1b2eff43f11071060201770a574f89d391bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc4f274e9e301fdd8acbd720c998a2aa1c5e00df4fc254bcacab4acb539b8ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.400421 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.413492 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:17 crc kubenswrapper[4632]: I0313 10:06:17.422833 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:17Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.044187 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.044263 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.044225 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.044187 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:18 crc kubenswrapper[4632]: E0313 10:06:18.044365 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:18 crc kubenswrapper[4632]: E0313 10:06:18.044428 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:18 crc kubenswrapper[4632]: E0313 10:06:18.044529 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:18 crc kubenswrapper[4632]: E0313 10:06:18.044642 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.059868 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48bcc5861bda7a15e45c892fa67ba73299d99e896f36f2cb68274a659ec5d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:06:15Z\\\",\\\"message\\\":\\\"2026-03-13T10:05:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68d567cc-883d-42b1-94b2-7210d6f53151\\\\n2026-03-13T10:05:29+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_68d567cc-883d-42b1-94b2-7210d6f53151 to /host/opt/cni/bin/\\\\n2026-03-13T10:05:30Z [verbose] multus-daemon started\\\\n2026-03-13T10:05:30Z [verbose] Readiness Indicator file check\\\\n2026-03-13T10:06:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.080003 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1443078dbf73aa095c42c9554ca6cfd14a0b7b3b58fdf9f9ca610ea8ae4faaf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.096021 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z" Mar 13 
10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.109855 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.126038 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.138219 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3570521a-40ff-48d4-a6c2-ef53f64eca38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba32b01674f111328387620c028407e11c9d44b3154c3dce8f415f79e1db54c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd8f71a5e4bfa40758e7d51545f1b2eff43f11071060201770a574f89d391bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc4f274e9e301fdd8acbd720c998a2aa1c5e00df4fc254bcacab4acb539b8ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.150604 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.163772 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:18 crc kubenswrapper[4632]: E0313 10:06:18.164256 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.175653 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.196584 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.209437 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.220215 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.232069 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.249393 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:57Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0313 10:05:57.066140 6990 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0313 10:05:57.066148 6990 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066155 6990 services_controller.go:453] Built service openshift-marketplace/marketplace-operator-metrics template LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066161 6990 services_controller.go:454] Service openshift-marketplace/marketplace-operator-metrics for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0313 10:05:57.066185 6990 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0313 10:05:57.066204 6990 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 1.29706ms\\\\nF0313 10:05:57.066212 6990 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qb725_openshift-ovn-kubernetes(3b40c6b3-0061-4224-82d5-3ccf67998722)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z"
Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.260846 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z"
Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.274638 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.286976 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:18 crc kubenswrapper[4632]: I0313 10:06:18.299892 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caab849a-f2dd-453b-85cc-768f57800789\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://951a71f1358e164fefc6fbe9404cf1d0e387d58b0bc1d060eeca64a85e0fc08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e800b40d3f99e6d38fa7e28f06be6260e6ebe4bb5e7c73de6734d0617092ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0313 10:03:50.495764 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0313 10:03:50.504436 1 observer_polling.go:159] Starting file observer\\\\nI0313 10:03:50.582400 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 10:03:50.592133 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0313 10:04:18.170496 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0313 10:04:18.170641 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8207ad7aa524df5af853ad8235c24a6addbc04e168248391629f00124901672d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc37f87081e3682692ee20f12c80aa65fbcb8604b381f313b8073f8019b96dbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2666e43569f4f3fb7d9deec4b70dc873d86a3a7c4ac2fac7eea45198e35ecf3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:18Z is after 2025-08-24T17:21:41Z"
Mar 13 10:06:20 crc kubenswrapper[4632]: I0313 10:06:20.043829 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 13 10:06:20 crc kubenswrapper[4632]: I0313 10:06:20.043913 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 13 10:06:20 crc kubenswrapper[4632]: I0313 10:06:20.043865 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 13 10:06:20 crc kubenswrapper[4632]: I0313 10:06:20.044020 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz"
Mar 13 10:06:20 crc kubenswrapper[4632]: E0313 10:06:20.044094 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 13 10:06:20 crc kubenswrapper[4632]: E0313 10:06:20.044287 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 13 10:06:20 crc kubenswrapper[4632]: E0313 10:06:20.044385 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 13 10:06:20 crc kubenswrapper[4632]: E0313 10:06:20.044484 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.702598 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.702656 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.702674 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.702698 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.702715 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:21Z","lastTransitionTime":"2026-03-13T10:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:21 crc kubenswrapper[4632]: E0313 10:06:21.724177 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:21Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.730251 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.730312 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.730329 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.730368 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.730385 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:21Z","lastTransitionTime":"2026-03-13T10:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:21 crc kubenswrapper[4632]: E0313 10:06:21.746039 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:21Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.750641 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.750675 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.750683 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.750697 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.750715 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:21Z","lastTransitionTime":"2026-03-13T10:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:21 crc kubenswrapper[4632]: E0313 10:06:21.768146 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:21Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.773442 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.773498 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.773515 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.773538 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.773555 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:21Z","lastTransitionTime":"2026-03-13T10:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:21 crc kubenswrapper[4632]: E0313 10:06:21.788096 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:21Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.792637 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.792681 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.792693 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.792708 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:21 crc kubenswrapper[4632]: I0313 10:06:21.792717 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:21Z","lastTransitionTime":"2026-03-13T10:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:21 crc kubenswrapper[4632]: E0313 10:06:21.804894 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:21Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:21 crc kubenswrapper[4632]: E0313 10:06:21.805045 4632 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 10:06:22 crc kubenswrapper[4632]: I0313 10:06:22.043959 4632 util.go:30] "No sandbox for pod can be found. 
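The four patch attempts above fail identically and exhaust the kubelet's retry budget ("update node status exceeds retry count"): the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-03-13. A minimal Go sketch of how that could be confirmed from the node itself; it is not part of the journal, it assumes the webhook is still listening on the address from the Post URL, and it skips verification precisely because verification is what fails:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the Post URL from the log.
	// InsecureSkipVerify lets us read the certificate even though normal
	// verification would reject it; the point is to inspect, not to trust.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	// The leaf (serving) certificate is first in the peer chain.
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore)
	fmt.Println("notAfter: ", cert.NotAfter)
	fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
}

Against this node the sketch should report a notAfter of 2025-08-24T17:21:41Z and expired true, matching the x509 error text in the retries above.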
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:22 crc kubenswrapper[4632]: I0313 10:06:22.044075 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:22 crc kubenswrapper[4632]: I0313 10:06:22.044128 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:22 crc kubenswrapper[4632]: I0313 10:06:22.044034 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:22 crc kubenswrapper[4632]: E0313 10:06:22.044327 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:22 crc kubenswrapper[4632]: E0313 10:06:22.044493 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:22 crc kubenswrapper[4632]: E0313 10:06:22.044586 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:22 crc kubenswrapper[4632]: E0313 10:06:22.044674 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:23 crc kubenswrapper[4632]: E0313 10:06:23.165701 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:06:24 crc kubenswrapper[4632]: I0313 10:06:24.044232 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:24 crc kubenswrapper[4632]: I0313 10:06:24.044353 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:24 crc kubenswrapper[4632]: I0313 10:06:24.044298 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:24 crc kubenswrapper[4632]: E0313 10:06:24.044567 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:24 crc kubenswrapper[4632]: E0313 10:06:24.044491 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:24 crc kubenswrapper[4632]: E0313 10:06:24.044730 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:24 crc kubenswrapper[4632]: I0313 10:06:24.044345 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:24 crc kubenswrapper[4632]: E0313 10:06:24.045264 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.043812 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.043917 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.044008 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.044077 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:26 crc kubenswrapper[4632]: E0313 10:06:26.044615 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:26 crc kubenswrapper[4632]: E0313 10:06:26.044917 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:26 crc kubenswrapper[4632]: E0313 10:06:26.045058 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:26 crc kubenswrapper[4632]: E0313 10:06:26.045168 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.045744 4632 scope.go:117] "RemoveContainer" containerID="85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.231697 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovnkube-controller/2.log" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.233789 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerStarted","Data":"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b"} Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.234230 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.251392 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"caab849a-f2dd-453b-85cc-768f57800789\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://951a71f1358e164fefc6fbe9404cf1d0e387d58b0bc1d060eeca64a85e0fc08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e800b40d3f99e6d38fa7e28f06be6260e6ebe4bb5e7c73de6734d0617092ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0313 10:03:50.495764 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0313 10:03:50.504436 1 observer_polling.go:159] Starting file observer\\\\nI0313 10:03:50.582400 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 10:03:50.592133 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0313 10:04:18.170496 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0313 10:04:18.170641 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8207ad7aa524df5af853ad8235c24a6addbc04e168248391629f00124901672d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc37f87081e3682692ee20f12c80aa65fbcb8604b381f313b8073f8019b96dbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2666e43569f4f3fb7d9deec4b70dc873d86a3a7c4ac2fac7eea45198e35ecf3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.265337 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.276166 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.287268 4632 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.300599 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48bcc5861bda7a15e45c892fa67ba73299d99e896f36f2cb68274a659ec5d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:06:15Z\\\",\\\"message\\\":\\\"2026-03-13T10:05:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68d567cc-883d-42b1-94b2-7210d6f53151\\\\n2026-03-13T10:05:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68d567cc-883d-42b1-94b2-7210d6f53151 to /host/opt/cni/bin/\\\\n2026-03-13T10:05:30Z [verbose] multus-daemon started\\\\n2026-03-13T10:05:30Z [verbose] Readiness Indicator file check\\\\n2026-03-13T10:06:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.320831 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1443078dbf73aa095c42c9554ca6cfd14a0b7b3b58fdf9f9ca610ea8ae4faaf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.335588 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 
10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.346114 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.358254 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.372700 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3570521a-40ff-48d4-a6c2-ef53f64eca38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba32b01674f111328387620c028407e11c9d44b3154c3dce8f415f79e1db54c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd8f71a5e4bfa40758e7d51545f1b2eff43f11071060201770a574f89d391bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc4f274e9e301fdd8acbd720c998a2aa1c5e00df4fc254bcacab4acb539b8ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.384761 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.396272 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.414407 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f88d0230bc4958132c1c8a67c55a1e41f6e9153
4d69eb1a8c5061d17838098b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:57Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0313 10:05:57.066140 6990 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0313 10:05:57.066148 6990 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066155 6990 services_controller.go:453] Built service openshift-marketplace/marketplace-operator-metrics template LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066161 6990 services_controller.go:454] Service openshift-marketplace/marketplace-operator-metrics for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0313 10:05:57.066185 6990 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0313 10:05:57.066204 6990 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 1.29706ms\\\\nF0313 10:05:57.066212 6990 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.425771 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.440018 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.454303 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.469507 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:26 crc kubenswrapper[4632]: I0313 10:06:26.482008 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:26Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.043596 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:28 crc kubenswrapper[4632]: E0313 10:06:28.044070 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.044087 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:28 crc kubenswrapper[4632]: E0313 10:06:28.044209 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.043626 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.043893 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:28 crc kubenswrapper[4632]: E0313 10:06:28.044277 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:28 crc kubenswrapper[4632]: E0313 10:06:28.044320 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.057726 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7bcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z2vlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.071988 4632 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894cdc70-0747-4975-a22f-0dbd657e91a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0313 10:04:51.224345 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 10:04:51.224743 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0313 10:04:51.231279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3511042304/tls.crt::/tmp/serving-cert-3511042304/tls.key\\\\\\\"\\\\nI0313 10:04:51.512346 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0313 10:04:51.516641 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0313 10:04:51.516667 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0313 10:04:51.516694 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0313 10:04:51.516704 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0313 10:04:51.524417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0313 10:04:51.524471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0313 10:04:51.524490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 10:04:51.524495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 10:04:51.524500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 10:04:51.524505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 10:04:51.524633 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0313 10:04:51.529203 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:04:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.084587 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3570521a-40ff-48d4-a6c2-ef53f64eca38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba32b01674f111328387620c028407e11c9d44b3154c3dce8f415f79e1db54c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd8f71a5e4bfa40758e7d51545f1b2eff43f11071060201770a574f89d391bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc4f274e9e301fdd8acbd720c998a2aa1c5e00df4fc254bcacab4acb539b8ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452fd4724dbce515315ce137b8477166e605db76fec46d7b0ab23756c3cf1c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.100452 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.114440 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdb8c440e2f83530ee3fd4be3d39b21575674047ecba0e26719eb06bde38dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39037d4d6f8345706dd36a2591ead2b263a8f0bcc4558684d2c06a5eccfbfbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.141508 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b40c6b3-0061-4224-82d5-3ccf67998722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f88d0230bc4958132c1c8a67c55a1e41f6e9153
4d69eb1a8c5061d17838098b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:05:57Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0313 10:05:57.066140 6990 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0313 10:05:57.066148 6990 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066155 6990 services_controller.go:453] Built service openshift-marketplace/marketplace-operator-metrics template LB for network=default: []services.LB{}\\\\nI0313 10:05:57.066161 6990 services_controller.go:454] Service openshift-marketplace/marketplace-operator-metrics for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0313 10:05:57.066185 6990 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0313 10:05:57.066204 6990 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 1.29706ms\\\\nF0313 10:05:57.066212 6990 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6cl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qb725\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.151896 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeb823b-9a8c-403a-9a60-1d74ba0fbffe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da62efee0b0e5a420abd2f18aae7b1ad532b0dd5dcda4d36d4efa9b039bd8811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3129c4f8116cd2f517b7c37b8d2af3f30c4e028d18bacf2f99042ec954b60611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.166853 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: E0313 10:06:28.167072 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.182073 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.198263 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n55jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b29b9ad7-8cc9-434f-8731-a86265c383fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8c3d39bd1ac3290c35d993771da5d7915b468b7376aadccf7b459f65dc7138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgn9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n55jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.212072 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zwlc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a50974e-f938-40f7-ace5-2a3b4cb1f3e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cdf9ec894d29357b7b171dd1552444b97348f86d71e58afd9ab6bae1a05654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq5zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zwlc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.226678 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"caab849a-f2dd-453b-85cc-768f57800789\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://951a71f1358e164fefc6fbe9404cf1d0e387d58b0bc1d060eeca64a85e0fc08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59e800b40d3f99e6d38fa7e28f06be6260e6ebe4bb5e7c73de6734d0617092ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T10:04:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0313 10:03:50.495764 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0313 10:03:50.504436 1 observer_polling.go:159] Starting file observer\\\\nI0313 10:03:50.582400 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 10:03:50.592133 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0313 10:04:18.170496 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0313 10:04:18.170641 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8207ad7aa524df5af853ad8235c24a6addbc04e168248391629f00124901672d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc37f87081e3682692ee20f12c80aa65fbcb8604b381f313b8073f8019b96dbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2666e43569f4f3fb7d9deec4b70dc873d86a3a7c4ac2fac7eea45198e35ecf3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-03-13T10:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:03:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.243229 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3ab26f24c338001da561ca80dccbab8a99da54054f6b7ddd2b0d5ac02f7dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.257612 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d77b18a7-7ad9-4bf5-bff5-da45878af7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74ac702432406b147f9db010a8945c3e54f6f15a64346e895833c39bcc8f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zkscb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.272002 4632 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8dc5225d33306dd28d6043b50b88e39a677356681187c0f881966e12e9494c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.288502 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gqf22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ec8e301-3037-4de0-94d2-32c49709660e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48bcc5861bda7a15e45c892fa67ba73299d99e896f36f2cb68274a659ec5d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T10:06:15Z\\\",\\\"message\\\":\\\"2026-03-13T10:05:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68d567cc-883d-42b1-94b2-7210d6f53151\\\\n2026-03-13T10:05:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68d567cc-883d-42b1-94b2-7210d6f53151 to /host/opt/cni/bin/\\\\n2026-03-13T10:05:30Z [verbose] multus-daemon started\\\\n2026-03-13T10:05:30Z [verbose] Readiness Indicator file check\\\\n2026-03-13T10:06:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gqf22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.305394 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b054ca08-1d09-4eca-a608-eb5b9323959a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1443078dbf73aa095c42c9554ca6cfd14a0b7b3b58fdf9f9ca610ea8ae4faaf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5751cc23a251bd0c422190506273ba218a1268ad61dde76e71ca7fca7db0e35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2aec1c6df1b2f1a979a083b85e27dedfee5df54edcd6072534e3e6336f0791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b11c0fe86b3039760a8f2421d22ecdbf9a435ae7afbec2af53c04d7b7326655e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc94caceb393a1194da38ccf2d8cb2847902c15e49246643d4c2fbd259f71a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c5c8753ebd5c621b714f9ad16bfcbeb42cb5b41a65a0bf8262e9d9a861e0a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be5cd3ab03fb352008bc7646fa1f6f5f4825c107aef63eb0697721666d30cd41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T10:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T10:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9s5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qlc8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:28 crc kubenswrapper[4632]: I0313 10:06:28.319204 4632 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c542d5-8c38-4243-8af7-cfc0d8e22773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T10:05:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b54038392eeeed9eefaa615e0f9a4a34341efebcd6f6e628d7a911545ca154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f678e4bdb921e73ed7f26a9d4565fc5668c94082003bb79357a2bf793b06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T10:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffbwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T10:05:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kbtt2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:28Z is after 2025-08-24T17:21:41Z" Mar 13 
10:06:30 crc kubenswrapper[4632]: I0313 10:06:30.043512 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:30 crc kubenswrapper[4632]: I0313 10:06:30.043665 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:30 crc kubenswrapper[4632]: E0313 10:06:30.043783 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:30 crc kubenswrapper[4632]: I0313 10:06:30.043540 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:30 crc kubenswrapper[4632]: E0313 10:06:30.043868 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:30 crc kubenswrapper[4632]: I0313 10:06:30.043510 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:30 crc kubenswrapper[4632]: E0313 10:06:30.044263 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:30 crc kubenswrapper[4632]: E0313 10:06:30.044341 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.005534 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.005601 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.005615 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.005635 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.005650 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:32Z","lastTransitionTime":"2026-03-13T10:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:32 crc kubenswrapper[4632]: E0313 10:06:32.022001 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.026093 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.026153 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.026164 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.026184 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.026196 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:32Z","lastTransitionTime":"2026-03-13T10:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:32 crc kubenswrapper[4632]: E0313 10:06:32.039869 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.043512 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.043665 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.043723 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.043810 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:32 crc kubenswrapper[4632]: E0313 10:06:32.043853 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.044013 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.044039 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.044050 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.044063 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.044075 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:32Z","lastTransitionTime":"2026-03-13T10:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:32 crc kubenswrapper[4632]: E0313 10:06:32.044128 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:32 crc kubenswrapper[4632]: E0313 10:06:32.043968 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:32 crc kubenswrapper[4632]: E0313 10:06:32.044216 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:32 crc kubenswrapper[4632]: E0313 10:06:32.058892 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.061994 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.062026 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.062038 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.062055 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.062065 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:32Z","lastTransitionTime":"2026-03-13T10:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:32 crc kubenswrapper[4632]: E0313 10:06:32.074338 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.077990 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.078021 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.078031 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.078044 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:32 crc kubenswrapper[4632]: I0313 10:06:32.078053 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:32Z","lastTransitionTime":"2026-03-13T10:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:32 crc kubenswrapper[4632]: E0313 10:06:32.088530 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T10:06:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5d63e17-4c81-494f-81b9-40163ac26c6b\\\",\\\"systemUUID\\\":\\\"e8be0c8f-16ef-4a1d-b190-772a9f649bc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T10:06:32Z is after 2025-08-24T17:21:41Z" Mar 13 10:06:32 crc kubenswrapper[4632]: E0313 10:06:32.088637 4632 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 10:06:33 crc kubenswrapper[4632]: I0313 10:06:33.060722 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 13 10:06:33 crc 
kubenswrapper[4632]: E0313 10:06:33.168803 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:06:34 crc kubenswrapper[4632]: I0313 10:06:34.044140 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:34 crc kubenswrapper[4632]: E0313 10:06:34.044282 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:34 crc kubenswrapper[4632]: I0313 10:06:34.044299 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:34 crc kubenswrapper[4632]: E0313 10:06:34.044461 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:34 crc kubenswrapper[4632]: I0313 10:06:34.044165 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:34 crc kubenswrapper[4632]: E0313 10:06:34.044675 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:34 crc kubenswrapper[4632]: I0313 10:06:34.044807 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:34 crc kubenswrapper[4632]: E0313 10:06:34.044932 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:36 crc kubenswrapper[4632]: I0313 10:06:36.044106 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:36 crc kubenswrapper[4632]: I0313 10:06:36.045058 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:36 crc kubenswrapper[4632]: I0313 10:06:36.044126 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:36 crc kubenswrapper[4632]: I0313 10:06:36.044106 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:36 crc kubenswrapper[4632]: E0313 10:06:36.045120 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:36 crc kubenswrapper[4632]: E0313 10:06:36.045199 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:36 crc kubenswrapper[4632]: E0313 10:06:36.045291 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:36 crc kubenswrapper[4632]: E0313 10:06:36.045362 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:38 crc kubenswrapper[4632]: I0313 10:06:38.044314 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:38 crc kubenswrapper[4632]: I0313 10:06:38.044412 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:38 crc kubenswrapper[4632]: I0313 10:06:38.044374 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:38 crc kubenswrapper[4632]: I0313 10:06:38.044317 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:38 crc kubenswrapper[4632]: E0313 10:06:38.044623 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:38 crc kubenswrapper[4632]: E0313 10:06:38.044697 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:38 crc kubenswrapper[4632]: E0313 10:06:38.044869 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:38 crc kubenswrapper[4632]: E0313 10:06:38.044977 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:38 crc kubenswrapper[4632]: I0313 10:06:38.111477 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-qlc8m" podStartSLOduration=119.111456792 podStartE2EDuration="1m59.111456792s" podCreationTimestamp="2026-03-13 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:06:38.087376923 +0000 UTC m=+172.109907076" watchObservedRunningTime="2026-03-13 10:06:38.111456792 +0000 UTC m=+172.133986925" Mar 13 10:06:38 crc kubenswrapper[4632]: I0313 10:06:38.126323 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kbtt2" podStartSLOduration=117.126300993 podStartE2EDuration="1m57.126300993s" podCreationTimestamp="2026-03-13 10:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:06:38.111874815 +0000 UTC m=+172.134404948" watchObservedRunningTime="2026-03-13 10:06:38.126300993 +0000 UTC m=+172.148831126" Mar 13 10:06:38 crc kubenswrapper[4632]: I0313 10:06:38.163168 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gqf22" podStartSLOduration=119.163149259 podStartE2EDuration="1m59.163149259s" podCreationTimestamp="2026-03-13 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:06:38.162526729 +0000 UTC m=+172.185056862" watchObservedRunningTime="2026-03-13 10:06:38.163149259 +0000 UTC m=+172.185679402" Mar 13 10:06:38 crc kubenswrapper[4632]: E0313 10:06:38.169787 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:06:38 crc kubenswrapper[4632]: I0313 10:06:38.210247 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.210223652 podStartE2EDuration="45.210223652s" podCreationTimestamp="2026-03-13 10:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:06:38.18958499 +0000 UTC m=+172.212115123" watchObservedRunningTime="2026-03-13 10:06:38.210223652 +0000 UTC m=+172.232753785" Mar 13 10:06:38 crc kubenswrapper[4632]: I0313 10:06:38.275338 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.275319785 podStartE2EDuration="5.275319785s" podCreationTimestamp="2026-03-13 10:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:06:38.273825799 +0000 UTC m=+172.296355952" watchObservedRunningTime="2026-03-13 10:06:38.275319785 +0000 UTC m=+172.297849928" Mar 13 10:06:38 crc kubenswrapper[4632]: I0313 10:06:38.294542 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=87.294525392 podStartE2EDuration="1m27.294525392s" podCreationTimestamp="2026-03-13 10:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:06:38.294318185 +0000 UTC m=+172.316848318" watchObservedRunningTime="2026-03-13 10:06:38.294525392 +0000 UTC m=+172.317055545" Mar 13 10:06:38 crc kubenswrapper[4632]: I0313 10:06:38.319551 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-n55jt" podStartSLOduration=119.319533369 podStartE2EDuration="1m59.319533369s" podCreationTimestamp="2026-03-13 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:06:38.318502738 +0000 UTC m=+172.341032881" watchObservedRunningTime="2026-03-13 10:06:38.319533369 +0000 UTC m=+172.342063512" Mar 13 10:06:38 crc kubenswrapper[4632]: I0313 10:06:38.356779 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podStartSLOduration=118.356762647 podStartE2EDuration="1m58.356762647s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:06:38.356296591 +0000 UTC m=+172.378826744" watchObservedRunningTime="2026-03-13 10:06:38.356762647 +0000 UTC m=+172.379292780" Mar 13 10:06:38 crc kubenswrapper[4632]: I0313 10:06:38.357023 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-zwlc8" podStartSLOduration=119.357018584 podStartE2EDuration="1m59.357018584s" podCreationTimestamp="2026-03-13 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:06:38.328674864 +0000 UTC m=+172.351205007" watchObservedRunningTime="2026-03-13 10:06:38.357018584 +0000 UTC m=+172.379548717" Mar 13 10:06:38 crc kubenswrapper[4632]: I0313 
10:06:38.367646 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=81.367636045 podStartE2EDuration="1m21.367636045s" podCreationTimestamp="2026-03-13 10:05:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:06:38.367552052 +0000 UTC m=+172.390082185" watchObservedRunningTime="2026-03-13 10:06:38.367636045 +0000 UTC m=+172.390166188" Mar 13 10:06:38 crc kubenswrapper[4632]: I0313 10:06:38.389485 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podStartSLOduration=119.389461452 podStartE2EDuration="1m59.389461452s" podCreationTimestamp="2026-03-13 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:06:38.389312558 +0000 UTC m=+172.411842701" watchObservedRunningTime="2026-03-13 10:06:38.389461452 +0000 UTC m=+172.411991595" Mar 13 10:06:38 crc kubenswrapper[4632]: I0313 10:06:38.420570 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=51.420549709 podStartE2EDuration="51.420549709s" podCreationTimestamp="2026-03-13 10:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:06:38.407854784 +0000 UTC m=+172.430384927" watchObservedRunningTime="2026-03-13 10:06:38.420549709 +0000 UTC m=+172.443079842" Mar 13 10:06:40 crc kubenswrapper[4632]: I0313 10:06:40.043225 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:40 crc kubenswrapper[4632]: I0313 10:06:40.043262 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:40 crc kubenswrapper[4632]: I0313 10:06:40.043262 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:40 crc kubenswrapper[4632]: E0313 10:06:40.043386 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:40 crc kubenswrapper[4632]: I0313 10:06:40.043421 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:40 crc kubenswrapper[4632]: E0313 10:06:40.043540 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:40 crc kubenswrapper[4632]: E0313 10:06:40.043613 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:40 crc kubenswrapper[4632]: E0313 10:06:40.043672 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:40 crc kubenswrapper[4632]: I0313 10:06:40.514540 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" probeResult="failure" output="" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.044072 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.044131 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.044134 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.044132 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:42 crc kubenswrapper[4632]: E0313 10:06:42.044343 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:42 crc kubenswrapper[4632]: E0313 10:06:42.044397 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:42 crc kubenswrapper[4632]: E0313 10:06:42.044569 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:42 crc kubenswrapper[4632]: E0313 10:06:42.044728 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.105178 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.105223 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.105232 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.105246 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.105254 4632 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T10:06:42Z","lastTransitionTime":"2026-03-13T10:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.154109 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz"] Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.154541 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.159744 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.159971 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.159977 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.160242 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.281155 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9d64d959-3f9f-43eb-b37f-79c8ec6c38bd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5lrdz\" (UID: \"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.281309 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d64d959-3f9f-43eb-b37f-79c8ec6c38bd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5lrdz\" (UID: \"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.281394 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d64d959-3f9f-43eb-b37f-79c8ec6c38bd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5lrdz\" (UID: \"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.281451 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9d64d959-3f9f-43eb-b37f-79c8ec6c38bd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5lrdz\" (UID: \"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.281555 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9d64d959-3f9f-43eb-b37f-79c8ec6c38bd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5lrdz\" (UID: \"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.363501 4632 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.373868 4632 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.382221 4632 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d64d959-3f9f-43eb-b37f-79c8ec6c38bd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5lrdz\" (UID: \"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.382267 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d64d959-3f9f-43eb-b37f-79c8ec6c38bd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5lrdz\" (UID: \"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.382303 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9d64d959-3f9f-43eb-b37f-79c8ec6c38bd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5lrdz\" (UID: \"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.382329 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9d64d959-3f9f-43eb-b37f-79c8ec6c38bd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5lrdz\" (UID: \"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.382355 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9d64d959-3f9f-43eb-b37f-79c8ec6c38bd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5lrdz\" (UID: \"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.382405 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9d64d959-3f9f-43eb-b37f-79c8ec6c38bd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5lrdz\" (UID: \"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.382487 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9d64d959-3f9f-43eb-b37f-79c8ec6c38bd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5lrdz\" (UID: \"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.383242 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9d64d959-3f9f-43eb-b37f-79c8ec6c38bd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5lrdz\" (UID: \"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.387893 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d64d959-3f9f-43eb-b37f-79c8ec6c38bd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5lrdz\" (UID: \"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.406034 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d64d959-3f9f-43eb-b37f-79c8ec6c38bd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5lrdz\" (UID: \"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: I0313 10:06:42.471381 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" Mar 13 10:06:42 crc kubenswrapper[4632]: W0313 10:06:42.484905 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d64d959_3f9f_43eb_b37f_79c8ec6c38bd.slice/crio-71750ded659d29caf6b277a47cbf881699484dc3bfd621c59f2723f533ebae09 WatchSource:0}: Error finding container 71750ded659d29caf6b277a47cbf881699484dc3bfd621c59f2723f533ebae09: Status 404 returned error can't find the container with id 71750ded659d29caf6b277a47cbf881699484dc3bfd621c59f2723f533ebae09 Mar 13 10:06:43 crc kubenswrapper[4632]: E0313 10:06:43.171188 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:06:43 crc kubenswrapper[4632]: I0313 10:06:43.388972 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" event={"ID":"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd","Type":"ContainerStarted","Data":"5a8ef2bbbb0db2a9f2a7bfaaba16dd25db7a7a570f20b3976fb4a807b682cbde"} Mar 13 10:06:43 crc kubenswrapper[4632]: I0313 10:06:43.389325 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" event={"ID":"9d64d959-3f9f-43eb-b37f-79c8ec6c38bd","Type":"ContainerStarted","Data":"71750ded659d29caf6b277a47cbf881699484dc3bfd621c59f2723f533ebae09"} Mar 13 10:06:43 crc kubenswrapper[4632]: I0313 10:06:43.409810 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lrdz" podStartSLOduration=124.409777729 podStartE2EDuration="2m4.409777729s" podCreationTimestamp="2026-03-13 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:06:43.408747407 +0000 UTC m=+177.431277560" watchObservedRunningTime="2026-03-13 10:06:43.409777729 +0000 UTC m=+177.432307862" Mar 13 10:06:44 crc kubenswrapper[4632]: I0313 10:06:44.043817 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:44 crc kubenswrapper[4632]: I0313 10:06:44.043914 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:44 crc kubenswrapper[4632]: E0313 10:06:44.044046 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:44 crc kubenswrapper[4632]: I0313 10:06:44.044126 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:44 crc kubenswrapper[4632]: E0313 10:06:44.044149 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:44 crc kubenswrapper[4632]: I0313 10:06:44.043914 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:44 crc kubenswrapper[4632]: E0313 10:06:44.044404 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:44 crc kubenswrapper[4632]: E0313 10:06:44.044479 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:46 crc kubenswrapper[4632]: I0313 10:06:46.044274 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:46 crc kubenswrapper[4632]: I0313 10:06:46.044342 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:46 crc kubenswrapper[4632]: I0313 10:06:46.044418 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:46 crc kubenswrapper[4632]: I0313 10:06:46.045138 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:46 crc kubenswrapper[4632]: E0313 10:06:46.045361 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:46 crc kubenswrapper[4632]: E0313 10:06:46.045824 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:46 crc kubenswrapper[4632]: E0313 10:06:46.046164 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:46 crc kubenswrapper[4632]: E0313 10:06:46.046451 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:47 crc kubenswrapper[4632]: I0313 10:06:47.407340 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovnkube-controller/3.log" Mar 13 10:06:47 crc kubenswrapper[4632]: I0313 10:06:47.408028 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovnkube-controller/2.log" Mar 13 10:06:47 crc kubenswrapper[4632]: I0313 10:06:47.410617 4632 generic.go:334] "Generic (PLEG): container finished" podID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerID="8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b" exitCode=1 Mar 13 10:06:47 crc kubenswrapper[4632]: I0313 10:06:47.410663 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerDied","Data":"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b"} Mar 13 10:06:47 crc kubenswrapper[4632]: I0313 10:06:47.410697 4632 scope.go:117] "RemoveContainer" containerID="85cd256cfb44a7c0f0961b308f26a8fdd3f76e9371a32d157163576e9eec7dfe" Mar 13 10:06:47 crc kubenswrapper[4632]: I0313 10:06:47.412197 4632 scope.go:117] "RemoveContainer" containerID="8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b" Mar 13 10:06:47 crc kubenswrapper[4632]: E0313 10:06:47.412370 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qb725_openshift-ovn-kubernetes(3b40c6b3-0061-4224-82d5-3ccf67998722)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" Mar 13 10:06:48 crc kubenswrapper[4632]: I0313 10:06:48.043256 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:48 crc kubenswrapper[4632]: I0313 10:06:48.043276 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:48 crc kubenswrapper[4632]: I0313 10:06:48.043335 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:48 crc kubenswrapper[4632]: I0313 10:06:48.043352 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:48 crc kubenswrapper[4632]: E0313 10:06:48.045462 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:48 crc kubenswrapper[4632]: E0313 10:06:48.045646 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:48 crc kubenswrapper[4632]: E0313 10:06:48.045779 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:48 crc kubenswrapper[4632]: E0313 10:06:48.045983 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:48 crc kubenswrapper[4632]: E0313 10:06:48.172659 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:06:48 crc kubenswrapper[4632]: I0313 10:06:48.415233 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovnkube-controller/3.log" Mar 13 10:06:50 crc kubenswrapper[4632]: I0313 10:06:50.043413 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:50 crc kubenswrapper[4632]: I0313 10:06:50.043482 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:50 crc kubenswrapper[4632]: E0313 10:06:50.043589 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:50 crc kubenswrapper[4632]: I0313 10:06:50.043663 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:50 crc kubenswrapper[4632]: E0313 10:06:50.043790 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:50 crc kubenswrapper[4632]: I0313 10:06:50.043426 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:50 crc kubenswrapper[4632]: E0313 10:06:50.043903 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:50 crc kubenswrapper[4632]: E0313 10:06:50.044020 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:52 crc kubenswrapper[4632]: I0313 10:06:52.043615 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:52 crc kubenswrapper[4632]: I0313 10:06:52.043773 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:52 crc kubenswrapper[4632]: E0313 10:06:52.043788 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:52 crc kubenswrapper[4632]: I0313 10:06:52.043926 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:52 crc kubenswrapper[4632]: I0313 10:06:52.044021 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:52 crc kubenswrapper[4632]: E0313 10:06:52.044045 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:52 crc kubenswrapper[4632]: E0313 10:06:52.044174 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:52 crc kubenswrapper[4632]: E0313 10:06:52.044302 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:53 crc kubenswrapper[4632]: E0313 10:06:53.174466 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:06:54 crc kubenswrapper[4632]: I0313 10:06:54.043806 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:54 crc kubenswrapper[4632]: I0313 10:06:54.044021 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:54 crc kubenswrapper[4632]: I0313 10:06:54.043894 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:54 crc kubenswrapper[4632]: I0313 10:06:54.044015 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:54 crc kubenswrapper[4632]: E0313 10:06:54.044218 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:54 crc kubenswrapper[4632]: E0313 10:06:54.044261 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:54 crc kubenswrapper[4632]: E0313 10:06:54.044405 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:54 crc kubenswrapper[4632]: E0313 10:06:54.044531 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:56 crc kubenswrapper[4632]: I0313 10:06:56.043995 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:56 crc kubenswrapper[4632]: I0313 10:06:56.044032 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:56 crc kubenswrapper[4632]: E0313 10:06:56.045235 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:56 crc kubenswrapper[4632]: I0313 10:06:56.044101 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:56 crc kubenswrapper[4632]: E0313 10:06:56.045569 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:56 crc kubenswrapper[4632]: E0313 10:06:56.045389 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:56 crc kubenswrapper[4632]: I0313 10:06:56.044065 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:56 crc kubenswrapper[4632]: E0313 10:06:56.045848 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:58 crc kubenswrapper[4632]: I0313 10:06:58.044158 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:06:58 crc kubenswrapper[4632]: I0313 10:06:58.044164 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:06:58 crc kubenswrapper[4632]: I0313 10:06:58.044178 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:06:58 crc kubenswrapper[4632]: I0313 10:06:58.044195 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:06:58 crc kubenswrapper[4632]: E0313 10:06:58.045280 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:06:58 crc kubenswrapper[4632]: E0313 10:06:58.045465 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:06:58 crc kubenswrapper[4632]: E0313 10:06:58.045564 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:06:58 crc kubenswrapper[4632]: E0313 10:06:58.045520 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:06:58 crc kubenswrapper[4632]: E0313 10:06:58.176234 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:07:00 crc kubenswrapper[4632]: I0313 10:07:00.043739 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:00 crc kubenswrapper[4632]: I0313 10:07:00.043795 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:00 crc kubenswrapper[4632]: I0313 10:07:00.043754 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:00 crc kubenswrapper[4632]: E0313 10:07:00.043911 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:00 crc kubenswrapper[4632]: E0313 10:07:00.043994 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:00 crc kubenswrapper[4632]: E0313 10:07:00.044074 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:00 crc kubenswrapper[4632]: I0313 10:07:00.045017 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:00 crc kubenswrapper[4632]: E0313 10:07:00.045169 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:02 crc kubenswrapper[4632]: I0313 10:07:02.043915 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:02 crc kubenswrapper[4632]: I0313 10:07:02.044016 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:02 crc kubenswrapper[4632]: I0313 10:07:02.044054 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:02 crc kubenswrapper[4632]: I0313 10:07:02.043964 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:02 crc kubenswrapper[4632]: E0313 10:07:02.044141 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:02 crc kubenswrapper[4632]: E0313 10:07:02.044234 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:02 crc kubenswrapper[4632]: E0313 10:07:02.044296 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:02 crc kubenswrapper[4632]: E0313 10:07:02.044452 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:02 crc kubenswrapper[4632]: I0313 10:07:02.466023 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gqf22_4ec8e301-3037-4de0-94d2-32c49709660e/kube-multus/1.log" Mar 13 10:07:02 crc kubenswrapper[4632]: I0313 10:07:02.466654 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gqf22_4ec8e301-3037-4de0-94d2-32c49709660e/kube-multus/0.log" Mar 13 10:07:02 crc kubenswrapper[4632]: I0313 10:07:02.466719 4632 generic.go:334] "Generic (PLEG): container finished" podID="4ec8e301-3037-4de0-94d2-32c49709660e" containerID="e48bcc5861bda7a15e45c892fa67ba73299d99e896f36f2cb68274a659ec5d34" exitCode=1 Mar 13 10:07:02 crc kubenswrapper[4632]: I0313 10:07:02.466760 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gqf22" event={"ID":"4ec8e301-3037-4de0-94d2-32c49709660e","Type":"ContainerDied","Data":"e48bcc5861bda7a15e45c892fa67ba73299d99e896f36f2cb68274a659ec5d34"} Mar 13 10:07:02 crc kubenswrapper[4632]: I0313 10:07:02.466807 4632 scope.go:117] "RemoveContainer" containerID="0ae022ab87d0aedd5bbc6440acda6466cde0d3712108042da1225ea73ca35d6d" Mar 13 10:07:02 crc kubenswrapper[4632]: I0313 10:07:02.467217 4632 scope.go:117] "RemoveContainer" containerID="e48bcc5861bda7a15e45c892fa67ba73299d99e896f36f2cb68274a659ec5d34" Mar 13 10:07:02 crc kubenswrapper[4632]: E0313 10:07:02.467386 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gqf22_openshift-multus(4ec8e301-3037-4de0-94d2-32c49709660e)\"" pod="openshift-multus/multus-gqf22" podUID="4ec8e301-3037-4de0-94d2-32c49709660e" Mar 13 10:07:03 crc kubenswrapper[4632]: I0313 10:07:03.045121 4632 scope.go:117] "RemoveContainer" containerID="8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b" Mar 13 10:07:03 crc kubenswrapper[4632]: E0313 10:07:03.045520 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qb725_openshift-ovn-kubernetes(3b40c6b3-0061-4224-82d5-3ccf67998722)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" Mar 13 10:07:03 crc kubenswrapper[4632]: E0313 10:07:03.177669 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:07:03 crc kubenswrapper[4632]: I0313 10:07:03.471130 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gqf22_4ec8e301-3037-4de0-94d2-32c49709660e/kube-multus/1.log" Mar 13 10:07:04 crc kubenswrapper[4632]: I0313 10:07:04.043718 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:04 crc kubenswrapper[4632]: I0313 10:07:04.043802 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:04 crc kubenswrapper[4632]: E0313 10:07:04.044194 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:04 crc kubenswrapper[4632]: I0313 10:07:04.043914 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:04 crc kubenswrapper[4632]: I0313 10:07:04.043813 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:04 crc kubenswrapper[4632]: E0313 10:07:04.044361 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:04 crc kubenswrapper[4632]: E0313 10:07:04.044282 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:04 crc kubenswrapper[4632]: E0313 10:07:04.044461 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:06 crc kubenswrapper[4632]: I0313 10:07:06.044281 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:06 crc kubenswrapper[4632]: E0313 10:07:06.044455 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:06 crc kubenswrapper[4632]: I0313 10:07:06.044542 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:06 crc kubenswrapper[4632]: E0313 10:07:06.044599 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:06 crc kubenswrapper[4632]: I0313 10:07:06.044648 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:06 crc kubenswrapper[4632]: E0313 10:07:06.044701 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:06 crc kubenswrapper[4632]: I0313 10:07:06.045362 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:06 crc kubenswrapper[4632]: E0313 10:07:06.046017 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:08 crc kubenswrapper[4632]: I0313 10:07:08.044076 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:08 crc kubenswrapper[4632]: I0313 10:07:08.044090 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:08 crc kubenswrapper[4632]: I0313 10:07:08.044113 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:08 crc kubenswrapper[4632]: I0313 10:07:08.044183 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:08 crc kubenswrapper[4632]: E0313 10:07:08.045317 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:08 crc kubenswrapper[4632]: E0313 10:07:08.045420 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:08 crc kubenswrapper[4632]: E0313 10:07:08.045483 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:08 crc kubenswrapper[4632]: E0313 10:07:08.045536 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:08 crc kubenswrapper[4632]: E0313 10:07:08.179337 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:07:10 crc kubenswrapper[4632]: I0313 10:07:10.043495 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:10 crc kubenswrapper[4632]: E0313 10:07:10.043665 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:10 crc kubenswrapper[4632]: I0313 10:07:10.043913 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:10 crc kubenswrapper[4632]: E0313 10:07:10.043998 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:10 crc kubenswrapper[4632]: I0313 10:07:10.044121 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:10 crc kubenswrapper[4632]: E0313 10:07:10.044167 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:10 crc kubenswrapper[4632]: I0313 10:07:10.044275 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:10 crc kubenswrapper[4632]: E0313 10:07:10.044316 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:12 crc kubenswrapper[4632]: I0313 10:07:12.043959 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:12 crc kubenswrapper[4632]: I0313 10:07:12.044013 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:12 crc kubenswrapper[4632]: I0313 10:07:12.043971 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:12 crc kubenswrapper[4632]: I0313 10:07:12.044186 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:12 crc kubenswrapper[4632]: E0313 10:07:12.044200 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:12 crc kubenswrapper[4632]: E0313 10:07:12.044265 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:12 crc kubenswrapper[4632]: E0313 10:07:12.044305 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:12 crc kubenswrapper[4632]: E0313 10:07:12.044344 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:13 crc kubenswrapper[4632]: I0313 10:07:13.044277 4632 scope.go:117] "RemoveContainer" containerID="e48bcc5861bda7a15e45c892fa67ba73299d99e896f36f2cb68274a659ec5d34" Mar 13 10:07:13 crc kubenswrapper[4632]: E0313 10:07:13.180854 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:07:13 crc kubenswrapper[4632]: I0313 10:07:13.509277 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gqf22_4ec8e301-3037-4de0-94d2-32c49709660e/kube-multus/1.log" Mar 13 10:07:13 crc kubenswrapper[4632]: I0313 10:07:13.509334 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gqf22" event={"ID":"4ec8e301-3037-4de0-94d2-32c49709660e","Type":"ContainerStarted","Data":"5fd2699ddbdedbd54069c44af8e38bc058b347d99af772939ae6ec1d10220723"} Mar 13 10:07:14 crc kubenswrapper[4632]: I0313 10:07:14.043440 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:14 crc kubenswrapper[4632]: I0313 10:07:14.043484 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:14 crc kubenswrapper[4632]: I0313 10:07:14.043445 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:14 crc kubenswrapper[4632]: E0313 10:07:14.043715 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:14 crc kubenswrapper[4632]: I0313 10:07:14.043788 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:14 crc kubenswrapper[4632]: E0313 10:07:14.043828 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:14 crc kubenswrapper[4632]: E0313 10:07:14.043924 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:14 crc kubenswrapper[4632]: E0313 10:07:14.044021 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:14 crc kubenswrapper[4632]: I0313 10:07:14.044580 4632 scope.go:117] "RemoveContainer" containerID="8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b" Mar 13 10:07:14 crc kubenswrapper[4632]: E0313 10:07:14.044737 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qb725_openshift-ovn-kubernetes(3b40c6b3-0061-4224-82d5-3ccf67998722)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" Mar 13 10:07:16 crc kubenswrapper[4632]: I0313 10:07:16.043244 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:16 crc kubenswrapper[4632]: I0313 10:07:16.043320 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:16 crc kubenswrapper[4632]: E0313 10:07:16.043395 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:16 crc kubenswrapper[4632]: I0313 10:07:16.043339 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:16 crc kubenswrapper[4632]: E0313 10:07:16.043491 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:16 crc kubenswrapper[4632]: E0313 10:07:16.043697 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:16 crc kubenswrapper[4632]: I0313 10:07:16.044124 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:16 crc kubenswrapper[4632]: E0313 10:07:16.044221 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:18 crc kubenswrapper[4632]: I0313 10:07:18.043184 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:18 crc kubenswrapper[4632]: I0313 10:07:18.043184 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:18 crc kubenswrapper[4632]: I0313 10:07:18.043224 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.044374 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:18 crc kubenswrapper[4632]: I0313 10:07:18.044393 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.044456 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.044577 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.044634 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:18 crc kubenswrapper[4632]: I0313 10:07:18.071966 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.072089 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:09:20.072062916 +0000 UTC m=+334.094593049 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:18 crc kubenswrapper[4632]: I0313 10:07:18.072123 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:18 crc kubenswrapper[4632]: I0313 10:07:18.072170 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.072250 4632 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.072312 4632 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.072336 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:09:20.072323431 +0000 UTC m=+334.094853564 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.072355 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 10:09:20.072345381 +0000 UTC m=+334.094875514 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 10:07:18 crc kubenswrapper[4632]: I0313 10:07:18.173594 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:18 crc kubenswrapper[4632]: I0313 10:07:18.173677 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:18 crc kubenswrapper[4632]: I0313 10:07:18.173710 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs\") pod \"network-metrics-daemon-z2vlz\" (UID: \"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\") " pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.173838 4632 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.173837 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.173884 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.173897 4632 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.173861 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.173899 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs podName:ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad nodeName:}" failed. No retries permitted until 2026-03-13 10:09:20.173881433 +0000 UTC m=+334.196411576 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs") pod "network-metrics-daemon-z2vlz" (UID: "ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.174020 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-13 10:09:20.174003045 +0000 UTC m=+334.196533178 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.174070 4632 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.174164 4632 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.174220 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-13 10:09:20.174208439 +0000 UTC m=+334.196738682 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:07:18 crc kubenswrapper[4632]: E0313 10:07:18.182376 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:07:20 crc kubenswrapper[4632]: I0313 10:07:20.043796 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:20 crc kubenswrapper[4632]: I0313 10:07:20.043861 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:20 crc kubenswrapper[4632]: I0313 10:07:20.043805 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:20 crc kubenswrapper[4632]: E0313 10:07:20.043994 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:20 crc kubenswrapper[4632]: E0313 10:07:20.044064 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:20 crc kubenswrapper[4632]: I0313 10:07:20.043820 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:20 crc kubenswrapper[4632]: E0313 10:07:20.044205 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:20 crc kubenswrapper[4632]: E0313 10:07:20.044241 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:22 crc kubenswrapper[4632]: I0313 10:07:22.043866 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:22 crc kubenswrapper[4632]: I0313 10:07:22.043934 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:22 crc kubenswrapper[4632]: I0313 10:07:22.044082 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:22 crc kubenswrapper[4632]: E0313 10:07:22.044086 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:22 crc kubenswrapper[4632]: E0313 10:07:22.044246 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:22 crc kubenswrapper[4632]: E0313 10:07:22.044358 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:22 crc kubenswrapper[4632]: I0313 10:07:22.044417 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:22 crc kubenswrapper[4632]: E0313 10:07:22.044688 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:23 crc kubenswrapper[4632]: E0313 10:07:23.184088 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:07:24 crc kubenswrapper[4632]: I0313 10:07:24.044121 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:24 crc kubenswrapper[4632]: I0313 10:07:24.044121 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:24 crc kubenswrapper[4632]: E0313 10:07:24.044317 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:24 crc kubenswrapper[4632]: E0313 10:07:24.044372 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:24 crc kubenswrapper[4632]: I0313 10:07:24.044165 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:24 crc kubenswrapper[4632]: E0313 10:07:24.044446 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:24 crc kubenswrapper[4632]: I0313 10:07:24.044703 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:24 crc kubenswrapper[4632]: E0313 10:07:24.044789 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:26 crc kubenswrapper[4632]: I0313 10:07:26.043532 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:26 crc kubenswrapper[4632]: E0313 10:07:26.043666 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:26 crc kubenswrapper[4632]: I0313 10:07:26.043669 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:26 crc kubenswrapper[4632]: I0313 10:07:26.043532 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:26 crc kubenswrapper[4632]: E0313 10:07:26.043818 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:26 crc kubenswrapper[4632]: E0313 10:07:26.043732 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:26 crc kubenswrapper[4632]: I0313 10:07:26.044507 4632 scope.go:117] "RemoveContainer" containerID="8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b" Mar 13 10:07:26 crc kubenswrapper[4632]: E0313 10:07:26.044834 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qb725_openshift-ovn-kubernetes(3b40c6b3-0061-4224-82d5-3ccf67998722)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" Mar 13 10:07:26 crc kubenswrapper[4632]: I0313 10:07:26.044932 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:26 crc kubenswrapper[4632]: E0313 10:07:26.045210 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:28 crc kubenswrapper[4632]: I0313 10:07:28.044356 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:28 crc kubenswrapper[4632]: I0313 10:07:28.044415 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:28 crc kubenswrapper[4632]: I0313 10:07:28.044460 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:28 crc kubenswrapper[4632]: E0313 10:07:28.045655 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:28 crc kubenswrapper[4632]: E0313 10:07:28.045870 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:28 crc kubenswrapper[4632]: E0313 10:07:28.046017 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:28 crc kubenswrapper[4632]: I0313 10:07:28.046159 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:28 crc kubenswrapper[4632]: E0313 10:07:28.046267 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:28 crc kubenswrapper[4632]: E0313 10:07:28.185867 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:07:30 crc kubenswrapper[4632]: I0313 10:07:30.044000 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:30 crc kubenswrapper[4632]: I0313 10:07:30.044064 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:30 crc kubenswrapper[4632]: I0313 10:07:30.044075 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:30 crc kubenswrapper[4632]: E0313 10:07:30.044827 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:30 crc kubenswrapper[4632]: E0313 10:07:30.044512 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:30 crc kubenswrapper[4632]: E0313 10:07:30.044728 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:30 crc kubenswrapper[4632]: I0313 10:07:30.044101 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:30 crc kubenswrapper[4632]: E0313 10:07:30.044930 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:32 crc kubenswrapper[4632]: I0313 10:07:32.043981 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:32 crc kubenswrapper[4632]: I0313 10:07:32.044036 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:32 crc kubenswrapper[4632]: I0313 10:07:32.044049 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:32 crc kubenswrapper[4632]: I0313 10:07:32.044004 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:32 crc kubenswrapper[4632]: E0313 10:07:32.044152 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:32 crc kubenswrapper[4632]: E0313 10:07:32.044690 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:32 crc kubenswrapper[4632]: E0313 10:07:32.051518 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:32 crc kubenswrapper[4632]: E0313 10:07:32.052160 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:33 crc kubenswrapper[4632]: E0313 10:07:33.187094 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:07:34 crc kubenswrapper[4632]: I0313 10:07:34.044432 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:34 crc kubenswrapper[4632]: I0313 10:07:34.044494 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:34 crc kubenswrapper[4632]: I0313 10:07:34.044476 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:34 crc kubenswrapper[4632]: I0313 10:07:34.044447 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:34 crc kubenswrapper[4632]: E0313 10:07:34.044603 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:34 crc kubenswrapper[4632]: E0313 10:07:34.044673 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:34 crc kubenswrapper[4632]: E0313 10:07:34.044732 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:34 crc kubenswrapper[4632]: E0313 10:07:34.044786 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:36 crc kubenswrapper[4632]: I0313 10:07:36.044133 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:36 crc kubenswrapper[4632]: E0313 10:07:36.044275 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:36 crc kubenswrapper[4632]: I0313 10:07:36.044449 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:36 crc kubenswrapper[4632]: E0313 10:07:36.044518 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:36 crc kubenswrapper[4632]: I0313 10:07:36.044631 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:36 crc kubenswrapper[4632]: E0313 10:07:36.044675 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:36 crc kubenswrapper[4632]: I0313 10:07:36.044858 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:36 crc kubenswrapper[4632]: E0313 10:07:36.044963 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:38 crc kubenswrapper[4632]: I0313 10:07:38.043385 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:38 crc kubenswrapper[4632]: I0313 10:07:38.043396 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:38 crc kubenswrapper[4632]: I0313 10:07:38.043504 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:38 crc kubenswrapper[4632]: I0313 10:07:38.043509 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:38 crc kubenswrapper[4632]: E0313 10:07:38.044682 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:38 crc kubenswrapper[4632]: E0313 10:07:38.044568 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:38 crc kubenswrapper[4632]: E0313 10:07:38.045552 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:38 crc kubenswrapper[4632]: E0313 10:07:38.045667 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:38 crc kubenswrapper[4632]: I0313 10:07:38.046133 4632 scope.go:117] "RemoveContainer" containerID="8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b" Mar 13 10:07:38 crc kubenswrapper[4632]: E0313 10:07:38.188500 4632 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:07:38 crc kubenswrapper[4632]: I0313 10:07:38.588772 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovnkube-controller/3.log" Mar 13 10:07:38 crc kubenswrapper[4632]: I0313 10:07:38.590929 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerStarted","Data":"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776"} Mar 13 10:07:38 crc kubenswrapper[4632]: I0313 10:07:38.591426 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:07:38 crc kubenswrapper[4632]: I0313 10:07:38.925693 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-z2vlz"] Mar 13 10:07:38 crc kubenswrapper[4632]: I0313 10:07:38.925822 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:38 crc kubenswrapper[4632]: E0313 10:07:38.925919 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:40 crc kubenswrapper[4632]: I0313 10:07:40.044207 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:40 crc kubenswrapper[4632]: I0313 10:07:40.044200 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:40 crc kubenswrapper[4632]: I0313 10:07:40.044219 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:40 crc kubenswrapper[4632]: E0313 10:07:40.044652 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:40 crc kubenswrapper[4632]: E0313 10:07:40.044932 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:40 crc kubenswrapper[4632]: E0313 10:07:40.045072 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:40 crc kubenswrapper[4632]: I0313 10:07:40.461442 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:07:40 crc kubenswrapper[4632]: I0313 10:07:40.461515 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:07:41 crc kubenswrapper[4632]: I0313 10:07:41.043920 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:41 crc kubenswrapper[4632]: E0313 10:07:41.044151 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:42 crc kubenswrapper[4632]: I0313 10:07:42.043176 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:42 crc kubenswrapper[4632]: I0313 10:07:42.043374 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:42 crc kubenswrapper[4632]: E0313 10:07:42.043560 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 10:07:42 crc kubenswrapper[4632]: I0313 10:07:42.043815 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:42 crc kubenswrapper[4632]: E0313 10:07:42.043890 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 10:07:42 crc kubenswrapper[4632]: E0313 10:07:42.044121 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.043762 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:43 crc kubenswrapper[4632]: E0313 10:07:43.043939 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z2vlz" podUID="ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.283152 4632 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.321046 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-c6jnc"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.321562 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.321927 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9v5nn"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.322313 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.322616 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.323224 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.323547 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-p9gp2"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.324091 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.324182 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.324738 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.328836 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.329255 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.329691 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.329937 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.330127 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.330626 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.340686 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.340766 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.340686 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.340697 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.340937 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.341139 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.341533 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.341693 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.345043 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtrc2"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.345537 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtrc2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.345915 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.346526 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.346548 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.346589 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.346606 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.346694 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.346700 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.346859 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.347008 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.347227 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.352213 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.353829 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-w2hhj"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.354381 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-w2hhj" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.357901 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.358242 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.358460 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.358802 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.358944 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.359336 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.359450 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.359554 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.359654 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.359757 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.360053 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.360171 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.377032 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.377477 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.377514 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.377980 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.378101 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.378153 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.377984 4632 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.378374 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.380049 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.380180 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.380309 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.380383 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.380415 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.380508 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.380784 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.381024 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.381467 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.381887 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8sl88"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.383566 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.384181 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.385306 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.390475 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.409523 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.409757 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.412549 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.412562 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.412693 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.412723 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.414704 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-svhr5"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.415441 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.417268 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sbtn5"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.417676 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zn7mn"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.417788 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.412983 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.413164 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.415615 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.418271 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.418979 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.413205 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.421111 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.413277 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.415677 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.415757 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.415827 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.416043 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.416184 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.422182 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.423108 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.424016 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.424388 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6fqf5"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.424710 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.424761 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6fqf5" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.425213 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.427286 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fxs5z"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.427815 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.428012 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k955n"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.428648 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-k955n" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.429596 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.430092 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.430182 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.430273 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.433350 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-t9vht"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.434142 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.435132 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.435756 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-c6jnc"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.435925 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.436342 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-99hff"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.436509 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-t9vht" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.437027 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.437781 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.438364 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9wxcs"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.438651 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.438895 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9wxcs" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.439707 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.439922 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.439936 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.440096 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.440238 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.440397 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.441284 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.442102 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.442739 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.449208 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.449466 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.449553 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.449723 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.449828 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.449944 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.450063 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.450229 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.450323 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.450411 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.450512 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.454608 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.454723 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.454816 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.454883 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.454970 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.455236 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.474647 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.476547 
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.477330 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.478735 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rvkzz"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.482466 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19fca6e-5095-42b6-8590-32c5b2c73308-serving-cert\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.482573 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/37df1143-69fc-4d13-a5d3-790a9d14814a-etcd-client\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.482637 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/560e6c43-4285-4ca8-98b9-874e9dcb5810-config\") pod \"route-controller-manager-6576b87f9c-xthqz\" (UID: \"560e6c43-4285-4ca8-98b9-874e9dcb5810\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.482718 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/37df1143-69fc-4d13-a5d3-790a9d14814a-node-pullsecrets\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.482754 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-audit-policies\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.482845 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/54b67d35-da46-4e38-9b9a-e91855d6d88d-auth-proxy-config\") pod \"machine-approver-56656f9798-hmljp\" (UID: \"54b67d35-da46-4e38-9b9a-e91855d6d88d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.482916 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/275c3112-6912-49f8-9d3f-8147662fb99f-config\") pod \"machine-api-operator-5694c8668f-c6jnc\" (UID: \"275c3112-6912-49f8-9d3f-8147662fb99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.482999 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8966c5f5-d0a8-4533-842c-0930c1a97bd7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qtrc2\" (UID: \"8966c5f5-d0a8-4533-842c-0930c1a97bd7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtrc2"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.483074 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37df1143-69fc-4d13-a5d3-790a9d14814a-audit-dir\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.483099 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.483386 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvbt\" (UniqueName: \"kubernetes.io/projected/70f440bb-5dd8-4863-9749-bc5f7c547750-kube-api-access-6vvbt\") pod \"controller-manager-879f6c89f-9v5nn\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.483462 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.483493 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/54b67d35-da46-4e38-9b9a-e91855d6d88d-machine-approver-tls\") pod \"machine-approver-56656f9798-hmljp\" (UID: \"54b67d35-da46-4e38-9b9a-e91855d6d88d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.483552 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19fca6e-5095-42b6-8590-32c5b2c73308-encryption-config\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.483643 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc56m\" (UniqueName: \"kubernetes.io/projected/d19fca6e-5095-42b6-8590-32c5b2c73308-kube-api-access-tc56m\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.483794 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-audit-dir\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.483866 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.484273 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/560e6c43-4285-4ca8-98b9-874e9dcb5810-serving-cert\") pod \"route-controller-manager-6576b87f9c-xthqz\" (UID: \"560e6c43-4285-4ca8-98b9-874e9dcb5810\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.484316 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37df1143-69fc-4d13-a5d3-790a9d14814a-serving-cert\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.484350 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/37df1143-69fc-4d13-a5d3-790a9d14814a-image-import-ca\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.484377 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c90710f-4595-425c-8be1-1436f43b5069-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9sqbn\" (UID: \"2c90710f-4595-425c-8be1-1436f43b5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.484483 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r8xf\" (UniqueName: \"kubernetes.io/projected/54b67d35-da46-4e38-9b9a-e91855d6d88d-kube-api-access-4r8xf\") pod \"machine-approver-56656f9798-hmljp\" (UID: \"54b67d35-da46-4e38-9b9a-e91855d6d88d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.484526 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d19fca6e-5095-42b6-8590-32c5b2c73308-audit-policies\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.484597 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b67d35-da46-4e38-9b9a-e91855d6d88d-config\") pod \"machine-approver-56656f9798-hmljp\" (UID: \"54b67d35-da46-4e38-9b9a-e91855d6d88d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.484696 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.484857 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/560e6c43-4285-4ca8-98b9-874e9dcb5810-client-ca\") pod \"route-controller-manager-6576b87f9c-xthqz\" (UID: \"560e6c43-4285-4ca8-98b9-874e9dcb5810\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.484968 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nsdx\" (UniqueName: \"kubernetes.io/projected/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-kube-api-access-9nsdx\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.485019 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19fca6e-5095-42b6-8590-32c5b2c73308-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.485064 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d19fca6e-5095-42b6-8590-32c5b2c73308-audit-dir\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.485124 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx8gt\" (UniqueName: \"kubernetes.io/projected/8966c5f5-d0a8-4533-842c-0930c1a97bd7-kube-api-access-hx8gt\") pod \"cluster-samples-operator-665b6dd947-qtrc2\" (UID: \"8966c5f5-d0a8-4533-842c-0930c1a97bd7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtrc2"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.485336 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt2mw\" (UniqueName: \"kubernetes.io/projected/2c90710f-4595-425c-8be1-1436f43b5069-kube-api-access-rt2mw\") pod \"openshift-apiserver-operator-796bbdcf4f-9sqbn\" (UID: \"2c90710f-4595-425c-8be1-1436f43b5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.485412 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f440bb-5dd8-4863-9749-bc5f7c547750-serving-cert\") pod \"controller-manager-879f6c89f-9v5nn\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.485582 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.485648 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.485741 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.485845 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-config\") pod \"controller-manager-879f6c89f-9v5nn\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.485875 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19fca6e-5095-42b6-8590-32c5b2c73308-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.485951 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/37df1143-69fc-4d13-a5d3-790a9d14814a-etcd-serving-ca\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2"
Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.485980 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/275c3112-6912-49f8-9d3f-8147662fb99f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-c6jnc\" (UID: \"275c3112-6912-49f8-9d3f-8147662fb99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc"
\"275c3112-6912-49f8-9d3f-8147662fb99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.486007 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdwcs\" (UniqueName: \"kubernetes.io/projected/560e6c43-4285-4ca8-98b9-874e9dcb5810-kube-api-access-sdwcs\") pod \"route-controller-manager-6576b87f9c-xthqz\" (UID: \"560e6c43-4285-4ca8-98b9-874e9dcb5810\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.486121 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37df1143-69fc-4d13-a5d3-790a9d14814a-config\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.486159 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-client-ca\") pod \"controller-manager-879f6c89f-9v5nn\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.486180 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19fca6e-5095-42b6-8590-32c5b2c73308-etcd-client\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.486218 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/37df1143-69fc-4d13-a5d3-790a9d14814a-encryption-config\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.486280 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.486318 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.486360 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/37df1143-69fc-4d13-a5d3-790a9d14814a-audit\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 
10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.486383 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37df1143-69fc-4d13-a5d3-790a9d14814a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.486411 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c90710f-4595-425c-8be1-1436f43b5069-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9sqbn\" (UID: \"2c90710f-4595-425c-8be1-1436f43b5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.486509 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9v5nn\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.486567 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdwt2\" (UniqueName: \"kubernetes.io/projected/37df1143-69fc-4d13-a5d3-790a9d14814a-kube-api-access-hdwt2\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.486681 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzjdx\" (UniqueName: \"kubernetes.io/projected/275c3112-6912-49f8-9d3f-8147662fb99f-kube-api-access-zzjdx\") pod \"machine-api-operator-5694c8668f-c6jnc\" (UID: \"275c3112-6912-49f8-9d3f-8147662fb99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.486814 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.486860 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f869\" (UniqueName: \"kubernetes.io/projected/7d155f24-9bfc-4039-9981-10e7f724fa51-kube-api-access-8f869\") pod \"downloads-7954f5f757-w2hhj\" (UID: \"7d155f24-9bfc-4039-9981-10e7f724fa51\") " pod="openshift-console/downloads-7954f5f757-w2hhj" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.489058 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.486930 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/275c3112-6912-49f8-9d3f-8147662fb99f-images\") pod 
\"machine-api-operator-5694c8668f-c6jnc\" (UID: \"275c3112-6912-49f8-9d3f-8147662fb99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.494028 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.496622 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.497870 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.502103 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.503155 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.523645 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.523724 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pvwll"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.524273 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.524304 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pvwll" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.526030 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.526285 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.529840 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2n99d"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.530431 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.530467 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.530820 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.531034 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.531220 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.531306 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.532074 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.532923 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.533484 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4hmjh"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.533882 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.534548 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.535135 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.535351 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zh465"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.535553 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.535860 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zh465" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.539439 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-68mjx"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.540273 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556606-mkrp2"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.540927 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.541495 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.540464 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4hmjh" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.541081 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556606-mkrp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.542012 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.540510 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-68mjx" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.544747 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.548250 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-w2hhj"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.548290 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-p9gp2"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.548306 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtrc2"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.548578 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.552327 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hvrrc"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.554161 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9v5nn"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.554259 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.554726 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.556227 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6fqf5"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.557013 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.557932 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.559346 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qcb4l"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.560101 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qcb4l" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.560525 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-svhr5"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.566382 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.571560 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8sl88"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.573412 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fxs5z"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.575149 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.575929 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.591219 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.596439 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/275c3112-6912-49f8-9d3f-8147662fb99f-config\") pod \"machine-api-operator-5694c8668f-c6jnc\" (UID: \"275c3112-6912-49f8-9d3f-8147662fb99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.596688 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8966c5f5-d0a8-4533-842c-0930c1a97bd7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qtrc2\" (UID: \"8966c5f5-d0a8-4533-842c-0930c1a97bd7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtrc2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.596803 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37df1143-69fc-4d13-a5d3-790a9d14814a-audit-dir\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.596888 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.596991 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vvbt\" (UniqueName: \"kubernetes.io/projected/70f440bb-5dd8-4863-9749-bc5f7c547750-kube-api-access-6vvbt\") pod \"controller-manager-879f6c89f-9v5nn\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" Mar 13 10:07:43 crc kubenswrapper[4632]: 
I0313 10:07:43.597071 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.597208 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37df1143-69fc-4d13-a5d3-790a9d14814a-audit-dir\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.591552 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4hmjh"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.597643 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.597735 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.597799 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9wxcs"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.597871 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pvwll"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.597973 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-68mjx"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.598499 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/275c3112-6912-49f8-9d3f-8147662fb99f-config\") pod \"machine-api-operator-5694c8668f-c6jnc\" (UID: \"275c3112-6912-49f8-9d3f-8147662fb99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.603216 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.603514 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.607853 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.611993 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/8966c5f5-d0a8-4533-842c-0930c1a97bd7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qtrc2\" (UID: \"8966c5f5-d0a8-4533-842c-0930c1a97bd7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtrc2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.613391 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zn7mn"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.617895 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2n99d"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.618088 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/54b67d35-da46-4e38-9b9a-e91855d6d88d-machine-approver-tls\") pod \"machine-approver-56656f9798-hmljp\" (UID: \"54b67d35-da46-4e38-9b9a-e91855d6d88d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.618310 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19fca6e-5095-42b6-8590-32c5b2c73308-encryption-config\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.618355 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc56m\" (UniqueName: \"kubernetes.io/projected/d19fca6e-5095-42b6-8590-32c5b2c73308-kube-api-access-tc56m\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.618389 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-audit-dir\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.618425 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.618465 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5a50074-5531-442f-a0e9-0578f15634c1-console-serving-cert\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.618511 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/560e6c43-4285-4ca8-98b9-874e9dcb5810-serving-cert\") pod \"route-controller-manager-6576b87f9c-xthqz\" (UID: \"560e6c43-4285-4ca8-98b9-874e9dcb5810\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.618548 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37df1143-69fc-4d13-a5d3-790a9d14814a-serving-cert\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.618585 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/37df1143-69fc-4d13-a5d3-790a9d14814a-image-import-ca\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.618648 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c90710f-4595-425c-8be1-1436f43b5069-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9sqbn\" (UID: \"2c90710f-4595-425c-8be1-1436f43b5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.618700 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r8xf\" (UniqueName: \"kubernetes.io/projected/54b67d35-da46-4e38-9b9a-e91855d6d88d-kube-api-access-4r8xf\") pod \"machine-approver-56656f9798-hmljp\" (UID: \"54b67d35-da46-4e38-9b9a-e91855d6d88d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.618727 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d19fca6e-5095-42b6-8590-32c5b2c73308-audit-policies\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.618758 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f5a50074-5531-442f-a0e9-0578f15634c1-console-oauth-config\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.618823 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b67d35-da46-4e38-9b9a-e91855d6d88d-config\") pod \"machine-approver-56656f9798-hmljp\" (UID: \"54b67d35-da46-4e38-9b9a-e91855d6d88d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.618877 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.618967 4632 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/560e6c43-4285-4ca8-98b9-874e9dcb5810-client-ca\") pod \"route-controller-manager-6576b87f9c-xthqz\" (UID: \"560e6c43-4285-4ca8-98b9-874e9dcb5810\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619006 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nsdx\" (UniqueName: \"kubernetes.io/projected/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-kube-api-access-9nsdx\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619034 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19fca6e-5095-42b6-8590-32c5b2c73308-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619058 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d19fca6e-5095-42b6-8590-32c5b2c73308-audit-dir\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619089 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-trusted-ca-bundle\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619121 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx8gt\" (UniqueName: \"kubernetes.io/projected/8966c5f5-d0a8-4533-842c-0930c1a97bd7-kube-api-access-hx8gt\") pod \"cluster-samples-operator-665b6dd947-qtrc2\" (UID: \"8966c5f5-d0a8-4533-842c-0930c1a97bd7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtrc2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619166 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt2mw\" (UniqueName: \"kubernetes.io/projected/2c90710f-4595-425c-8be1-1436f43b5069-kube-api-access-rt2mw\") pod \"openshift-apiserver-operator-796bbdcf4f-9sqbn\" (UID: \"2c90710f-4595-425c-8be1-1436f43b5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619204 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f440bb-5dd8-4863-9749-bc5f7c547750-serving-cert\") pod \"controller-manager-879f6c89f-9v5nn\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619232 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619291 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619333 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619364 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/37df1143-69fc-4d13-a5d3-790a9d14814a-etcd-serving-ca\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619393 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-config\") pod \"controller-manager-879f6c89f-9v5nn\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619420 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19fca6e-5095-42b6-8590-32c5b2c73308-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619445 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/275c3112-6912-49f8-9d3f-8147662fb99f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-c6jnc\" (UID: \"275c3112-6912-49f8-9d3f-8147662fb99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619478 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdwcs\" (UniqueName: \"kubernetes.io/projected/560e6c43-4285-4ca8-98b9-874e9dcb5810-kube-api-access-sdwcs\") pod \"route-controller-manager-6576b87f9c-xthqz\" (UID: \"560e6c43-4285-4ca8-98b9-874e9dcb5810\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619510 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37df1143-69fc-4d13-a5d3-790a9d14814a-config\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 
10:07:43.619538 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-client-ca\") pod \"controller-manager-879f6c89f-9v5nn\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619561 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19fca6e-5095-42b6-8590-32c5b2c73308-etcd-client\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619591 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-console-config\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619622 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/37df1143-69fc-4d13-a5d3-790a9d14814a-audit\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619652 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/37df1143-69fc-4d13-a5d3-790a9d14814a-encryption-config\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619679 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619704 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619732 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-oauth-serving-cert\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619760 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpqvj\" (UniqueName: \"kubernetes.io/projected/f5a50074-5531-442f-a0e9-0578f15634c1-kube-api-access-gpqvj\") pod \"console-f9d7485db-zn7mn\" (UID: 
\"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619803 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37df1143-69fc-4d13-a5d3-790a9d14814a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619830 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c90710f-4595-425c-8be1-1436f43b5069-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9sqbn\" (UID: \"2c90710f-4595-425c-8be1-1436f43b5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619859 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9v5nn\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619896 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdwt2\" (UniqueName: \"kubernetes.io/projected/37df1143-69fc-4d13-a5d3-790a9d14814a-kube-api-access-hdwt2\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619928 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzjdx\" (UniqueName: \"kubernetes.io/projected/275c3112-6912-49f8-9d3f-8147662fb99f-kube-api-access-zzjdx\") pod \"machine-api-operator-5694c8668f-c6jnc\" (UID: \"275c3112-6912-49f8-9d3f-8147662fb99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.620034 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.620064 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f869\" (UniqueName: \"kubernetes.io/projected/7d155f24-9bfc-4039-9981-10e7f724fa51-kube-api-access-8f869\") pod \"downloads-7954f5f757-w2hhj\" (UID: \"7d155f24-9bfc-4039-9981-10e7f724fa51\") " pod="openshift-console/downloads-7954f5f757-w2hhj" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.620095 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/275c3112-6912-49f8-9d3f-8147662fb99f-images\") pod \"machine-api-operator-5694c8668f-c6jnc\" (UID: \"275c3112-6912-49f8-9d3f-8147662fb99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.620123 4632 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.620174 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-service-ca\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.620202 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/560e6c43-4285-4ca8-98b9-874e9dcb5810-config\") pod \"route-controller-manager-6576b87f9c-xthqz\" (UID: \"560e6c43-4285-4ca8-98b9-874e9dcb5810\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.620231 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19fca6e-5095-42b6-8590-32c5b2c73308-serving-cert\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.620254 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-audit-dir\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.620259 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/37df1143-69fc-4d13-a5d3-790a9d14814a-etcd-client\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.620362 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/37df1143-69fc-4d13-a5d3-790a9d14814a-node-pullsecrets\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.620399 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-audit-policies\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.620443 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/54b67d35-da46-4e38-9b9a-e91855d6d88d-auth-proxy-config\") pod \"machine-approver-56656f9798-hmljp\" (UID: \"54b67d35-da46-4e38-9b9a-e91855d6d88d\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.621602 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/54b67d35-da46-4e38-9b9a-e91855d6d88d-auth-proxy-config\") pod \"machine-approver-56656f9798-hmljp\" (UID: \"54b67d35-da46-4e38-9b9a-e91855d6d88d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.622310 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.622316 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19fca6e-5095-42b6-8590-32c5b2c73308-encryption-config\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.623432 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37df1143-69fc-4d13-a5d3-790a9d14814a-config\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.624383 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d19fca6e-5095-42b6-8590-32c5b2c73308-audit-policies\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.625054 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b67d35-da46-4e38-9b9a-e91855d6d88d-config\") pod \"machine-approver-56656f9798-hmljp\" (UID: \"54b67d35-da46-4e38-9b9a-e91855d6d88d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.625873 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/37df1143-69fc-4d13-a5d3-790a9d14814a-etcd-client\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.627223 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-client-ca\") pod \"controller-manager-879f6c89f-9v5nn\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.628181 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/560e6c43-4285-4ca8-98b9-874e9dcb5810-client-ca\") pod 
\"route-controller-manager-6576b87f9c-xthqz\" (UID: \"560e6c43-4285-4ca8-98b9-874e9dcb5810\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.628534 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.629041 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19fca6e-5095-42b6-8590-32c5b2c73308-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.629094 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d19fca6e-5095-42b6-8590-32c5b2c73308-audit-dir\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.632233 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/37df1143-69fc-4d13-a5d3-790a9d14814a-image-import-ca\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.632279 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19fca6e-5095-42b6-8590-32c5b2c73308-etcd-client\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.632345 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.632583 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/37df1143-69fc-4d13-a5d3-790a9d14814a-node-pullsecrets\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.632625 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/560e6c43-4285-4ca8-98b9-874e9dcb5810-serving-cert\") pod \"route-controller-manager-6576b87f9c-xthqz\" (UID: \"560e6c43-4285-4ca8-98b9-874e9dcb5810\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.633060 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/37df1143-69fc-4d13-a5d3-790a9d14814a-audit\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " 
pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.633165 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-audit-policies\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.633822 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/275c3112-6912-49f8-9d3f-8147662fb99f-images\") pod \"machine-api-operator-5694c8668f-c6jnc\" (UID: \"275c3112-6912-49f8-9d3f-8147662fb99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.619975 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c90710f-4595-425c-8be1-1436f43b5069-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9sqbn\" (UID: \"2c90710f-4595-425c-8be1-1436f43b5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.634726 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37df1143-69fc-4d13-a5d3-790a9d14814a-serving-cert\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.635034 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.637749 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sbtn5"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.638277 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/560e6c43-4285-4ca8-98b9-874e9dcb5810-config\") pod \"route-controller-manager-6576b87f9c-xthqz\" (UID: \"560e6c43-4285-4ca8-98b9-874e9dcb5810\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.646670 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.647139 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f440bb-5dd8-4863-9749-bc5f7c547750-serving-cert\") pod \"controller-manager-879f6c89f-9v5nn\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" Mar 13 10:07:43 
crc kubenswrapper[4632]: I0313 10:07:43.647168 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/54b67d35-da46-4e38-9b9a-e91855d6d88d-machine-approver-tls\") pod \"machine-approver-56656f9798-hmljp\" (UID: \"54b67d35-da46-4e38-9b9a-e91855d6d88d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.653042 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.654679 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/37df1143-69fc-4d13-a5d3-790a9d14814a-etcd-serving-ca\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.654784 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.656189 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9v5nn\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.656307 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.656421 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/37df1143-69fc-4d13-a5d3-790a9d14814a-encryption-config\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.656575 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37df1143-69fc-4d13-a5d3-790a9d14814a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.656878 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.658839 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c90710f-4595-425c-8be1-1436f43b5069-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9sqbn\" (UID: 
\"2c90710f-4595-425c-8be1-1436f43b5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.660154 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.660757 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.663394 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-config\") pod \"controller-manager-879f6c89f-9v5nn\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.663476 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556606-mkrp2"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.663840 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.664935 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.665159 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.665256 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19fca6e-5095-42b6-8590-32c5b2c73308-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.666237 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.667148 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.668305 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.669334 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/275c3112-6912-49f8-9d3f-8147662fb99f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-c6jnc\" (UID: \"275c3112-6912-49f8-9d3f-8147662fb99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" Mar 13 
10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.669466 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-99hff"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.669756 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.672155 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rvkzz"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.675078 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zh465"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.676590 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-hlf9t"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.678050 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k955n"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.678200 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hlf9t" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.678627 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qcb4l"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.679579 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19fca6e-5095-42b6-8590-32c5b2c73308-serving-cert\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.680886 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hvrrc"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.683480 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.686596 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-g2wxc"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.687486 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-g2wxc" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.689968 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-g2wxc"] Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.703270 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.720979 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5a50074-5531-442f-a0e9-0578f15634c1-console-serving-cert\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.721033 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f5a50074-5531-442f-a0e9-0578f15634c1-console-oauth-config\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.721071 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-trusted-ca-bundle\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.721116 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-console-config\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.721134 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-oauth-serving-cert\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.721148 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpqvj\" (UniqueName: \"kubernetes.io/projected/f5a50074-5531-442f-a0e9-0578f15634c1-kube-api-access-gpqvj\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.721184 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-service-ca\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.722250 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-console-config\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc 
kubenswrapper[4632]: I0313 10:07:43.722301 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-oauth-serving-cert\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.723472 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-trusted-ca-bundle\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.724234 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.725195 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5a50074-5531-442f-a0e9-0578f15634c1-console-serving-cert\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.725355 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f5a50074-5531-442f-a0e9-0578f15634c1-console-oauth-config\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.725519 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-service-ca\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.743579 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.763582 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.783538 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.803201 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.823589 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.843883 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.864071 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.883734 4632 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.904298 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.923768 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.944414 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.963400 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 13 10:07:43 crc kubenswrapper[4632]: I0313 10:07:43.983706 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.004376 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.023603 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.043176 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.043193 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.043201 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.043911 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.063509 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.083773 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.103898 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.124873 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.144065 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.163795 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.184606 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.204349 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.223722 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.243917 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.264335 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.282863 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.324137 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.344230 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.364454 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.384463 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.404375 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.424155 4632 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.444505 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.463416 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.483481 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.502014 4632 request.go:700] Waited for 1.003734169s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0 Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.517662 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.524204 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.545309 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.564912 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.584704 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.604105 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.644528 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.664298 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.684687 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.711412 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.723569 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.744697 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.763777 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.784353 4632 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.803651 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.823129 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.844877 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.863923 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.884445 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.904350 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.923901 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.944273 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.964087 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 13 10:07:44 crc kubenswrapper[4632]: I0313 10:07:44.984127 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.005395 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.023719 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.044200 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.044807 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.063837 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.084657 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.105262 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.124797 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.145179 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.164799 4632 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.183937 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.203671 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.224265 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.245197 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.264313 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.284852 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.320695 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vvbt\" (UniqueName: \"kubernetes.io/projected/70f440bb-5dd8-4863-9749-bc5f7c547750-kube-api-access-6vvbt\") pod \"controller-manager-879f6c89f-9v5nn\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.339367 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc56m\" (UniqueName: \"kubernetes.io/projected/d19fca6e-5095-42b6-8590-32c5b2c73308-kube-api-access-tc56m\") pod \"apiserver-7bbb656c7d-7vrbc\" (UID: \"d19fca6e-5095-42b6-8590-32c5b2c73308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.372785 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r8xf\" (UniqueName: \"kubernetes.io/projected/54b67d35-da46-4e38-9b9a-e91855d6d88d-kube-api-access-4r8xf\") pod 
\"machine-approver-56656f9798-hmljp\" (UID: \"54b67d35-da46-4e38-9b9a-e91855d6d88d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.379524 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt2mw\" (UniqueName: \"kubernetes.io/projected/2c90710f-4595-425c-8be1-1436f43b5069-kube-api-access-rt2mw\") pod \"openshift-apiserver-operator-796bbdcf4f-9sqbn\" (UID: \"2c90710f-4595-425c-8be1-1436f43b5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.398085 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nsdx\" (UniqueName: \"kubernetes.io/projected/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-kube-api-access-9nsdx\") pod \"oauth-openshift-558db77b4-8sl88\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.417904 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx8gt\" (UniqueName: \"kubernetes.io/projected/8966c5f5-d0a8-4533-842c-0930c1a97bd7-kube-api-access-hx8gt\") pod \"cluster-samples-operator-665b6dd947-qtrc2\" (UID: \"8966c5f5-d0a8-4533-842c-0930c1a97bd7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtrc2" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.438108 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdwt2\" (UniqueName: \"kubernetes.io/projected/37df1143-69fc-4d13-a5d3-790a9d14814a-kube-api-access-hdwt2\") pod \"apiserver-76f77b778f-p9gp2\" (UID: \"37df1143-69fc-4d13-a5d3-790a9d14814a\") " pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.456370 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.458658 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzjdx\" (UniqueName: \"kubernetes.io/projected/275c3112-6912-49f8-9d3f-8147662fb99f-kube-api-access-zzjdx\") pod \"machine-api-operator-5694c8668f-c6jnc\" (UID: \"275c3112-6912-49f8-9d3f-8147662fb99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.478233 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f869\" (UniqueName: \"kubernetes.io/projected/7d155f24-9bfc-4039-9981-10e7f724fa51-kube-api-access-8f869\") pod \"downloads-7954f5f757-w2hhj\" (UID: \"7d155f24-9bfc-4039-9981-10e7f724fa51\") " pod="openshift-console/downloads-7954f5f757-w2hhj" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.480882 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.501745 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdwcs\" (UniqueName: \"kubernetes.io/projected/560e6c43-4285-4ca8-98b9-874e9dcb5810-kube-api-access-sdwcs\") pod \"route-controller-manager-6576b87f9c-xthqz\" (UID: \"560e6c43-4285-4ca8-98b9-874e9dcb5810\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.503812 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.515419 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.521635 4632 request.go:700] Waited for 1.843009811s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0 Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.523850 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.525972 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.543834 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.547298 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.563652 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.573105 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.584294 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.597835 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtrc2" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.603991 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.631667 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-w2hhj" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.654915 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.655929 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.676528 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.676751 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpqvj\" (UniqueName: \"kubernetes.io/projected/f5a50074-5531-442f-a0e9-0578f15634c1-kube-api-access-gpqvj\") pod \"console-f9d7485db-zn7mn\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.684494 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.687579 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp" event={"ID":"54b67d35-da46-4e38-9b9a-e91855d6d88d","Type":"ContainerStarted","Data":"80698629bef138bcda96418314faf78a7b20ad4ae2335c25b157e45f3f92fa55"} Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.708818 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.747253 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.756906 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22993daf-2b32-4be5-8eb7-f9194e903d62-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-946gp\" (UID: \"22993daf-2b32-4be5-8eb7-f9194e903d62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757013 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvs6x\" (UniqueName: \"kubernetes.io/projected/bbb27a61-7407-4cd7-84df-4b66fbdcf82d-kube-api-access-pvs6x\") pod \"machine-config-operator-74547568cd-99hff\" (UID: \"bbb27a61-7407-4cd7-84df-4b66fbdcf82d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757042 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/32f62e32-732b-4646-85f0-45b8ea6544a6-profile-collector-cert\") pod \"catalog-operator-68c6474976-r5v5p\" (UID: \"32f62e32-732b-4646-85f0-45b8ea6544a6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757063 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7b8ca1c-c3de-4829-ab9f-860f76033c63-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hd8rx\" (UID: \"b7b8ca1c-c3de-4829-ab9f-860f76033c63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757099 4632 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cclg\" (UniqueName: \"kubernetes.io/projected/ef269b18-ea84-43c2-971c-e772149acbf6-kube-api-access-2cclg\") pod \"console-operator-58897d9998-sbtn5\" (UID: \"ef269b18-ea84-43c2-971c-e772149acbf6\") " pod="openshift-console-operator/console-operator-58897d9998-sbtn5" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757120 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hb4c6\" (UID: \"e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757142 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f781cb50-1e1b-4586-ba59-b204b1a6beec-config\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757162 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f781cb50-1e1b-4586-ba59-b204b1a6beec-etcd-service-ca\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757183 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f781cb50-1e1b-4586-ba59-b204b1a6beec-etcd-ca\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757224 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbb27a61-7407-4cd7-84df-4b66fbdcf82d-proxy-tls\") pod \"machine-config-operator-74547568cd-99hff\" (UID: \"bbb27a61-7407-4cd7-84df-4b66fbdcf82d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757244 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mb4q\" (UniqueName: \"kubernetes.io/projected/c94773d8-a922-4778-b2ba-8937e9d6c19b-kube-api-access-7mb4q\") pod \"dns-operator-744455d44c-6fqf5\" (UID: \"c94773d8-a922-4778-b2ba-8937e9d6c19b\") " pod="openshift-dns-operator/dns-operator-744455d44c-6fqf5" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757268 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9rrcn\" (UID: \"a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757291 4632 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hb4c6\" (UID: \"e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757314 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b959a85-56a5-4296-9cf3-87741e1f9c39-stats-auth\") pod \"router-default-5444994796-t9vht\" (UID: \"7b959a85-56a5-4296-9cf3-87741e1f9c39\") " pod="openshift-ingress/router-default-5444994796-t9vht" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757335 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73-trusted-ca\") pod \"ingress-operator-5b745b69d9-jrkwc\" (UID: \"fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757376 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b959a85-56a5-4296-9cf3-87741e1f9c39-metrics-certs\") pod \"router-default-5444994796-t9vht\" (UID: \"7b959a85-56a5-4296-9cf3-87741e1f9c39\") " pod="openshift-ingress/router-default-5444994796-t9vht" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757396 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62kgq\" (UniqueName: \"kubernetes.io/projected/7b959a85-56a5-4296-9cf3-87741e1f9c39-kube-api-access-62kgq\") pod \"router-default-5444994796-t9vht\" (UID: \"7b959a85-56a5-4296-9cf3-87741e1f9c39\") " pod="openshift-ingress/router-default-5444994796-t9vht" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757429 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef269b18-ea84-43c2-971c-e772149acbf6-trusted-ca\") pod \"console-operator-58897d9998-sbtn5\" (UID: \"ef269b18-ea84-43c2-971c-e772149acbf6\") " pod="openshift-console-operator/console-operator-58897d9998-sbtn5" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757460 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f56fc09a-e2b7-46db-b938-f276df3f033e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757635 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22993daf-2b32-4be5-8eb7-f9194e903d62-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-946gp\" (UID: \"22993daf-2b32-4be5-8eb7-f9194e903d62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757673 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f781cb50-1e1b-4586-ba59-b204b1a6beec-serving-cert\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757700 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c94773d8-a922-4778-b2ba-8937e9d6c19b-metrics-tls\") pod \"dns-operator-744455d44c-6fqf5\" (UID: \"c94773d8-a922-4778-b2ba-8937e9d6c19b\") " pod="openshift-dns-operator/dns-operator-744455d44c-6fqf5" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757727 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52zmx\" (UniqueName: \"kubernetes.io/projected/fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73-kube-api-access-52zmx\") pod \"ingress-operator-5b745b69d9-jrkwc\" (UID: \"fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757880 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbb27a61-7407-4cd7-84df-4b66fbdcf82d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-99hff\" (UID: \"bbb27a61-7407-4cd7-84df-4b66fbdcf82d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.757926 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jrkwc\" (UID: \"fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.759510 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f781cb50-1e1b-4586-ba59-b204b1a6beec-etcd-client\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.759544 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd4c3b3-6825-4bd2-97a5-330f91782d4b-config\") pod \"kube-controller-manager-operator-78b949d7b-rvkzz\" (UID: \"9cd4c3b3-6825-4bd2-97a5-330f91782d4b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rvkzz" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.759580 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bbb27a61-7407-4cd7-84df-4b66fbdcf82d-images\") pod \"machine-config-operator-74547568cd-99hff\" (UID: \"bbb27a61-7407-4cd7-84df-4b66fbdcf82d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.759606 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7x7q\" (UniqueName: 
\"kubernetes.io/projected/353e9ca9-cb3b-4c6e-b1ca-446611a12dca-kube-api-access-w7x7q\") pod \"authentication-operator-69f744f599-svhr5\" (UID: \"353e9ca9-cb3b-4c6e-b1ca-446611a12dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.759627 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zq92\" (UniqueName: \"kubernetes.io/projected/a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59-kube-api-access-8zq92\") pod \"cluster-image-registry-operator-dc59b4c8b-9rrcn\" (UID: \"a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.759652 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f56fc09a-e2b7-46db-b938-f276df3f033e-registry-certificates\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.759683 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73-metrics-tls\") pod \"ingress-operator-5b745b69d9-jrkwc\" (UID: \"fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.759708 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9rrcn\" (UID: \"a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.759751 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cd4c3b3-6825-4bd2-97a5-330f91782d4b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rvkzz\" (UID: \"9cd4c3b3-6825-4bd2-97a5-330f91782d4b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rvkzz" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.759781 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f56fc09a-e2b7-46db-b938-f276df3f033e-trusted-ca\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.759852 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fc6g\" (UniqueName: \"kubernetes.io/projected/f660255f-8f78-4876-973d-db58f2ee7020-kube-api-access-9fc6g\") pod \"openshift-config-operator-7777fb866f-sk2l6\" (UID: \"f660255f-8f78-4876-973d-db58f2ee7020\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.759883 4632 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f660255f-8f78-4876-973d-db58f2ee7020-serving-cert\") pod \"openshift-config-operator-7777fb866f-sk2l6\" (UID: \"f660255f-8f78-4876-973d-db58f2ee7020\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.759932 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/353e9ca9-cb3b-4c6e-b1ca-446611a12dca-serving-cert\") pod \"authentication-operator-69f744f599-svhr5\" (UID: \"353e9ca9-cb3b-4c6e-b1ca-446611a12dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.759997 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k7tt\" (UniqueName: \"kubernetes.io/projected/32f62e32-732b-4646-85f0-45b8ea6544a6-kube-api-access-4k7tt\") pod \"catalog-operator-68c6474976-r5v5p\" (UID: \"32f62e32-732b-4646-85f0-45b8ea6544a6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.760016 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8be807d4-9bc2-41a1-b69f-1b0af031b5ab-proxy-tls\") pod \"machine-config-controller-84d6567774-rntsr\" (UID: \"8be807d4-9bc2-41a1-b69f-1b0af031b5ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.760063 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cd4c3b3-6825-4bd2-97a5-330f91782d4b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rvkzz\" (UID: \"9cd4c3b3-6825-4bd2-97a5-330f91782d4b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rvkzz" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.760122 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/353e9ca9-cb3b-4c6e-b1ca-446611a12dca-service-ca-bundle\") pod \"authentication-operator-69f744f599-svhr5\" (UID: \"353e9ca9-cb3b-4c6e-b1ca-446611a12dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.761981 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9rrcn\" (UID: \"a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.762067 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef269b18-ea84-43c2-971c-e772149acbf6-serving-cert\") pod \"console-operator-58897d9998-sbtn5\" (UID: \"ef269b18-ea84-43c2-971c-e772149acbf6\") " pod="openshift-console-operator/console-operator-58897d9998-sbtn5" Mar 13 
10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.762126 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b959a85-56a5-4296-9cf3-87741e1f9c39-service-ca-bundle\") pod \"router-default-5444994796-t9vht\" (UID: \"7b959a85-56a5-4296-9cf3-87741e1f9c39\") " pod="openshift-ingress/router-default-5444994796-t9vht" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.762165 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef269b18-ea84-43c2-971c-e772149acbf6-config\") pod \"console-operator-58897d9998-sbtn5\" (UID: \"ef269b18-ea84-43c2-971c-e772149acbf6\") " pod="openshift-console-operator/console-operator-58897d9998-sbtn5" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.762197 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-registry-tls\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.762217 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2493565c-3af9-4edf-a2f3-8a7a501e9305-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ndt\" (UID: \"2493565c-3af9-4edf-a2f3-8a7a501e9305\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.762247 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxh5f\" (UniqueName: \"kubernetes.io/projected/f781cb50-1e1b-4586-ba59-b204b1a6beec-kube-api-access-hxh5f\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.762337 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ghtc\" (UniqueName: \"kubernetes.io/projected/b7b8ca1c-c3de-4829-ab9f-860f76033c63-kube-api-access-2ghtc\") pod \"openshift-controller-manager-operator-756b6f6bc6-hd8rx\" (UID: \"b7b8ca1c-c3de-4829-ab9f-860f76033c63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.763636 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.763726 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6kft\" (UniqueName: \"kubernetes.io/projected/8be807d4-9bc2-41a1-b69f-1b0af031b5ab-kube-api-access-l6kft\") pod \"machine-config-controller-84d6567774-rntsr\" (UID: \"8be807d4-9bc2-41a1-b69f-1b0af031b5ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.763780 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/f56fc09a-e2b7-46db-b938-f276df3f033e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.763804 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2493565c-3af9-4edf-a2f3-8a7a501e9305-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ndt\" (UID: \"2493565c-3af9-4edf-a2f3-8a7a501e9305\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.763824 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rq5z\" (UniqueName: \"kubernetes.io/projected/96067558-b20b-411c-b1af-b8fbb61df8f7-kube-api-access-9rq5z\") pod \"migrator-59844c95c7-9wxcs\" (UID: \"96067558-b20b-411c-b1af-b8fbb61df8f7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9wxcs" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.763881 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mprmj\" (UniqueName: \"kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-kube-api-access-mprmj\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.763906 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8be807d4-9bc2-41a1-b69f-1b0af031b5ab-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rntsr\" (UID: \"8be807d4-9bc2-41a1-b69f-1b0af031b5ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.763995 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b959a85-56a5-4296-9cf3-87741e1f9c39-default-certificate\") pod \"router-default-5444994796-t9vht\" (UID: \"7b959a85-56a5-4296-9cf3-87741e1f9c39\") " pod="openshift-ingress/router-default-5444994796-t9vht" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.764017 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f660255f-8f78-4876-973d-db58f2ee7020-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sk2l6\" (UID: \"f660255f-8f78-4876-973d-db58f2ee7020\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.764051 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.764101 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-bound-sa-token\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.764121 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/32f62e32-732b-4646-85f0-45b8ea6544a6-srv-cert\") pod \"catalog-operator-68c6474976-r5v5p\" (UID: \"32f62e32-732b-4646-85f0-45b8ea6544a6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.764141 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txsm4\" (UniqueName: \"kubernetes.io/projected/e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e-kube-api-access-txsm4\") pod \"kube-storage-version-migrator-operator-b67b599dd-hb4c6\" (UID: \"e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.764211 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/353e9ca9-cb3b-4c6e-b1ca-446611a12dca-config\") pod \"authentication-operator-69f744f599-svhr5\" (UID: \"353e9ca9-cb3b-4c6e-b1ca-446611a12dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.764238 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/353e9ca9-cb3b-4c6e-b1ca-446611a12dca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-svhr5\" (UID: \"353e9ca9-cb3b-4c6e-b1ca-446611a12dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.764260 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2493565c-3af9-4edf-a2f3-8a7a501e9305-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ndt\" (UID: \"2493565c-3af9-4edf-a2f3-8a7a501e9305\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.764276 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22993daf-2b32-4be5-8eb7-f9194e903d62-config\") pod \"kube-apiserver-operator-766d6c64bb-946gp\" (UID: \"22993daf-2b32-4be5-8eb7-f9194e903d62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.764294 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b8ca1c-c3de-4829-ab9f-860f76033c63-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hd8rx\" (UID: \"b7b8ca1c-c3de-4829-ab9f-860f76033c63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx" Mar 13 10:07:45 crc 
kubenswrapper[4632]: E0313 10:07:45.765800 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:46.265771738 +0000 UTC m=+240.288301871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.790208 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.824316 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc"] Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865048 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865255 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f660255f-8f78-4876-973d-db58f2ee7020-serving-cert\") pod \"openshift-config-operator-7777fb866f-sk2l6\" (UID: \"f660255f-8f78-4876-973d-db58f2ee7020\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865286 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/946f5fcb-dde4-4784-965d-75a47187e703-signing-key\") pod \"service-ca-9c57cc56f-68mjx\" (UID: \"946f5fcb-dde4-4784-965d-75a47187e703\") " pod="openshift-service-ca/service-ca-9c57cc56f-68mjx" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865308 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/946f5fcb-dde4-4784-965d-75a47187e703-signing-cabundle\") pod \"service-ca-9c57cc56f-68mjx\" (UID: \"946f5fcb-dde4-4784-965d-75a47187e703\") " pod="openshift-service-ca/service-ca-9c57cc56f-68mjx" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865332 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l2w2\" (UniqueName: \"kubernetes.io/projected/e0e1f142-2930-4f9b-b851-f7f7df22676b-kube-api-access-8l2w2\") pod \"multus-admission-controller-857f4d67dd-4hmjh\" (UID: \"e0e1f142-2930-4f9b-b851-f7f7df22676b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4hmjh" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865355 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pst8h\" (UniqueName: \"kubernetes.io/projected/ebf1040d-57dd-47ef-b839-6f78a7c5c75f-kube-api-access-pst8h\") pod \"olm-operator-6b444d44fb-tqbl9\" (UID: \"ebf1040d-57dd-47ef-b839-6f78a7c5c75f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9"
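The E0313 record above is the one hard failure in this stretch of startup: MountVolume.MountDevice for the image registry's PVC fails because the kubevirt.io.hostpath-provisioner driver has not yet re-registered with the kubelet after the restart (the csi-hostpathplugin-hvrrc pod that performs that registration is itself still being assembled in the records that follow), so nestedpendingoperations schedules a retry after the 500ms durationBeforeRetry shown. Which drivers have registered on a node is visible in its CSINode object; a minimal sketch of reading it, assuming the node name "crc" and an illustrative kubeconfig path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The CSINode object is named after the node; "crc" is assumed here.
	csiNode, err := clientset.StorageV1().CSINodes().Get(context.Background(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The error above means kubevirt.io.hostpath-provisioner is missing
	// from exactly this list until its node-driver-registrar comes up.
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered CSI driver:", d.Name)
	}
}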
\"olm-operator-6b444d44fb-tqbl9\" (UID: \"ebf1040d-57dd-47ef-b839-6f78a7c5c75f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865375 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f0f88609-cbfe-4ccc-b5db-e5c1be771855-certs\") pod \"machine-config-server-hlf9t\" (UID: \"f0f88609-cbfe-4ccc-b5db-e5c1be771855\") " pod="openshift-machine-config-operator/machine-config-server-hlf9t" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865400 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/353e9ca9-cb3b-4c6e-b1ca-446611a12dca-serving-cert\") pod \"authentication-operator-69f744f599-svhr5\" (UID: \"353e9ca9-cb3b-4c6e-b1ca-446611a12dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865435 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k7tt\" (UniqueName: \"kubernetes.io/projected/32f62e32-732b-4646-85f0-45b8ea6544a6-kube-api-access-4k7tt\") pod \"catalog-operator-68c6474976-r5v5p\" (UID: \"32f62e32-732b-4646-85f0-45b8ea6544a6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865456 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8be807d4-9bc2-41a1-b69f-1b0af031b5ab-proxy-tls\") pod \"machine-config-controller-84d6567774-rntsr\" (UID: \"8be807d4-9bc2-41a1-b69f-1b0af031b5ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865478 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cd4c3b3-6825-4bd2-97a5-330f91782d4b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rvkzz\" (UID: \"9cd4c3b3-6825-4bd2-97a5-330f91782d4b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rvkzz" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865500 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/528d3aa9-10bf-4029-a4d2-85768264fde8-secret-volume\") pod \"collect-profiles-29556600-r9flg\" (UID: \"528d3aa9-10bf-4029-a4d2-85768264fde8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865523 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/353e9ca9-cb3b-4c6e-b1ca-446611a12dca-service-ca-bundle\") pod \"authentication-operator-69f744f599-svhr5\" (UID: \"353e9ca9-cb3b-4c6e-b1ca-446611a12dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865544 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9rrcn\" (UID: 
\"a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865565 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4db028f0-524e-46fc-aa33-da38ed7b8fa6-cert\") pod \"ingress-canary-qcb4l\" (UID: \"4db028f0-524e-46fc-aa33-da38ed7b8fa6\") " pod="openshift-ingress-canary/ingress-canary-qcb4l" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865585 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef269b18-ea84-43c2-971c-e772149acbf6-serving-cert\") pod \"console-operator-58897d9998-sbtn5\" (UID: \"ef269b18-ea84-43c2-971c-e772149acbf6\") " pod="openshift-console-operator/console-operator-58897d9998-sbtn5" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865628 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-registration-dir\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865651 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b959a85-56a5-4296-9cf3-87741e1f9c39-service-ca-bundle\") pod \"router-default-5444994796-t9vht\" (UID: \"7b959a85-56a5-4296-9cf3-87741e1f9c39\") " pod="openshift-ingress/router-default-5444994796-t9vht" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865673 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hcvp\" (UniqueName: \"kubernetes.io/projected/49c520f1-fb05-48ca-8435-1985ce668451-kube-api-access-2hcvp\") pod \"packageserver-d55dfcdfc-zgxcd\" (UID: \"49c520f1-fb05-48ca-8435-1985ce668451\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865693 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef269b18-ea84-43c2-971c-e772149acbf6-config\") pod \"console-operator-58897d9998-sbtn5\" (UID: \"ef269b18-ea84-43c2-971c-e772149acbf6\") " pod="openshift-console-operator/console-operator-58897d9998-sbtn5" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865716 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2332524f-f990-4ef2-90b3-8b90c389d873-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pvwll\" (UID: \"2332524f-f990-4ef2-90b3-8b90c389d873\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pvwll" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865756 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-registry-tls\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 
10:07:45.865775 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2493565c-3af9-4edf-a2f3-8a7a501e9305-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ndt\" (UID: \"2493565c-3af9-4edf-a2f3-8a7a501e9305\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865796 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxh5f\" (UniqueName: \"kubernetes.io/projected/f781cb50-1e1b-4586-ba59-b204b1a6beec-kube-api-access-hxh5f\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865820 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ghtc\" (UniqueName: \"kubernetes.io/projected/b7b8ca1c-c3de-4829-ab9f-860f76033c63-kube-api-access-2ghtc\") pod \"openshift-controller-manager-operator-756b6f6bc6-hd8rx\" (UID: \"b7b8ca1c-c3de-4829-ab9f-860f76033c63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865845 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm8lp\" (UniqueName: \"kubernetes.io/projected/528d3aa9-10bf-4029-a4d2-85768264fde8-kube-api-access-vm8lp\") pod \"collect-profiles-29556600-r9flg\" (UID: \"528d3aa9-10bf-4029-a4d2-85768264fde8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865868 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6kft\" (UniqueName: \"kubernetes.io/projected/8be807d4-9bc2-41a1-b69f-1b0af031b5ab-kube-api-access-l6kft\") pod \"machine-config-controller-84d6567774-rntsr\" (UID: \"8be807d4-9bc2-41a1-b69f-1b0af031b5ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865891 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f56fc09a-e2b7-46db-b938-f276df3f033e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865912 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2493565c-3af9-4edf-a2f3-8a7a501e9305-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ndt\" (UID: \"2493565c-3af9-4edf-a2f3-8a7a501e9305\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865932 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rq5z\" (UniqueName: \"kubernetes.io/projected/96067558-b20b-411c-b1af-b8fbb61df8f7-kube-api-access-9rq5z\") pod \"migrator-59844c95c7-9wxcs\" (UID: \"96067558-b20b-411c-b1af-b8fbb61df8f7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9wxcs"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.865975 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5h9s\" (UniqueName: \"kubernetes.io/projected/4db028f0-524e-46fc-aa33-da38ed7b8fa6-kube-api-access-p5h9s\") pod \"ingress-canary-qcb4l\" (UID: \"4db028f0-524e-46fc-aa33-da38ed7b8fa6\") " pod="openshift-ingress-canary/ingress-canary-qcb4l"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866007 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mprmj\" (UniqueName: \"kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-kube-api-access-mprmj\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866031 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8be807d4-9bc2-41a1-b69f-1b0af031b5ab-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rntsr\" (UID: \"8be807d4-9bc2-41a1-b69f-1b0af031b5ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866053 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b959a85-56a5-4296-9cf3-87741e1f9c39-default-certificate\") pod \"router-default-5444994796-t9vht\" (UID: \"7b959a85-56a5-4296-9cf3-87741e1f9c39\") " pod="openshift-ingress/router-default-5444994796-t9vht"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866077 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ebf1040d-57dd-47ef-b839-6f78a7c5c75f-srv-cert\") pod \"olm-operator-6b444d44fb-tqbl9\" (UID: \"ebf1040d-57dd-47ef-b839-6f78a7c5c75f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866100 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f660255f-8f78-4876-973d-db58f2ee7020-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sk2l6\" (UID: \"f660255f-8f78-4876-973d-db58f2ee7020\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866121 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/49c520f1-fb05-48ca-8435-1985ce668451-tmpfs\") pod \"packageserver-d55dfcdfc-zgxcd\" (UID: \"49c520f1-fb05-48ca-8435-1985ce668451\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866154 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-bound-sa-token\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866177 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f0f88609-cbfe-4ccc-b5db-e5c1be771855-node-bootstrap-token\") pod \"machine-config-server-hlf9t\" (UID: \"f0f88609-cbfe-4ccc-b5db-e5c1be771855\") " pod="openshift-machine-config-operator/machine-config-server-hlf9t"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866214 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/32f62e32-732b-4646-85f0-45b8ea6544a6-srv-cert\") pod \"catalog-operator-68c6474976-r5v5p\" (UID: \"32f62e32-732b-4646-85f0-45b8ea6544a6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866235 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779b2915-e0d0-4e90-9c6d-af28f555fd7b-config\") pod \"service-ca-operator-777779d784-zh465\" (UID: \"779b2915-e0d0-4e90-9c6d-af28f555fd7b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zh465"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866257 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txsm4\" (UniqueName: \"kubernetes.io/projected/e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e-kube-api-access-txsm4\") pod \"kube-storage-version-migrator-operator-b67b599dd-hb4c6\" (UID: \"e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866281 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/353e9ca9-cb3b-4c6e-b1ca-446611a12dca-config\") pod \"authentication-operator-69f744f599-svhr5\" (UID: \"353e9ca9-cb3b-4c6e-b1ca-446611a12dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866301 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/353e9ca9-cb3b-4c6e-b1ca-446611a12dca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-svhr5\" (UID: \"353e9ca9-cb3b-4c6e-b1ca-446611a12dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866326 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2vc6\" (UniqueName: \"kubernetes.io/projected/779b2915-e0d0-4e90-9c6d-af28f555fd7b-kube-api-access-q2vc6\") pod \"service-ca-operator-777779d784-zh465\" (UID: \"779b2915-e0d0-4e90-9c6d-af28f555fd7b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zh465"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866351 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2493565c-3af9-4edf-a2f3-8a7a501e9305-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ndt\" (UID: \"2493565c-3af9-4edf-a2f3-8a7a501e9305\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866373 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22993daf-2b32-4be5-8eb7-f9194e903d62-config\") pod \"kube-apiserver-operator-766d6c64bb-946gp\" (UID: \"22993daf-2b32-4be5-8eb7-f9194e903d62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866395 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b8ca1c-c3de-4829-ab9f-860f76033c63-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hd8rx\" (UID: \"b7b8ca1c-c3de-4829-ab9f-860f76033c63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866417 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0e1f142-2930-4f9b-b851-f7f7df22676b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4hmjh\" (UID: \"e0e1f142-2930-4f9b-b851-f7f7df22676b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4hmjh"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866439 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22993daf-2b32-4be5-8eb7-f9194e903d62-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-946gp\" (UID: \"22993daf-2b32-4be5-8eb7-f9194e903d62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866465 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvs6x\" (UniqueName: \"kubernetes.io/projected/bbb27a61-7407-4cd7-84df-4b66fbdcf82d-kube-api-access-pvs6x\") pod \"machine-config-operator-74547568cd-99hff\" (UID: \"bbb27a61-7407-4cd7-84df-4b66fbdcf82d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866486 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/32f62e32-732b-4646-85f0-45b8ea6544a6-profile-collector-cert\") pod \"catalog-operator-68c6474976-r5v5p\" (UID: \"32f62e32-732b-4646-85f0-45b8ea6544a6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866507 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7b8ca1c-c3de-4829-ab9f-860f76033c63-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hd8rx\" (UID: \"b7b8ca1c-c3de-4829-ab9f-860f76033c63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866530 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/797176c6-dd56-48d6-8004-ff1dd5353a50-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2n99d\" (UID: \"797176c6-dd56-48d6-8004-ff1dd5353a50\") " pod="openshift-marketplace/marketplace-operator-79b997595-2n99d"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866555 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cclg\" (UniqueName: \"kubernetes.io/projected/ef269b18-ea84-43c2-971c-e772149acbf6-kube-api-access-2cclg\") pod \"console-operator-58897d9998-sbtn5\" (UID: \"ef269b18-ea84-43c2-971c-e772149acbf6\") " pod="openshift-console-operator/console-operator-58897d9998-sbtn5"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866577 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hb4c6\" (UID: \"e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866598 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f781cb50-1e1b-4586-ba59-b204b1a6beec-config\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866617 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f781cb50-1e1b-4586-ba59-b204b1a6beec-etcd-service-ca\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866640 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx2pr\" (UniqueName: \"kubernetes.io/projected/f0f88609-cbfe-4ccc-b5db-e5c1be771855-kube-api-access-xx2pr\") pod \"machine-config-server-hlf9t\" (UID: \"f0f88609-cbfe-4ccc-b5db-e5c1be771855\") " pod="openshift-machine-config-operator/machine-config-server-hlf9t"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866662 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qtrb\" (UniqueName: \"kubernetes.io/projected/797176c6-dd56-48d6-8004-ff1dd5353a50-kube-api-access-8qtrb\") pod \"marketplace-operator-79b997595-2n99d\" (UID: \"797176c6-dd56-48d6-8004-ff1dd5353a50\") " pod="openshift-marketplace/marketplace-operator-79b997595-2n99d"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866686 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f781cb50-1e1b-4586-ba59-b204b1a6beec-etcd-ca\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866720 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbb27a61-7407-4cd7-84df-4b66fbdcf82d-proxy-tls\") pod \"machine-config-operator-74547568cd-99hff\" (UID: \"bbb27a61-7407-4cd7-84df-4b66fbdcf82d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866741 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mb4q\" (UniqueName: \"kubernetes.io/projected/c94773d8-a922-4778-b2ba-8937e9d6c19b-kube-api-access-7mb4q\") pod \"dns-operator-744455d44c-6fqf5\" (UID: \"c94773d8-a922-4778-b2ba-8937e9d6c19b\") " pod="openshift-dns-operator/dns-operator-744455d44c-6fqf5"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866764 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9rrcn\" (UID: \"a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866786 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hb4c6\" (UID: \"e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866808 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b959a85-56a5-4296-9cf3-87741e1f9c39-stats-auth\") pod \"router-default-5444994796-t9vht\" (UID: \"7b959a85-56a5-4296-9cf3-87741e1f9c39\") " pod="openshift-ingress/router-default-5444994796-t9vht"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866829 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73-trusted-ca\") pod \"ingress-operator-5b745b69d9-jrkwc\" (UID: \"fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866847 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b959a85-56a5-4296-9cf3-87741e1f9c39-metrics-certs\") pod \"router-default-5444994796-t9vht\" (UID: \"7b959a85-56a5-4296-9cf3-87741e1f9c39\") " pod="openshift-ingress/router-default-5444994796-t9vht"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866861 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62kgq\" (UniqueName: \"kubernetes.io/projected/7b959a85-56a5-4296-9cf3-87741e1f9c39-kube-api-access-62kgq\") pod \"router-default-5444994796-t9vht\" (UID: \"7b959a85-56a5-4296-9cf3-87741e1f9c39\") " pod="openshift-ingress/router-default-5444994796-t9vht"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866882 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ebf1040d-57dd-47ef-b839-6f78a7c5c75f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tqbl9\" (UID: \"ebf1040d-57dd-47ef-b839-6f78a7c5c75f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866902 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49c520f1-fb05-48ca-8435-1985ce668451-apiservice-cert\") pod \"packageserver-d55dfcdfc-zgxcd\" (UID: \"49c520f1-fb05-48ca-8435-1985ce668451\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866921 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78rbb\" (UniqueName: \"kubernetes.io/projected/58d59f3d-e656-4217-9472-62508a7ccc93-kube-api-access-78rbb\") pod \"dns-default-g2wxc\" (UID: \"58d59f3d-e656-4217-9472-62508a7ccc93\") " pod="openshift-dns/dns-default-g2wxc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866970 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef269b18-ea84-43c2-971c-e772149acbf6-trusted-ca\") pod \"console-operator-58897d9998-sbtn5\" (UID: \"ef269b18-ea84-43c2-971c-e772149acbf6\") " pod="openshift-console-operator/console-operator-58897d9998-sbtn5"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.866996 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f56fc09a-e2b7-46db-b938-f276df3f033e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867012 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd756\" (UniqueName: \"kubernetes.io/projected/c822257d-9d2f-4b6f-87de-131de5cd0efe-kube-api-access-sd756\") pod \"auto-csr-approver-29556606-mkrp2\" (UID: \"c822257d-9d2f-4b6f-87de-131de5cd0efe\") " pod="openshift-infra/auto-csr-approver-29556606-mkrp2"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867027 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-plugins-dir\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867044 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22993daf-2b32-4be5-8eb7-f9194e903d62-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-946gp\" (UID: \"22993daf-2b32-4be5-8eb7-f9194e903d62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867059 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/58d59f3d-e656-4217-9472-62508a7ccc93-metrics-tls\") pod \"dns-default-g2wxc\" (UID: \"58d59f3d-e656-4217-9472-62508a7ccc93\") " pod="openshift-dns/dns-default-g2wxc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867077 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f781cb50-1e1b-4586-ba59-b204b1a6beec-serving-cert\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867093 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c94773d8-a922-4778-b2ba-8937e9d6c19b-metrics-tls\") pod \"dns-operator-744455d44c-6fqf5\" (UID: \"c94773d8-a922-4778-b2ba-8937e9d6c19b\") " pod="openshift-dns-operator/dns-operator-744455d44c-6fqf5"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867115 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e100e6e-7259-4262-be47-9c2b5be7a53a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xfvsc\" (UID: \"4e100e6e-7259-4262-be47-9c2b5be7a53a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867139 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52zmx\" (UniqueName: \"kubernetes.io/projected/fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73-kube-api-access-52zmx\") pod \"ingress-operator-5b745b69d9-jrkwc\" (UID: \"fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867159 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwlxd\" (UniqueName: \"kubernetes.io/projected/4e100e6e-7259-4262-be47-9c2b5be7a53a-kube-api-access-wwlxd\") pod \"package-server-manager-789f6589d5-xfvsc\" (UID: \"4e100e6e-7259-4262-be47-9c2b5be7a53a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867179 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-socket-dir\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867222 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsqdg\" (UniqueName: \"kubernetes.io/projected/946f5fcb-dde4-4784-965d-75a47187e703-kube-api-access-vsqdg\") pod \"service-ca-9c57cc56f-68mjx\" (UID: \"946f5fcb-dde4-4784-965d-75a47187e703\") " pod="openshift-service-ca/service-ca-9c57cc56f-68mjx"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867240 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbb27a61-7407-4cd7-84df-4b66fbdcf82d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-99hff\" (UID: \"bbb27a61-7407-4cd7-84df-4b66fbdcf82d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867255 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58d59f3d-e656-4217-9472-62508a7ccc93-config-volume\") pod \"dns-default-g2wxc\" (UID: \"58d59f3d-e656-4217-9472-62508a7ccc93\") " pod="openshift-dns/dns-default-g2wxc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867277 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jrkwc\" (UID: \"fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867298 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f781cb50-1e1b-4586-ba59-b204b1a6beec-etcd-client\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867319 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd4c3b3-6825-4bd2-97a5-330f91782d4b-config\") pod \"kube-controller-manager-operator-78b949d7b-rvkzz\" (UID: \"9cd4c3b3-6825-4bd2-97a5-330f91782d4b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rvkzz"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867336 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bbb27a61-7407-4cd7-84df-4b66fbdcf82d-images\") pod \"machine-config-operator-74547568cd-99hff\" (UID: \"bbb27a61-7407-4cd7-84df-4b66fbdcf82d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867351 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7x7q\" (UniqueName: \"kubernetes.io/projected/353e9ca9-cb3b-4c6e-b1ca-446611a12dca-kube-api-access-w7x7q\") pod \"authentication-operator-69f744f599-svhr5\" (UID: \"353e9ca9-cb3b-4c6e-b1ca-446611a12dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867370 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zq92\" (UniqueName: \"kubernetes.io/projected/a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59-kube-api-access-8zq92\") pod \"cluster-image-registry-operator-dc59b4c8b-9rrcn\" (UID: \"a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867385 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-csi-data-dir\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867400 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/797176c6-dd56-48d6-8004-ff1dd5353a50-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2n99d\" (UID: \"797176c6-dd56-48d6-8004-ff1dd5353a50\") " pod="openshift-marketplace/marketplace-operator-79b997595-2n99d"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867416 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f56fc09a-e2b7-46db-b938-f276df3f033e-registry-certificates\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867432 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73-metrics-tls\") pod \"ingress-operator-5b745b69d9-jrkwc\" (UID: \"fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867448 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9rrcn\" (UID: \"a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867462 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-mountpoint-dir\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867481 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cd4c3b3-6825-4bd2-97a5-330f91782d4b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rvkzz\" (UID: \"9cd4c3b3-6825-4bd2-97a5-330f91782d4b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rvkzz"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867497 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49c520f1-fb05-48ca-8435-1985ce668451-webhook-cert\") pod \"packageserver-d55dfcdfc-zgxcd\" (UID: \"49c520f1-fb05-48ca-8435-1985ce668451\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867511 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/528d3aa9-10bf-4029-a4d2-85768264fde8-config-volume\") pod \"collect-profiles-29556600-r9flg\" (UID: \"528d3aa9-10bf-4029-a4d2-85768264fde8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867526 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/779b2915-e0d0-4e90-9c6d-af28f555fd7b-serving-cert\") pod \"service-ca-operator-777779d784-zh465\" (UID: \"779b2915-e0d0-4e90-9c6d-af28f555fd7b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zh465"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867544 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f56fc09a-e2b7-46db-b938-f276df3f033e-trusted-ca\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867562 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fc6g\" (UniqueName: \"kubernetes.io/projected/f660255f-8f78-4876-973d-db58f2ee7020-kube-api-access-9fc6g\") pod \"openshift-config-operator-7777fb866f-sk2l6\" (UID: \"f660255f-8f78-4876-973d-db58f2ee7020\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867578 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc9r4\" (UniqueName: \"kubernetes.io/projected/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-kube-api-access-wc9r4\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.867594 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw6wb\" (UniqueName: \"kubernetes.io/projected/2332524f-f990-4ef2-90b3-8b90c389d873-kube-api-access-hw6wb\") pod \"control-plane-machine-set-operator-78cbb6b69f-pvwll\" (UID: \"2332524f-f990-4ef2-90b3-8b90c389d873\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pvwll"
Mar 13 10:07:45 crc kubenswrapper[4632]: E0313 10:07:45.867731 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:46.367716797 +0000 UTC m=+240.390246930 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.869830 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef269b18-ea84-43c2-971c-e772149acbf6-config\") pod \"console-operator-58897d9998-sbtn5\" (UID: \"ef269b18-ea84-43c2-971c-e772149acbf6\") " pod="openshift-console-operator/console-operator-58897d9998-sbtn5"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.871636 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f781cb50-1e1b-4586-ba59-b204b1a6beec-config\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.872300 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f781cb50-1e1b-4586-ba59-b204b1a6beec-etcd-service-ca\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.872831 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f781cb50-1e1b-4586-ba59-b204b1a6beec-etcd-ca\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.876726 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/353e9ca9-cb3b-4c6e-b1ca-446611a12dca-service-ca-bundle\") pod \"authentication-operator-69f744f599-svhr5\" (UID: \"353e9ca9-cb3b-4c6e-b1ca-446611a12dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.880762 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f660255f-8f78-4876-973d-db58f2ee7020-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sk2l6\" (UID: \"f660255f-8f78-4876-973d-db58f2ee7020\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.881249 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef269b18-ea84-43c2-971c-e772149acbf6-trusted-ca\") pod \"console-operator-58897d9998-sbtn5\" (UID: \"ef269b18-ea84-43c2-971c-e772149acbf6\") " pod="openshift-console-operator/console-operator-58897d9998-sbtn5"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.883755 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8be807d4-9bc2-41a1-b69f-1b0af031b5ab-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rntsr\" (UID: \"8be807d4-9bc2-41a1-b69f-1b0af031b5ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.885110 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f56fc09a-e2b7-46db-b938-f276df3f033e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.885757 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2493565c-3af9-4edf-a2f3-8a7a501e9305-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ndt\" (UID: \"2493565c-3af9-4edf-a2f3-8a7a501e9305\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.886533 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-registry-tls\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.887915 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbb27a61-7407-4cd7-84df-4b66fbdcf82d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-99hff\" (UID: \"bbb27a61-7407-4cd7-84df-4b66fbdcf82d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.888886 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f660255f-8f78-4876-973d-db58f2ee7020-serving-cert\") pod \"openshift-config-operator-7777fb866f-sk2l6\" (UID: \"f660255f-8f78-4876-973d-db58f2ee7020\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.890819 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8be807d4-9bc2-41a1-b69f-1b0af031b5ab-proxy-tls\") pod \"machine-config-controller-84d6567774-rntsr\" (UID: \"8be807d4-9bc2-41a1-b69f-1b0af031b5ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.891455 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef269b18-ea84-43c2-971c-e772149acbf6-serving-cert\") pod \"console-operator-58897d9998-sbtn5\" (UID: \"ef269b18-ea84-43c2-971c-e772149acbf6\") " pod="openshift-console-operator/console-operator-58897d9998-sbtn5"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.894164 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd4c3b3-6825-4bd2-97a5-330f91782d4b-config\") pod \"kube-controller-manager-operator-78b949d7b-rvkzz\" (UID: \"9cd4c3b3-6825-4bd2-97a5-330f91782d4b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rvkzz"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.898753 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/353e9ca9-cb3b-4c6e-b1ca-446611a12dca-config\") pod \"authentication-operator-69f744f599-svhr5\" (UID: \"353e9ca9-cb3b-4c6e-b1ca-446611a12dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.899277 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73-trusted-ca\") pod \"ingress-operator-5b745b69d9-jrkwc\" (UID: \"fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.899622 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22993daf-2b32-4be5-8eb7-f9194e903d62-config\") pod \"kube-apiserver-operator-766d6c64bb-946gp\" (UID: \"22993daf-2b32-4be5-8eb7-f9194e903d62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.899686 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9rrcn\" (UID: \"a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.899772 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b959a85-56a5-4296-9cf3-87741e1f9c39-service-ca-bundle\") pod \"router-default-5444994796-t9vht\" (UID: \"7b959a85-56a5-4296-9cf3-87741e1f9c39\") " pod="openshift-ingress/router-default-5444994796-t9vht"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.899973 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bbb27a61-7407-4cd7-84df-4b66fbdcf82d-images\") pod \"machine-config-operator-74547568cd-99hff\" (UID: \"bbb27a61-7407-4cd7-84df-4b66fbdcf82d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.900105 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c94773d8-a922-4778-b2ba-8937e9d6c19b-metrics-tls\") pod \"dns-operator-744455d44c-6fqf5\" (UID: \"c94773d8-a922-4778-b2ba-8937e9d6c19b\") " pod="openshift-dns-operator/dns-operator-744455d44c-6fqf5"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.903177 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/353e9ca9-cb3b-4c6e-b1ca-446611a12dca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-svhr5\" (UID: \"353e9ca9-cb3b-4c6e-b1ca-446611a12dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.904116 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f56fc09a-e2b7-46db-b938-f276df3f033e-registry-certificates\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.907091 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hb4c6\" (UID: \"e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.910089 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn"]
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.910153 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9v5nn"]
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.911298 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f56fc09a-e2b7-46db-b938-f276df3f033e-trusted-ca\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.911902 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7b8ca1c-c3de-4829-ab9f-860f76033c63-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hd8rx\" (UID: \"b7b8ca1c-c3de-4829-ab9f-860f76033c63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.917025 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b8ca1c-c3de-4829-ab9f-860f76033c63-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hd8rx\" (UID: \"b7b8ca1c-c3de-4829-ab9f-860f76033c63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.918394 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f781cb50-1e1b-4586-ba59-b204b1a6beec-serving-cert\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.918545 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f56fc09a-e2b7-46db-b938-f276df3f033e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.918592 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b959a85-56a5-4296-9cf3-87741e1f9c39-default-certificate\") pod \"router-default-5444994796-t9vht\" (UID: \"7b959a85-56a5-4296-9cf3-87741e1f9c39\") " pod="openshift-ingress/router-default-5444994796-t9vht"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.918866 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f781cb50-1e1b-4586-ba59-b204b1a6beec-etcd-client\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.919123 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b959a85-56a5-4296-9cf3-87741e1f9c39-stats-auth\") pod \"router-default-5444994796-t9vht\" (UID: \"7b959a85-56a5-4296-9cf3-87741e1f9c39\") " pod="openshift-ingress/router-default-5444994796-t9vht"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.919229 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/353e9ca9-cb3b-4c6e-b1ca-446611a12dca-serving-cert\") pod \"authentication-operator-69f744f599-svhr5\" (UID: \"353e9ca9-cb3b-4c6e-b1ca-446611a12dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.919645 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9rrcn\" (UID: \"a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.921549 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/32f62e32-732b-4646-85f0-45b8ea6544a6-srv-cert\") pod \"catalog-operator-68c6474976-r5v5p\" (UID: \"32f62e32-732b-4646-85f0-45b8ea6544a6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.922152 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k7tt\" (UniqueName: \"kubernetes.io/projected/32f62e32-732b-4646-85f0-45b8ea6544a6-kube-api-access-4k7tt\") pod \"catalog-operator-68c6474976-r5v5p\" (UID: \"32f62e32-732b-4646-85f0-45b8ea6544a6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.924347 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2493565c-3af9-4edf-a2f3-8a7a501e9305-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ndt\" (UID: \"2493565c-3af9-4edf-a2f3-8a7a501e9305\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.924816 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hb4c6\" (UID: \"e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.924818 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73-metrics-tls\") pod \"ingress-operator-5b745b69d9-jrkwc\" (UID: \"fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.925318 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbb27a61-7407-4cd7-84df-4b66fbdcf82d-proxy-tls\") pod \"machine-config-operator-74547568cd-99hff\" (UID: \"bbb27a61-7407-4cd7-84df-4b66fbdcf82d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.925458 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b959a85-56a5-4296-9cf3-87741e1f9c39-metrics-certs\") pod \"router-default-5444994796-t9vht\" (UID: \"7b959a85-56a5-4296-9cf3-87741e1f9c39\") " pod="openshift-ingress/router-default-5444994796-t9vht"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.925486 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cd4c3b3-6825-4bd2-97a5-330f91782d4b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rvkzz\" (UID: \"9cd4c3b3-6825-4bd2-97a5-330f91782d4b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rvkzz"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.936158 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cd4c3b3-6825-4bd2-97a5-330f91782d4b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rvkzz\" (UID: \"9cd4c3b3-6825-4bd2-97a5-330f91782d4b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rvkzz"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.938515 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/32f62e32-732b-4646-85f0-45b8ea6544a6-profile-collector-cert\") pod \"catalog-operator-68c6474976-r5v5p\" (UID: \"32f62e32-732b-4646-85f0-45b8ea6544a6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p"
Mar 13 10:07:45 crc kubenswrapper[4632]: W0313 10:07:45.938756 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c90710f_4595_425c_8be1_1436f43b5069.slice/crio-0fe3af461e6c99b0dddea09fd2fc17bc9781ce95850249ce166d4812660ac046 WatchSource:0}: Error finding container 0fe3af461e6c99b0dddea09fd2fc17bc9781ce95850249ce166d4812660ac046: Status 404 returned error can't find the container with id 0fe3af461e6c99b0dddea09fd2fc17bc9781ce95850249ce166d4812660ac046
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.941735 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22993daf-2b32-4be5-8eb7-f9194e903d62-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-946gp\" (UID: \"22993daf-2b32-4be5-8eb7-f9194e903d62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.957482 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cclg\" (UniqueName: \"kubernetes.io/projected/ef269b18-ea84-43c2-971c-e772149acbf6-kube-api-access-2cclg\") pod \"console-operator-58897d9998-sbtn5\" (UID: \"ef269b18-ea84-43c2-971c-e772149acbf6\") " pod="openshift-console-operator/console-operator-58897d9998-sbtn5"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.969382 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtrc2"]
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.970714 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/528d3aa9-10bf-4029-a4d2-85768264fde8-secret-volume\") pod \"collect-profiles-29556600-r9flg\" (UID: \"528d3aa9-10bf-4029-a4d2-85768264fde8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.970733 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62kgq\" (UniqueName: \"kubernetes.io/projected/7b959a85-56a5-4296-9cf3-87741e1f9c39-kube-api-access-62kgq\") pod \"router-default-5444994796-t9vht\" (UID: \"7b959a85-56a5-4296-9cf3-87741e1f9c39\") " pod="openshift-ingress/router-default-5444994796-t9vht"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.970760 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4db028f0-524e-46fc-aa33-da38ed7b8fa6-cert\") pod \"ingress-canary-qcb4l\" (UID: \"4db028f0-524e-46fc-aa33-da38ed7b8fa6\") " pod="openshift-ingress-canary/ingress-canary-qcb4l"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.970789 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-registration-dir\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.970823 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hcvp\" (UniqueName: \"kubernetes.io/projected/49c520f1-fb05-48ca-8435-1985ce668451-kube-api-access-2hcvp\") pod \"packageserver-d55dfcdfc-zgxcd\" (UID: \"49c520f1-fb05-48ca-8435-1985ce668451\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.970849 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2332524f-f990-4ef2-90b3-8b90c389d873-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pvwll\" (UID: \"2332524f-f990-4ef2-90b3-8b90c389d873\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pvwll"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.970900 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm8lp\" (UniqueName: \"kubernetes.io/projected/528d3aa9-10bf-4029-a4d2-85768264fde8-kube-api-access-vm8lp\") pod \"collect-profiles-29556600-r9flg\" (UID: \"528d3aa9-10bf-4029-a4d2-85768264fde8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.970928 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5h9s\" (UniqueName: \"kubernetes.io/projected/4db028f0-524e-46fc-aa33-da38ed7b8fa6-kube-api-access-p5h9s\") pod \"ingress-canary-qcb4l\" (UID: \"4db028f0-524e-46fc-aa33-da38ed7b8fa6\") " pod="openshift-ingress-canary/ingress-canary-qcb4l"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971065 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ebf1040d-57dd-47ef-b839-6f78a7c5c75f-srv-cert\") pod \"olm-operator-6b444d44fb-tqbl9\" (UID: \"ebf1040d-57dd-47ef-b839-6f78a7c5c75f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971082 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/49c520f1-fb05-48ca-8435-1985ce668451-tmpfs\") pod \"packageserver-d55dfcdfc-zgxcd\" (UID: \"49c520f1-fb05-48ca-8435-1985ce668451\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971116 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971141 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f0f88609-cbfe-4ccc-b5db-e5c1be771855-node-bootstrap-token\") pod \"machine-config-server-hlf9t\" (UID: \"f0f88609-cbfe-4ccc-b5db-e5c1be771855\") " pod="openshift-machine-config-operator/machine-config-server-hlf9t"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971156 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779b2915-e0d0-4e90-9c6d-af28f555fd7b-config\") pod \"service-ca-operator-777779d784-zh465\" (UID: \"779b2915-e0d0-4e90-9c6d-af28f555fd7b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zh465"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971201 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0e1f142-2930-4f9b-b851-f7f7df22676b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4hmjh\" (UID: \"e0e1f142-2930-4f9b-b851-f7f7df22676b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4hmjh"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971218 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2vc6\" (UniqueName: \"kubernetes.io/projected/779b2915-e0d0-4e90-9c6d-af28f555fd7b-kube-api-access-q2vc6\") pod \"service-ca-operator-777779d784-zh465\" (UID: \"779b2915-e0d0-4e90-9c6d-af28f555fd7b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zh465"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971249 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/797176c6-dd56-48d6-8004-ff1dd5353a50-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2n99d\" (UID: \"797176c6-dd56-48d6-8004-ff1dd5353a50\") " pod="openshift-marketplace/marketplace-operator-79b997595-2n99d"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971295 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx2pr\" (UniqueName: \"kubernetes.io/projected/f0f88609-cbfe-4ccc-b5db-e5c1be771855-kube-api-access-xx2pr\") pod \"machine-config-server-hlf9t\" (UID: \"f0f88609-cbfe-4ccc-b5db-e5c1be771855\") " pod="openshift-machine-config-operator/machine-config-server-hlf9t"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971314 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qtrb\" (UniqueName: \"kubernetes.io/projected/797176c6-dd56-48d6-8004-ff1dd5353a50-kube-api-access-8qtrb\") pod \"marketplace-operator-79b997595-2n99d\" (UID: \"797176c6-dd56-48d6-8004-ff1dd5353a50\") " pod="openshift-marketplace/marketplace-operator-79b997595-2n99d"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971361 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ebf1040d-57dd-47ef-b839-6f78a7c5c75f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tqbl9\" (UID: \"ebf1040d-57dd-47ef-b839-6f78a7c5c75f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971375 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49c520f1-fb05-48ca-8435-1985ce668451-apiservice-cert\") pod \"packageserver-d55dfcdfc-zgxcd\" (UID: \"49c520f1-fb05-48ca-8435-1985ce668451\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971390 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78rbb\" (UniqueName: \"kubernetes.io/projected/58d59f3d-e656-4217-9472-62508a7ccc93-kube-api-access-78rbb\") pod \"dns-default-g2wxc\" (UID: \"58d59f3d-e656-4217-9472-62508a7ccc93\") " pod="openshift-dns/dns-default-g2wxc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971432 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd756\" (UniqueName: \"kubernetes.io/projected/c822257d-9d2f-4b6f-87de-131de5cd0efe-kube-api-access-sd756\") pod \"auto-csr-approver-29556606-mkrp2\" (UID: \"c822257d-9d2f-4b6f-87de-131de5cd0efe\") " pod="openshift-infra/auto-csr-approver-29556606-mkrp2"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971447 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-plugins-dir\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971465 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/58d59f3d-e656-4217-9472-62508a7ccc93-metrics-tls\") pod \"dns-default-g2wxc\" (UID: \"58d59f3d-e656-4217-9472-62508a7ccc93\") " pod="openshift-dns/dns-default-g2wxc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971482 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e100e6e-7259-4262-be47-9c2b5be7a53a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xfvsc\" (UID: \"4e100e6e-7259-4262-be47-9c2b5be7a53a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971516 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwlxd\" (UniqueName: \"kubernetes.io/projected/4e100e6e-7259-4262-be47-9c2b5be7a53a-kube-api-access-wwlxd\") pod \"package-server-manager-789f6589d5-xfvsc\" (UID: \"4e100e6e-7259-4262-be47-9c2b5be7a53a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971532 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-socket-dir\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971554 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsqdg\" (UniqueName: \"kubernetes.io/projected/946f5fcb-dde4-4784-965d-75a47187e703-kube-api-access-vsqdg\") pod \"service-ca-9c57cc56f-68mjx\" (UID: \"946f5fcb-dde4-4784-965d-75a47187e703\") " pod="openshift-service-ca/service-ca-9c57cc56f-68mjx"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971591 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58d59f3d-e656-4217-9472-62508a7ccc93-config-volume\") pod \"dns-default-g2wxc\" (UID: \"58d59f3d-e656-4217-9472-62508a7ccc93\") " pod="openshift-dns/dns-default-g2wxc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971629 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-mountpoint-dir\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971644 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-csi-data-dir\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971686 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/797176c6-dd56-48d6-8004-ff1dd5353a50-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2n99d\" (UID: \"797176c6-dd56-48d6-8004-ff1dd5353a50\") " pod="openshift-marketplace/marketplace-operator-79b997595-2n99d"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971702 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49c520f1-fb05-48ca-8435-1985ce668451-webhook-cert\") pod \"packageserver-d55dfcdfc-zgxcd\" (UID: \"49c520f1-fb05-48ca-8435-1985ce668451\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971716 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/528d3aa9-10bf-4029-a4d2-85768264fde8-config-volume\") pod \"collect-profiles-29556600-r9flg\" (UID: \"528d3aa9-10bf-4029-a4d2-85768264fde8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971749 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/779b2915-e0d0-4e90-9c6d-af28f555fd7b-serving-cert\") pod \"service-ca-operator-777779d784-zh465\" (UID: \"779b2915-e0d0-4e90-9c6d-af28f555fd7b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zh465"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971770 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc9r4\" (UniqueName: \"kubernetes.io/projected/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-kube-api-access-wc9r4\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc"
Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971787 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw6wb\" (UniqueName: \"kubernetes.io/projected/2332524f-f990-4ef2-90b3-8b90c389d873-kube-api-access-hw6wb\") pod \"control-plane-machine-set-operator-78cbb6b69f-pvwll\" (UID: \"2332524f-f990-4ef2-90b3-8b90c389d873\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pvwll"
Mar 13 10:07:45 crc
kubenswrapper[4632]: I0313 10:07:45.971803 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/946f5fcb-dde4-4784-965d-75a47187e703-signing-key\") pod \"service-ca-9c57cc56f-68mjx\" (UID: \"946f5fcb-dde4-4784-965d-75a47187e703\") " pod="openshift-service-ca/service-ca-9c57cc56f-68mjx" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971836 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/946f5fcb-dde4-4784-965d-75a47187e703-signing-cabundle\") pod \"service-ca-9c57cc56f-68mjx\" (UID: \"946f5fcb-dde4-4784-965d-75a47187e703\") " pod="openshift-service-ca/service-ca-9c57cc56f-68mjx" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971854 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l2w2\" (UniqueName: \"kubernetes.io/projected/e0e1f142-2930-4f9b-b851-f7f7df22676b-kube-api-access-8l2w2\") pod \"multus-admission-controller-857f4d67dd-4hmjh\" (UID: \"e0e1f142-2930-4f9b-b851-f7f7df22676b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4hmjh" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971871 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pst8h\" (UniqueName: \"kubernetes.io/projected/ebf1040d-57dd-47ef-b839-6f78a7c5c75f-kube-api-access-pst8h\") pod \"olm-operator-6b444d44fb-tqbl9\" (UID: \"ebf1040d-57dd-47ef-b839-6f78a7c5c75f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.971884 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f0f88609-cbfe-4ccc-b5db-e5c1be771855-certs\") pod \"machine-config-server-hlf9t\" (UID: \"f0f88609-cbfe-4ccc-b5db-e5c1be771855\") " pod="openshift-machine-config-operator/machine-config-server-hlf9t" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.979222 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-p9gp2"] Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.979780 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-registration-dir\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.980771 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-plugins-dir\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.981118 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779b2915-e0d0-4e90-9c6d-af28f555fd7b-config\") pod \"service-ca-operator-777779d784-zh465\" (UID: \"779b2915-e0d0-4e90-9c6d-af28f555fd7b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zh465" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.981337 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.985219 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/528d3aa9-10bf-4029-a4d2-85768264fde8-config-volume\") pod \"collect-profiles-29556600-r9flg\" (UID: \"528d3aa9-10bf-4029-a4d2-85768264fde8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.985668 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/49c520f1-fb05-48ca-8435-1985ce668451-tmpfs\") pod \"packageserver-d55dfcdfc-zgxcd\" (UID: \"49c520f1-fb05-48ca-8435-1985ce668451\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" Mar 13 10:07:45 crc kubenswrapper[4632]: E0313 10:07:45.985968 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:46.485927977 +0000 UTC m=+240.508458110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.992833 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-mountpoint-dir\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.992931 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-csi-data-dir\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.993832 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/797176c6-dd56-48d6-8004-ff1dd5353a50-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2n99d\" (UID: \"797176c6-dd56-48d6-8004-ff1dd5353a50\") " pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" Mar 13 10:07:45 crc kubenswrapper[4632]: I0313 10:07:45.994324 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58d59f3d-e656-4217-9472-62508a7ccc93-config-volume\") pod \"dns-default-g2wxc\" (UID: \"58d59f3d-e656-4217-9472-62508a7ccc93\") " pod="openshift-dns/dns-default-g2wxc" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.000581 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/946f5fcb-dde4-4784-965d-75a47187e703-signing-cabundle\") pod \"service-ca-9c57cc56f-68mjx\" (UID: 
\"946f5fcb-dde4-4784-965d-75a47187e703\") " pod="openshift-service-ca/service-ca-9c57cc56f-68mjx" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.001207 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-socket-dir\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.006442 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f0f88609-cbfe-4ccc-b5db-e5c1be771855-certs\") pod \"machine-config-server-hlf9t\" (UID: \"f0f88609-cbfe-4ccc-b5db-e5c1be771855\") " pod="openshift-machine-config-operator/machine-config-server-hlf9t" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.010789 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0e1f142-2930-4f9b-b851-f7f7df22676b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4hmjh\" (UID: \"e0e1f142-2930-4f9b-b851-f7f7df22676b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4hmjh" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.011414 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ebf1040d-57dd-47ef-b839-6f78a7c5c75f-srv-cert\") pod \"olm-operator-6b444d44fb-tqbl9\" (UID: \"ebf1040d-57dd-47ef-b839-6f78a7c5c75f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.015928 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9rrcn\" (UID: \"a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.017076 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49c520f1-fb05-48ca-8435-1985ce668451-apiservice-cert\") pod \"packageserver-d55dfcdfc-zgxcd\" (UID: \"49c520f1-fb05-48ca-8435-1985ce668451\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.018658 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e100e6e-7259-4262-be47-9c2b5be7a53a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xfvsc\" (UID: \"4e100e6e-7259-4262-be47-9c2b5be7a53a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.030797 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52zmx\" (UniqueName: \"kubernetes.io/projected/fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73-kube-api-access-52zmx\") pod \"ingress-operator-5b745b69d9-jrkwc\" (UID: \"fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.036251 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/779b2915-e0d0-4e90-9c6d-af28f555fd7b-serving-cert\") pod \"service-ca-operator-777779d784-zh465\" (UID: \"779b2915-e0d0-4e90-9c6d-af28f555fd7b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zh465" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.040419 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/946f5fcb-dde4-4784-965d-75a47187e703-signing-key\") pod \"service-ca-9c57cc56f-68mjx\" (UID: \"946f5fcb-dde4-4784-965d-75a47187e703\") " pod="openshift-service-ca/service-ca-9c57cc56f-68mjx" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.051858 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mprmj\" (UniqueName: \"kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-kube-api-access-mprmj\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.054646 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/528d3aa9-10bf-4029-a4d2-85768264fde8-secret-volume\") pod \"collect-profiles-29556600-r9flg\" (UID: \"528d3aa9-10bf-4029-a4d2-85768264fde8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.055092 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f0f88609-cbfe-4ccc-b5db-e5c1be771855-node-bootstrap-token\") pod \"machine-config-server-hlf9t\" (UID: \"f0f88609-cbfe-4ccc-b5db-e5c1be771855\") " pod="openshift-machine-config-operator/machine-config-server-hlf9t" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.055218 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ebf1040d-57dd-47ef-b839-6f78a7c5c75f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tqbl9\" (UID: \"ebf1040d-57dd-47ef-b839-6f78a7c5c75f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.055407 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2332524f-f990-4ef2-90b3-8b90c389d873-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pvwll\" (UID: \"2332524f-f990-4ef2-90b3-8b90c389d873\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pvwll" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.055627 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/797176c6-dd56-48d6-8004-ff1dd5353a50-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2n99d\" (UID: \"797176c6-dd56-48d6-8004-ff1dd5353a50\") " pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.064138 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49c520f1-fb05-48ca-8435-1985ce668451-webhook-cert\") pod \"packageserver-d55dfcdfc-zgxcd\" (UID: \"49c520f1-fb05-48ca-8435-1985ce668451\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.077127 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4db028f0-524e-46fc-aa33-da38ed7b8fa6-cert\") pod \"ingress-canary-qcb4l\" (UID: \"4db028f0-524e-46fc-aa33-da38ed7b8fa6\") " pod="openshift-ingress-canary/ingress-canary-qcb4l" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.078804 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/58d59f3d-e656-4217-9472-62508a7ccc93-metrics-tls\") pod \"dns-default-g2wxc\" (UID: \"58d59f3d-e656-4217-9472-62508a7ccc93\") " pod="openshift-dns/dns-default-g2wxc" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.079383 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:46 crc kubenswrapper[4632]: E0313 10:07:46.080793 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:46.580761291 +0000 UTC m=+240.603291424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.083059 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:46 crc kubenswrapper[4632]: E0313 10:07:46.083726 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:46.58370773 +0000 UTC m=+240.606237863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.088504 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-bound-sa-token\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.098141 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.100377 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8sl88"] Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.102588 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jrkwc\" (UID: \"fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.112460 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxh5f\" (UniqueName: \"kubernetes.io/projected/f781cb50-1e1b-4586-ba59-b204b1a6beec-kube-api-access-hxh5f\") pod \"etcd-operator-b45778765-k955n\" (UID: \"f781cb50-1e1b-4586-ba59-b204b1a6beec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k955n" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.122435 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ghtc\" (UniqueName: \"kubernetes.io/projected/b7b8ca1c-c3de-4829-ab9f-860f76033c63-kube-api-access-2ghtc\") pod \"openshift-controller-manager-operator-756b6f6bc6-hd8rx\" (UID: \"b7b8ca1c-c3de-4829-ab9f-860f76033c63\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.127243 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-k955n" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.144274 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-c6jnc"] Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.157767 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6kft\" (UniqueName: \"kubernetes.io/projected/8be807d4-9bc2-41a1-b69f-1b0af031b5ab-kube-api-access-l6kft\") pod \"machine-config-controller-84d6567774-rntsr\" (UID: \"8be807d4-9bc2-41a1-b69f-1b0af031b5ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.170104 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rq5z\" (UniqueName: \"kubernetes.io/projected/96067558-b20b-411c-b1af-b8fbb61df8f7-kube-api-access-9rq5z\") pod \"migrator-59844c95c7-9wxcs\" (UID: \"96067558-b20b-411c-b1af-b8fbb61df8f7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9wxcs" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.170940 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz"] Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.181395 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.183155 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mb4q\" (UniqueName: \"kubernetes.io/projected/c94773d8-a922-4778-b2ba-8937e9d6c19b-kube-api-access-7mb4q\") pod \"dns-operator-744455d44c-6fqf5\" (UID: \"c94773d8-a922-4778-b2ba-8937e9d6c19b\") " pod="openshift-dns-operator/dns-operator-744455d44c-6fqf5" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.188190 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:46 crc kubenswrapper[4632]: E0313 10:07:46.188397 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:46.688362014 +0000 UTC m=+240.710892157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.188775 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:46 crc kubenswrapper[4632]: E0313 10:07:46.189152 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:46.689144931 +0000 UTC m=+240.711675064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.189942 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-t9vht" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.201392 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.207180 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txsm4\" (UniqueName: \"kubernetes.io/projected/e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e-kube-api-access-txsm4\") pod \"kube-storage-version-migrator-operator-b67b599dd-hb4c6\" (UID: \"e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.209177 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rvkzz" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.216665 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.247915 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2493565c-3af9-4edf-a2f3-8a7a501e9305-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ndt\" (UID: \"2493565c-3af9-4edf-a2f3-8a7a501e9305\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.254363 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7x7q\" (UniqueName: \"kubernetes.io/projected/353e9ca9-cb3b-4c6e-b1ca-446611a12dca-kube-api-access-w7x7q\") pod \"authentication-operator-69f744f599-svhr5\" (UID: \"353e9ca9-cb3b-4c6e-b1ca-446611a12dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.257577 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-w2hhj"] Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.268408 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.270487 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zq92\" (UniqueName: \"kubernetes.io/projected/a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59-kube-api-access-8zq92\") pod \"cluster-image-registry-operator-dc59b4c8b-9rrcn\" (UID: \"a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.285729 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22993daf-2b32-4be5-8eb7-f9194e903d62-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-946gp\" (UID: \"22993daf-2b32-4be5-8eb7-f9194e903d62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.289670 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:46 crc kubenswrapper[4632]: E0313 10:07:46.293474 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:46.793436147 +0000 UTC m=+240.815966280 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.299283 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.302829 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvs6x\" (UniqueName: \"kubernetes.io/projected/bbb27a61-7407-4cd7-84df-4b66fbdcf82d-kube-api-access-pvs6x\") pod \"machine-config-operator-74547568cd-99hff\" (UID: \"bbb27a61-7407-4cd7-84df-4b66fbdcf82d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.321512 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fc6g\" (UniqueName: \"kubernetes.io/projected/f660255f-8f78-4876-973d-db58f2ee7020-kube-api-access-9fc6g\") pod \"openshift-config-operator-7777fb866f-sk2l6\" (UID: \"f660255f-8f78-4876-973d-db58f2ee7020\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.341068 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zn7mn"] Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.350427 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd756\" (UniqueName: \"kubernetes.io/projected/c822257d-9d2f-4b6f-87de-131de5cd0efe-kube-api-access-sd756\") pod \"auto-csr-approver-29556606-mkrp2\" (UID: \"c822257d-9d2f-4b6f-87de-131de5cd0efe\") " pod="openshift-infra/auto-csr-approver-29556606-mkrp2" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.364499 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx2pr\" (UniqueName: \"kubernetes.io/projected/f0f88609-cbfe-4ccc-b5db-e5c1be771855-kube-api-access-xx2pr\") pod \"machine-config-server-hlf9t\" (UID: \"f0f88609-cbfe-4ccc-b5db-e5c1be771855\") " pod="openshift-machine-config-operator/machine-config-server-hlf9t" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.382808 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm8lp\" (UniqueName: \"kubernetes.io/projected/528d3aa9-10bf-4029-a4d2-85768264fde8-kube-api-access-vm8lp\") pod \"collect-profiles-29556600-r9flg\" (UID: \"528d3aa9-10bf-4029-a4d2-85768264fde8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg" Mar 13 10:07:46 crc kubenswrapper[4632]: E0313 10:07:46.393224 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:46.893203511 +0000 UTC m=+240.915733644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.392759 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.401346 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6fqf5" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.411608 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.413173 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hcvp\" (UniqueName: \"kubernetes.io/projected/49c520f1-fb05-48ca-8435-1985ce668451-kube-api-access-2hcvp\") pod \"packageserver-d55dfcdfc-zgxcd\" (UID: \"49c520f1-fb05-48ca-8435-1985ce668451\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.419823 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2vc6\" (UniqueName: \"kubernetes.io/projected/779b2915-e0d0-4e90-9c6d-af28f555fd7b-kube-api-access-q2vc6\") pod \"service-ca-operator-777779d784-zh465\" (UID: \"779b2915-e0d0-4e90-9c6d-af28f555fd7b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zh465" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.436887 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.446118 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.453303 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.471613 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9wxcs" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.475060 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.475198 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsqdg\" (UniqueName: \"kubernetes.io/projected/946f5fcb-dde4-4784-965d-75a47187e703-kube-api-access-vsqdg\") pod \"service-ca-9c57cc56f-68mjx\" (UID: \"946f5fcb-dde4-4784-965d-75a47187e703\") " pod="openshift-service-ca/service-ca-9c57cc56f-68mjx" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.491760 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qtrb\" (UniqueName: \"kubernetes.io/projected/797176c6-dd56-48d6-8004-ff1dd5353a50-kube-api-access-8qtrb\") pod \"marketplace-operator-79b997595-2n99d\" (UID: \"797176c6-dd56-48d6-8004-ff1dd5353a50\") " pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.496375 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:46 crc kubenswrapper[4632]: E0313 10:07:46.496802 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:46.996785543 +0000 UTC m=+241.019315666 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.501848 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc9r4\" (UniqueName: \"kubernetes.io/projected/09ddc697-7ac1-4896-b9e2-1ae6c59c6f47-kube-api-access-wc9r4\") pod \"csi-hostpathplugin-hvrrc\" (UID: \"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47\") " pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.529296 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.536862 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78rbb\" (UniqueName: \"kubernetes.io/projected/58d59f3d-e656-4217-9472-62508a7ccc93-kube-api-access-78rbb\") pod \"dns-default-g2wxc\" (UID: \"58d59f3d-e656-4217-9472-62508a7ccc93\") " pod="openshift-dns/dns-default-g2wxc" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.547329 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5h9s\" (UniqueName: \"kubernetes.io/projected/4db028f0-524e-46fc-aa33-da38ed7b8fa6-kube-api-access-p5h9s\") pod \"ingress-canary-qcb4l\" (UID: \"4db028f0-524e-46fc-aa33-da38ed7b8fa6\") " pod="openshift-ingress-canary/ingress-canary-qcb4l" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.562600 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.563463 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.563473 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l2w2\" (UniqueName: \"kubernetes.io/projected/e0e1f142-2930-4f9b-b851-f7f7df22676b-kube-api-access-8l2w2\") pod \"multus-admission-controller-857f4d67dd-4hmjh\" (UID: \"e0e1f142-2930-4f9b-b851-f7f7df22676b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4hmjh" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.579286 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zh465" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.591686 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4hmjh" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.593007 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pst8h\" (UniqueName: \"kubernetes.io/projected/ebf1040d-57dd-47ef-b839-6f78a7c5c75f-kube-api-access-pst8h\") pod \"olm-operator-6b444d44fb-tqbl9\" (UID: \"ebf1040d-57dd-47ef-b839-6f78a7c5c75f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.598701 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:46 crc kubenswrapper[4632]: E0313 10:07:46.599151 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:47.09913465 +0000 UTC m=+241.121664783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.604738 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwlxd\" (UniqueName: \"kubernetes.io/projected/4e100e6e-7259-4262-be47-9c2b5be7a53a-kube-api-access-wwlxd\") pod \"package-server-manager-789f6589d5-xfvsc\" (UID: \"4e100e6e-7259-4262-be47-9c2b5be7a53a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.605526 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556606-mkrp2" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.606234 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw6wb\" (UniqueName: \"kubernetes.io/projected/2332524f-f990-4ef2-90b3-8b90c389d873-kube-api-access-hw6wb\") pod \"control-plane-machine-set-operator-78cbb6b69f-pvwll\" (UID: \"2332524f-f990-4ef2-90b3-8b90c389d873\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pvwll" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.617342 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sbtn5"] Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.618074 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.626921 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-68mjx" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.653291 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.654432 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qcb4l" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.662655 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hlf9t" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.675901 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-g2wxc" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.701682 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:46 crc kubenswrapper[4632]: E0313 10:07:46.702539 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:47.202520358 +0000 UTC m=+241.225050491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.805055 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:46 crc kubenswrapper[4632]: E0313 10:07:46.805349 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:47.305336975 +0000 UTC m=+241.327867108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.838571 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pvwll" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.861679 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.873015 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9" Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.908315 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtrc2" event={"ID":"8966c5f5-d0a8-4533-842c-0930c1a97bd7","Type":"ContainerStarted","Data":"2237ac232e1fe4d9854f091db4771705644f579421680aecb051bafd7b457de5"} Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.910285 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:46 crc kubenswrapper[4632]: E0313 10:07:46.910846 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:47.410823215 +0000 UTC m=+241.433353348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.919573 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-w2hhj" event={"ID":"7d155f24-9bfc-4039-9981-10e7f724fa51","Type":"ContainerStarted","Data":"6560d4d8f94cb50cc670c974ce19515844f3d9021206a20e645ccc7bea0024b1"} Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.926768 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k955n"] Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.967578 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp" event={"ID":"54b67d35-da46-4e38-9b9a-e91855d6d88d","Type":"ContainerStarted","Data":"dd2270d6f010487fdb464c24099d1d08bc8eb87013a5ecf0899f2f88527bb38a"} Mar 13 10:07:46 crc kubenswrapper[4632]: I0313 10:07:46.972410 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr"] Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.009206 4632 generic.go:334] "Generic (PLEG): container finished" podID="d19fca6e-5095-42b6-8590-32c5b2c73308" containerID="d29dfaad69b4c668d9514564ac1fac14021c2da5ea130b61ca7e86e2c34d5223" exitCode=0 Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.009364 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" event={"ID":"d19fca6e-5095-42b6-8590-32c5b2c73308","Type":"ContainerDied","Data":"d29dfaad69b4c668d9514564ac1fac14021c2da5ea130b61ca7e86e2c34d5223"} Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.009406 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" 
event={"ID":"d19fca6e-5095-42b6-8590-32c5b2c73308","Type":"ContainerStarted","Data":"7436d5dff5db5b7d40d786566aa2dea44858e5fe31b4b8aece1de8a3e88e87cf"} Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.011915 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:47 crc kubenswrapper[4632]: E0313 10:07:47.012364 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:47.512346876 +0000 UTC m=+241.534877009 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.015814 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc"] Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.022156 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" event={"ID":"ef269b18-ea84-43c2-971c-e772149acbf6","Type":"ContainerStarted","Data":"75c15ceccf80c00f0980ce9fe8061ee72b4cae63ba83d1ef9a1e704159958941"} Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.031722 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn" event={"ID":"2c90710f-4595-425c-8be1-1436f43b5069","Type":"ContainerStarted","Data":"2d58cd0f978c6bbcd3613f18efda5d2b07f6320b4bb672f96c06d6f1b9392d0c"} Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.031790 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn" event={"ID":"2c90710f-4595-425c-8be1-1436f43b5069","Type":"ContainerStarted","Data":"0fe3af461e6c99b0dddea09fd2fc17bc9781ce95850249ce166d4812660ac046"} Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.048222 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" event={"ID":"560e6c43-4285-4ca8-98b9-874e9dcb5810","Type":"ContainerStarted","Data":"e9aab0e9cd1796940dcc2818af221f5b388f490c5b2161fb3217fdbc24d92e66"} Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.054435 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-t9vht" event={"ID":"7b959a85-56a5-4296-9cf3-87741e1f9c39","Type":"ContainerStarted","Data":"15bfcb8fb0de56bcff8ed22a6dadbf140d5efb1811983961ddae8f45269a3699"} Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.061928 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" 
event={"ID":"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a","Type":"ContainerStarted","Data":"46d5cd8b5a8d1e4d5e145a625b40cd39a2bdcba910908f1195bf38b9cf2ad7c8"} Mar 13 10:07:47 crc kubenswrapper[4632]: W0313 10:07:47.093416 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8be807d4_9bc2_41a1_b69f_1b0af031b5ab.slice/crio-f39d4790900980d6f8b0fb35dd4baa9babd1e887665cd118c696b9ba33ee881c WatchSource:0}: Error finding container f39d4790900980d6f8b0fb35dd4baa9babd1e887665cd118c696b9ba33ee881c: Status 404 returned error can't find the container with id f39d4790900980d6f8b0fb35dd4baa9babd1e887665cd118c696b9ba33ee881c Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.094206 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" event={"ID":"70f440bb-5dd8-4863-9749-bc5f7c547750","Type":"ContainerStarted","Data":"cacc884e7672aacd612df662055c2d9769da0a235fec8c1ddc593601e1331830"} Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.094242 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" event={"ID":"70f440bb-5dd8-4863-9749-bc5f7c547750","Type":"ContainerStarted","Data":"365524316a4e3e846e005a856282706fac826be9337ec760f74d5dd19061bccd"} Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.095403 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.113914 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.119252 4632 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9v5nn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.119325 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" podUID="70f440bb-5dd8-4863-9749-bc5f7c547750" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Mar 13 10:07:47 crc kubenswrapper[4632]: E0313 10:07:47.121586 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:47.614127671 +0000 UTC m=+241.636657804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.121937 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:47 crc kubenswrapper[4632]: E0313 10:07:47.123880 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:47.623870119 +0000 UTC m=+241.646400242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.220433 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" event={"ID":"275c3112-6912-49f8-9d3f-8147662fb99f","Type":"ContainerStarted","Data":"b2f35c19a6f9bc4062fbc156a0e6a89b8f0bc286049907966124c4fa1d962cf0"} Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.222990 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:47 crc kubenswrapper[4632]: E0313 10:07:47.224380 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:47.724360178 +0000 UTC m=+241.746890321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.234835 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zn7mn" event={"ID":"f5a50074-5531-442f-a0e9-0578f15634c1","Type":"ContainerStarted","Data":"c0f56571b6b9472de716bb190b1d68fe783e6f7b131b06ae9b0c01071f1d985f"} Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.253424 4632 generic.go:334] "Generic (PLEG): container finished" podID="37df1143-69fc-4d13-a5d3-790a9d14814a" containerID="1727fc9a0b7884510a5f00372a1dc955706d0b945e2fa4f057778a1cc32974e7" exitCode=0 Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.253514 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" event={"ID":"37df1143-69fc-4d13-a5d3-790a9d14814a","Type":"ContainerDied","Data":"1727fc9a0b7884510a5f00372a1dc955706d0b945e2fa4f057778a1cc32974e7"} Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.253556 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" event={"ID":"37df1143-69fc-4d13-a5d3-790a9d14814a","Type":"ContainerStarted","Data":"d1c99f569127b25b9fbcaede6867472b51e2252615cbf84c8b6a6df75564818b"} Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.344994 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:47 crc kubenswrapper[4632]: E0313 10:07:47.370238 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:47.870177367 +0000 UTC m=+241.892707500 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.437103 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p"] Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.471725 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:47 crc kubenswrapper[4632]: E0313 10:07:47.476842 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:47.976804481 +0000 UTC m=+241.999334614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.484719 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-svhr5"] Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.487268 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rvkzz"] Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.588508 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:47 crc kubenswrapper[4632]: E0313 10:07:47.588789 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:48.088776773 +0000 UTC m=+242.111306906 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.653044 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx"] Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.689876 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:47 crc kubenswrapper[4632]: E0313 10:07:47.691079 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:48.191057949 +0000 UTC m=+242.213588082 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.706644 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn"] Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.725544 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6"] Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.792971 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:47 crc kubenswrapper[4632]: E0313 10:07:47.794012 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:48.293986368 +0000 UTC m=+242.316516491 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.883121 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" podStartSLOduration=187.883103777 podStartE2EDuration="3m7.883103777s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:47.874052443 +0000 UTC m=+241.896582596" watchObservedRunningTime="2026-03-13 10:07:47.883103777 +0000 UTC m=+241.905633910" Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.885109 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zh465"] Mar 13 10:07:47 crc kubenswrapper[4632]: W0313 10:07:47.891182 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7b8ca1c_c3de_4829_ab9f_860f76033c63.slice/crio-9bd6a1e25880e0d50cb7206a62d253a15785eb5c750fcc9d272be545f04fec11 WatchSource:0}: Error finding container 9bd6a1e25880e0d50cb7206a62d253a15785eb5c750fcc9d272be545f04fec11: Status 404 returned error can't find the container with id 9bd6a1e25880e0d50cb7206a62d253a15785eb5c750fcc9d272be545f04fec11 Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.894185 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:47 crc kubenswrapper[4632]: E0313 10:07:47.894591 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:48.394542739 +0000 UTC m=+242.417072872 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:47 crc kubenswrapper[4632]: I0313 10:07:47.997838 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:48 crc kubenswrapper[4632]: E0313 10:07:48.028157 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:48.528072108 +0000 UTC m=+242.550602241 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.153153 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:48 crc kubenswrapper[4632]: E0313 10:07:48.153833 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:48.65380517 +0000 UTC m=+242.676335303 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.256204 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:48 crc kubenswrapper[4632]: E0313 10:07:48.256623 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:48.756610906 +0000 UTC m=+242.779141039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.310128 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt"] Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.353025 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" event={"ID":"560e6c43-4285-4ca8-98b9-874e9dcb5810","Type":"ContainerStarted","Data":"6158a81f875e1232b8d27bb41ad2531364da77cdf3704ac46d4ec2470ad3e550"} Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.353373 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.358974 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:48 crc kubenswrapper[4632]: E0313 10:07:48.359822 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:48.85980099 +0000 UTC m=+242.882331123 (durationBeforeRetry 500ms). 
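
Each E-line from nestedpendingoperations.go ends by scheduling the earliest next attempt ("No retries permitted until ... durationBeforeRetry 500ms"): the volume reconciler does not spin on the failing operation, it time-gates the key and moves on, and a later sync pass retries once the window expires. Below is a minimal sketch of that gating pattern, assuming the fixed 500ms delay this log shows on every failure; the real kubelet code also applies exponential backoff on some paths, which this sketch omits.

    package main

    import (
    	"fmt"
    	"time"
    )

    // retryGate sketches the time-gated retry visible in the log: a failed
    // volume operation records the earliest next attempt, and later sync
    // passes consult that deadline instead of retrying immediately.
    type retryGate struct {
    	notBefore map[string]time.Time
    }

    func newRetryGate() *retryGate {
    	return &retryGate{notBefore: map[string]time.Time{}}
    }

    // allowed reports whether the operation keyed by volume/pod may run yet.
    // The zero time compares as "in the past", so unknown keys are allowed.
    func (g *retryGate) allowed(key string) bool {
    	return time.Now().After(g.notBefore[key])
    }

    // fail schedules the next attempt 500ms out, the durationBeforeRetry
    // value this log shows on every failure.
    func (g *retryGate) fail(key string) {
    	g.notBefore[key] = time.Now().Add(500 * time.Millisecond)
    	fmt.Printf("no retries for %q until %s (durationBeforeRetry 500ms)\n",
    		key, g.notBefore[key].Format(time.RFC3339Nano))
    }

    func main() {
    	g := newRetryGate()
    	key := "kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
    	g.fail(key)                                   // TearDown failed: gate the next attempt
    	fmt.Println("retry allowed:", g.allowed(key)) // false inside the 500ms window
    	time.Sleep(600 * time.Millisecond)
    	fmt.Println("retry allowed:", g.allowed(key)) // true once the window has passed
    }
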
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.362147 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6fqf5"] Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.370839 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-99hff"] Mar 13 10:07:48 crc kubenswrapper[4632]: W0313 10:07:48.375425 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod779b2915_e0d0_4e90_9c6d_af28f555fd7b.slice/crio-0b59f9c490eaefcaa625809b8cbeebb469b353d8229d8a4081413e9f101a689c WatchSource:0}: Error finding container 0b59f9c490eaefcaa625809b8cbeebb469b353d8229d8a4081413e9f101a689c: Status 404 returned error can't find the container with id 0b59f9c490eaefcaa625809b8cbeebb469b353d8229d8a4081413e9f101a689c Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.419822 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6"] Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.483000 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9sqbn" podStartSLOduration=189.482970019 podStartE2EDuration="3m9.482970019s" podCreationTimestamp="2026-03-13 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:48.410891006 +0000 UTC m=+242.433421149" watchObservedRunningTime="2026-03-13 10:07:48.482970019 +0000 UTC m=+242.505500152" Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.494319 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:48 crc kubenswrapper[4632]: E0313 10:07:48.496380 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:48.996362182 +0000 UTC m=+243.018892315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.498608 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hlf9t" event={"ID":"f0f88609-cbfe-4ccc-b5db-e5c1be771855","Type":"ContainerStarted","Data":"ff16b2a3d9f3d7d99f8d4ce7d4d78b9e1cab2df5df81d37700d025fd29eb7322"} Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.510115 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" event={"ID":"f660255f-8f78-4876-973d-db58f2ee7020","Type":"ContainerStarted","Data":"aef5416b84402461bc252efa526c879aee119a858392680529c54a17da1ad089"} Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.540706 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" event={"ID":"32f62e32-732b-4646-85f0-45b8ea6544a6","Type":"ContainerStarted","Data":"90b7ec6506badd9443bd2435c386d50093b4439a3938b26453bc3448d5b17f87"} Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.588353 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" event={"ID":"275c3112-6912-49f8-9d3f-8147662fb99f","Type":"ContainerStarted","Data":"659c231578759d8866d439292e45b9c6aaafb089bba68dedf66a629dc8c40639"} Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.599598 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:48 crc kubenswrapper[4632]: E0313 10:07:48.600310 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:49.10028473 +0000 UTC m=+243.122814863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.601858 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.602146 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hvrrc"] Mar 13 10:07:48 crc kubenswrapper[4632]: E0313 10:07:48.602322 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:49.102306201 +0000 UTC m=+243.124836334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.605135 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp" event={"ID":"54b67d35-da46-4e38-9b9a-e91855d6d88d","Type":"ContainerStarted","Data":"8feffc3b7bf857c2072a56dc9b0fd9b862356263b01ceb8dccd59275afac9e52"} Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.628693 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rvkzz" event={"ID":"9cd4c3b3-6825-4bd2-97a5-330f91782d4b","Type":"ContainerStarted","Data":"9273fa4a8c3b1d68730f94261ebd33aa40699d16a884dcdb48f02b215427c5bb"} Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.630690 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr" event={"ID":"8be807d4-9bc2-41a1-b69f-1b0af031b5ab","Type":"ContainerStarted","Data":"f39d4790900980d6f8b0fb35dd4baa9babd1e887665cd118c696b9ba33ee881c"} Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.631854 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn" event={"ID":"a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59","Type":"ContainerStarted","Data":"2b9c22a14a4a7ae7fff9ce82d82b8b32f6520e8ebcbfa421349d5b033517659a"} Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.636629 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp"] Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 
10:07:48.640268 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd"] Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.641230 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx" event={"ID":"b7b8ca1c-c3de-4829-ab9f-860f76033c63","Type":"ContainerStarted","Data":"9bd6a1e25880e0d50cb7206a62d253a15785eb5c750fcc9d272be545f04fec11"} Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.664483 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2n99d"] Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.675507 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" event={"ID":"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a","Type":"ContainerStarted","Data":"7595f698758f3f9ece4af82a45628aeb01bfa58cde4c80fefdeaf746f39aba12"} Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.677526 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.707877 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:48 crc kubenswrapper[4632]: E0313 10:07:48.708019 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:49.207994506 +0000 UTC m=+243.230524639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.708224 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:48 crc kubenswrapper[4632]: E0313 10:07:48.709036 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:49.209027856 +0000 UTC m=+243.231557989 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.726272 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" podStartSLOduration=187.726237306 podStartE2EDuration="3m7.726237306s" podCreationTimestamp="2026-03-13 10:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:48.696515362 +0000 UTC m=+242.719045495" watchObservedRunningTime="2026-03-13 10:07:48.726237306 +0000 UTC m=+242.748767439" Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.756427 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-w2hhj" Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.802271 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-k955n" event={"ID":"f781cb50-1e1b-4586-ba59-b204b1a6beec","Type":"ContainerStarted","Data":"c52bdf516fa5eedcad8ea9ff2d5e00053c74f310f559396f45b05df9adae9a66"} Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.803842 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.803898 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.810525 4632 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8sl88 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" start-of-body= Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.810594 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" podUID="f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.811985 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:48 crc kubenswrapper[4632]: E0313 10:07:48.812420 4632 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:49.312399474 +0000 UTC m=+243.334929607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.914341 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:48 crc kubenswrapper[4632]: E0313 10:07:48.919974 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:49.419958877 +0000 UTC m=+243.442489010 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:48 crc kubenswrapper[4632]: I0313 10:07:48.985575 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtrc2" event={"ID":"8966c5f5-d0a8-4533-842c-0930c1a97bd7","Type":"ContainerStarted","Data":"1fb481567c320a802926357902cdf9e454a08bef3b34ac3ab075ac8849449faf"} Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.016269 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:49 crc kubenswrapper[4632]: E0313 10:07:49.077454 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:49.577426033 +0000 UTC m=+243.599956166 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.100468 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc" event={"ID":"fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73","Type":"ContainerStarted","Data":"5bb775d7a4d37ab105f66a4275bfb44cc48f9a56145a7c70d951017418982da0"} Mar 13 10:07:49 crc kubenswrapper[4632]: E0313 10:07:49.279048 4632 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8be807d4_9bc2_41a1_b69f_1b0af031b5ab.slice/crio-4472e43626812cd6438ed2e942691abb4a297046dc1f20f89eb299f8aec4a1d2.scope\": RecentStats: unable to find data in memory cache]" Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.279575 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zn7mn" event={"ID":"f5a50074-5531-442f-a0e9-0578f15634c1","Type":"ContainerStarted","Data":"662793b7c27b62a99fd064350b3cd52eb21f393bbf5603bbcbf03a65855922bf"} Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.343797 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hmljp" podStartSLOduration=190.343775878 podStartE2EDuration="3m10.343775878s" podCreationTimestamp="2026-03-13 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:49.327782333 +0000 UTC m=+243.350312466" watchObservedRunningTime="2026-03-13 10:07:49.343775878 +0000 UTC m=+243.366306011" Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.344336 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:49 crc kubenswrapper[4632]: E0313 10:07:49.344864 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:49.844799078 +0000 UTC m=+243.867329211 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.380636 4632 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xthqz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.380745 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" podUID="560e6c43-4285-4ca8-98b9-874e9dcb5810" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.394934 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" event={"ID":"353e9ca9-cb3b-4c6e-b1ca-446611a12dca","Type":"ContainerStarted","Data":"0238ab2e7a36fdf21574adc11d217b059ba17531c3b892fddcd341539fdf7844"} Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.444650 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-w2hhj" podStartSLOduration=189.444613785 podStartE2EDuration="3m9.444613785s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:49.411853919 +0000 UTC m=+243.434384052" watchObservedRunningTime="2026-03-13 10:07:49.444613785 +0000 UTC m=+243.467143928" Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.461176 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.463031 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4hmjh"] Mar 13 10:07:49 crc kubenswrapper[4632]: E0313 10:07:49.468901 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:49.968831946 +0000 UTC m=+243.991362079 (durationBeforeRetry 500ms). 
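
The pod_startup_latency_tracker lines are derived values rather than new events: podStartE2EDuration appears to be watchObservedRunningTime minus podCreationTimestamp, and the zeroed firstStartedPulling/lastFinishedPulling timestamps indicate no image pull was observed for these pods. The arithmetic checks out for downloads-7954f5f757-w2hhj above; a short verification, with both timestamps copied from that entry:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values copied from the downloads-7954f5f757-w2hhj startup entry above.
    	created, err := time.Parse(time.RFC3339, "2026-03-13T10:04:40Z")
    	if err != nil {
    		panic(err)
    	}
    	observed, err := time.Parse(time.RFC3339Nano, "2026-03-13T10:07:49.444613785Z")
    	if err != nil {
    		panic(err)
    	}
    	// Prints 3m9.444613785s, matching podStartSLOduration=189.444613785.
    	fmt.Println(observed.Sub(created))
    }
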
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.485925 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.510628 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" podStartSLOduration=190.510601853 podStartE2EDuration="3m10.510601853s" podCreationTimestamp="2026-03-13 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:49.507773586 +0000 UTC m=+243.530303719" watchObservedRunningTime="2026-03-13 10:07:49.510601853 +0000 UTC m=+243.533131986" Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.554357 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qcb4l"] Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.572640 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:49 crc kubenswrapper[4632]: E0313 10:07:49.573073 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:50.073054391 +0000 UTC m=+244.095584524 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.618309 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pvwll"] Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.629225 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-zn7mn" podStartSLOduration=190.629208651 podStartE2EDuration="3m10.629208651s" podCreationTimestamp="2026-03-13 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:49.627575717 +0000 UTC m=+243.650105850" watchObservedRunningTime="2026-03-13 10:07:49.629208651 +0000 UTC m=+243.651738784" Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.675739 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:49 crc kubenswrapper[4632]: E0313 10:07:49.678724 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:50.178680524 +0000 UTC m=+244.201210657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.682349 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:49 crc kubenswrapper[4632]: E0313 10:07:49.682971 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:50.182936291 +0000 UTC m=+244.205466424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.714135 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9wxcs"] Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.726149 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556606-mkrp2"] Mar 13 10:07:49 crc kubenswrapper[4632]: W0313 10:07:49.749151 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2332524f_f990_4ef2_90b3_8b90c389d873.slice/crio-e3d7f16736e0371190aa67ce95de1cd66666710393070685ef1ec194c35672ea WatchSource:0}: Error finding container e3d7f16736e0371190aa67ce95de1cd66666710393070685ef1ec194c35672ea: Status 404 returned error can't find the container with id e3d7f16736e0371190aa67ce95de1cd66666710393070685ef1ec194c35672ea Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.765816 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9"] Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.789812 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:49 crc kubenswrapper[4632]: E0313 10:07:49.790216 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:50.290197638 +0000 UTC m=+244.312727771 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.800459 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-68mjx"] Mar 13 10:07:49 crc kubenswrapper[4632]: W0313 10:07:49.844441 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0e1f142_2930_4f9b_b851_f7f7df22676b.slice/crio-94e0a638ee2ae51c16d5af96281e3d95c3d9ca2db3d7fd1fb46c71f1257c5158 WatchSource:0}: Error finding container 94e0a638ee2ae51c16d5af96281e3d95c3d9ca2db3d7fd1fb46c71f1257c5158: Status 404 returned error can't find the container with id 94e0a638ee2ae51c16d5af96281e3d95c3d9ca2db3d7fd1fb46c71f1257c5158 Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.885982 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.898753 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:49 crc kubenswrapper[4632]: E0313 10:07:49.899206 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:50.399191849 +0000 UTC m=+244.421721982 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.903330 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-g2wxc"] Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.920808 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc"] Mar 13 10:07:49 crc kubenswrapper[4632]: I0313 10:07:49.984566 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg"] Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.001867 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:50 crc kubenswrapper[4632]: E0313 10:07:50.002223 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:50.50220402 +0000 UTC m=+244.524734153 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:50 crc kubenswrapper[4632]: W0313 10:07:50.084622 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58d59f3d_e656_4217_9472_62508a7ccc93.slice/crio-9696f157835f576c90de9fc6fb04fe18f862a43e1e14381bfaf3ea5fa2f8c5df WatchSource:0}: Error finding container 9696f157835f576c90de9fc6fb04fe18f862a43e1e14381bfaf3ea5fa2f8c5df: Status 404 returned error can't find the container with id 9696f157835f576c90de9fc6fb04fe18f862a43e1e14381bfaf3ea5fa2f8c5df Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.106919 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:50 crc kubenswrapper[4632]: E0313 10:07:50.107509 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-13 10:07:50.607493576 +0000 UTC m=+244.630023709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.225670 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:50 crc kubenswrapper[4632]: E0313 10:07:50.226079 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:50.726059353 +0000 UTC m=+244.748589486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.276801 4632 ???:1] "http: TLS handshake error from 192.168.126.11:51414: no serving certificate available for the kubelet" Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.334674 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:50 crc kubenswrapper[4632]: E0313 10:07:50.335105 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:50.835092985 +0000 UTC m=+244.857623118 (durationBeforeRetry 500ms). 
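nestedpendingoperations is the kubelet layer that serializes volume operations and schedules their retries; every failure above is re-queued with durationBeforeRetry 500ms, which is why the same UnmountVolume/MountVolume pair reappears roughly twice per second. The sketch below imitates that retry loop with the apimachinery wait helpers. It illustrates the pattern only, not the kubelet's actual code path: the 500ms base delay comes from the log, while the growth factor and step budget are assumptions (in this window the log shows a flat 500ms delay).

    package main

    import (
        "errors"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        backoff := wait.Backoff{
            Duration: 500 * time.Millisecond, // matches durationBeforeRetry in the log
            Factor:   2.0,                    // assumed; the log shows a constant delay
            Steps:    6,                      // assumed retry budget for this sketch
        }
        attempt := 0
        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            attempt++
            fmt.Printf("attempt %d: MountDevice stand-in fails, driver not registered\n", attempt)
            return false, nil // not done and no hard error, so retry after the delay
        })
        if errors.Is(err, wait.ErrWaitTimeout) {
            fmt.Println("budget exhausted; in the kubelet the operation simply stays queued")
        }
    }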
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.417901 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-68mjx" event={"ID":"946f5fcb-dde4-4784-965d-75a47187e703","Type":"ContainerStarted","Data":"2636a650497e2001cc9b3101d94e482e82f27bd0bc1ba4c4ff52faa01ca79e70"} Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.438490 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:50 crc kubenswrapper[4632]: E0313 10:07:50.439009 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:50.938992084 +0000 UTC m=+244.961522217 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.490141 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" event={"ID":"49c520f1-fb05-48ca-8435-1985ce668451","Type":"ContainerStarted","Data":"c54852cb281200c3dfab6510e8cb54c5924c24038a30081db00b84832819abcb"} Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.511719 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zh465" event={"ID":"779b2915-e0d0-4e90-9c6d-af28f555fd7b","Type":"ContainerStarted","Data":"0b59f9c490eaefcaa625809b8cbeebb469b353d8229d8a4081413e9f101a689c"} Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.544146 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:50 crc kubenswrapper[4632]: E0313 10:07:50.544527 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:51.044515164 +0000 UTC m=+245.067045297 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.596698 4632 ???:1] "http: TLS handshake error from 192.168.126.11:51430: no serving certificate available for the kubelet" Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.602308 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff" event={"ID":"bbb27a61-7407-4cd7-84df-4b66fbdcf82d","Type":"ContainerStarted","Data":"978d841db0ae703cb57743ebee1052df0e79457f163667ee122d8877204644c5"} Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.649725 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.650327 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hlf9t" event={"ID":"f0f88609-cbfe-4ccc-b5db-e5c1be771855","Type":"ContainerStarted","Data":"37785bfa0feaa29bfa5e7bf2222e8c269922d184a39c7f35513f8811c878debd"} Mar 13 10:07:50 crc kubenswrapper[4632]: E0313 10:07:50.650648 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:51.150628338 +0000 UTC m=+245.173158471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.670620 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg" event={"ID":"528d3aa9-10bf-4029-a4d2-85768264fde8","Type":"ContainerStarted","Data":"1137745f79e5dd4b86f11690ff5ed0914b045872452dc8054e30e019f43d068c"} Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.717821 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pvwll" event={"ID":"2332524f-f990-4ef2-90b3-8b90c389d873","Type":"ContainerStarted","Data":"e3d7f16736e0371190aa67ce95de1cd66666710393070685ef1ec194c35672ea"} Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.718751 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9wxcs" event={"ID":"96067558-b20b-411c-b1af-b8fbb61df8f7","Type":"ContainerStarted","Data":"cf854e7992ea940d3b22d1d6f00a4593b8bc5d524fdb7929fd54bc70cd27491d"} Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.719507 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9" event={"ID":"ebf1040d-57dd-47ef-b839-6f78a7c5c75f","Type":"ContainerStarted","Data":"a8ecc63c9ed7db7b994c34ee5fa665d5251ee69358655fb999be7016a6cf0616"} Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.734185 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" event={"ID":"353e9ca9-cb3b-4c6e-b1ca-446611a12dca","Type":"ContainerStarted","Data":"696b18c58833c0581e6bf36ae1881e00a6717c6dc6b1a5150c21fe634a2b6edb"} Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.737795 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6fqf5" event={"ID":"c94773d8-a922-4778-b2ba-8937e9d6c19b","Type":"ContainerStarted","Data":"fff7b0aa35e9d16e30d04426b48150081a0880479b641def5d6a9bcfb7f47cd6"} Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.751202 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:50 crc kubenswrapper[4632]: E0313 10:07:50.755911 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:51.255893654 +0000 UTC m=+245.278423787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.793891 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc" event={"ID":"4e100e6e-7259-4262-be47-9c2b5be7a53a","Type":"ContainerStarted","Data":"b9e3d723e372f0843b9d5d3305252118cb4fcdcab3fc206a25062c241fe435f9"} Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.797232 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-hlf9t" podStartSLOduration=7.797214583 podStartE2EDuration="7.797214583s" podCreationTimestamp="2026-03-13 10:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:50.717286551 +0000 UTC m=+244.739816684" watchObservedRunningTime="2026-03-13 10:07:50.797214583 +0000 UTC m=+244.819744716" Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.842149 4632 ???:1] "http: TLS handshake error from 192.168.126.11:51446: no serving certificate available for the kubelet" Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.853774 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:50 crc kubenswrapper[4632]: E0313 10:07:50.854294 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:51.354270191 +0000 UTC m=+245.376800324 (durationBeforeRetry 500ms). 
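The recurring "no serving certificate available for the kubelet" lines are a separate problem from the volume errors: TLS connections to the kubelet's HTTPS port are being dropped because its serving certificate has not been issued yet. With serving-certificate rotation, that certificate arrives through a CertificateSigningRequest for the kubernetes.io/kubelet-serving signer, and something has to approve it; the openshift-infra/auto-csr-approver job visible in this log is presumably what does so here. A client-go sketch for inspecting CSR state, again assuming a made-up kubeconfig path:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // hypothetical path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        csrs, err := cs.CertificatesV1().CertificateSigningRequests().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, csr := range csrs.Items {
            state := "Pending" // no conditions yet means nobody has approved or denied it
            for _, cond := range csr.Status.Conditions {
                state = string(cond.Type) // Approved, Denied, or Failed
            }
            fmt.Printf("%-50s signer=%-35s %s\n", csr.Name, csr.Spec.SignerName, state)
        }
    }

Until the kubelet-serving CSR is approved and the certificate lands, every scrape of the kubelet's HTTPS endpoint will keep producing these handshake errors.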
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.866654 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-g2wxc" event={"ID":"58d59f3d-e656-4217-9472-62508a7ccc93","Type":"ContainerStarted","Data":"9696f157835f576c90de9fc6fb04fe18f862a43e1e14381bfaf3ea5fa2f8c5df"} Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.918617 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp" event={"ID":"22993daf-2b32-4be5-8eb7-f9194e903d62","Type":"ContainerStarted","Data":"a1638203d2af94586ec485f42c4fbf775571059e1c2de85a3a1d38eac9847322"} Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.955608 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" podStartSLOduration=191.955581967 podStartE2EDuration="3m11.955581967s" podCreationTimestamp="2026-03-13 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:50.799925328 +0000 UTC m=+244.822455461" watchObservedRunningTime="2026-03-13 10:07:50.955581967 +0000 UTC m=+244.978112110" Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.956812 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.958134 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-k955n" podStartSLOduration=190.958117188 podStartE2EDuration="3m10.958117188s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:50.955435924 +0000 UTC m=+244.977966057" watchObservedRunningTime="2026-03-13 10:07:50.958117188 +0000 UTC m=+244.980647321" Mar 13 10:07:50 crc kubenswrapper[4632]: E0313 10:07:50.959372 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:51.459351494 +0000 UTC m=+245.481881627 (durationBeforeRetry 500ms). 
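The pod_startup_latency_tracker entries are plain arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals it whenever no image pull occurred (firstStartedPulling and lastFinishedPulling are the zero time throughout this log). Re-deriving the authentication-operator figures from just above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Both strings are Go's default time format, copied verbatim from the log.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-03-13 10:04:39 +0000 UTC")
        observed, _ := time.Parse(layout, "2026-03-13 10:07:50.955581967 +0000 UTC")
        fmt.Println(observed.Sub(created)) // 3m11.955581967s, the podStartE2EDuration above
    }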
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:50 crc kubenswrapper[4632]: I0313 10:07:50.960319 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-t9vht" event={"ID":"7b959a85-56a5-4296-9cf3-87741e1f9c39","Type":"ContainerStarted","Data":"c0db1ffabe3d33862c8266179a821f8fd8c1a4906081849cc73b575a98544e3b"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.030035 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtrc2" event={"ID":"8966c5f5-d0a8-4533-842c-0930c1a97bd7","Type":"ContainerStarted","Data":"ec9be579ef24f67bc9a4d0bd07390f0db6e2e0a609ceb2fe6ac4a52cee11b067"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.044285 4632 ???:1] "http: TLS handshake error from 192.168.126.11:51450: no serving certificate available for the kubelet" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.092985 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:51 crc kubenswrapper[4632]: E0313 10:07:51.093650 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:51.593634279 +0000 UTC m=+245.616164412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.098432 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" event={"ID":"797176c6-dd56-48d6-8004-ff1dd5353a50","Type":"ContainerStarted","Data":"f11fbb0ec92177c2b8cb772cacb63ff7d8a26b02bee6907aaa00dedbedf68d98"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.131461 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6" event={"ID":"e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e","Type":"ContainerStarted","Data":"eac967ea2e870d8d27ce821cf58f72bcea7cf75f33059be8e3c88ba059d1a1ac"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.150510 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-t9vht" podStartSLOduration=191.150481592 podStartE2EDuration="3m11.150481592s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:51.094995846 +0000 UTC m=+245.117525989" watchObservedRunningTime="2026-03-13 10:07:51.150481592 +0000 UTC m=+245.173011725" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.152861 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qtrc2" podStartSLOduration=192.15285124 podStartE2EDuration="3m12.15285124s" podCreationTimestamp="2026-03-13 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:51.149614745 +0000 UTC m=+245.172144908" watchObservedRunningTime="2026-03-13 10:07:51.15285124 +0000 UTC m=+245.175381373" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.200176 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-t9vht" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.200431 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:51 crc kubenswrapper[4632]: E0313 10:07:51.200852 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:51.700835994 +0000 UTC m=+245.723366127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.207381 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:07:51 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:07:51 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:07:51 crc kubenswrapper[4632]: healthz check failed Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.207443 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.213869 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" event={"ID":"275c3112-6912-49f8-9d3f-8147662fb99f","Type":"ContainerStarted","Data":"16e1c9b7987925e139c95cc985e936b717e71be620d5b4e52242b0526d6a2335"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.237607 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556606-mkrp2" event={"ID":"c822257d-9d2f-4b6f-87de-131de5cd0efe","Type":"ContainerStarted","Data":"4b486b426e38ba0d310d07052394a9d5bdba25cfa8d2705294f114f94eaedc81"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.258654 4632 ???:1] "http: TLS handshake error from 192.168.126.11:51462: no serving certificate available for the kubelet" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.272804 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-c6jnc" podStartSLOduration=190.272775033 podStartE2EDuration="3m10.272775033s" podCreationTimestamp="2026-03-13 10:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:51.270443566 +0000 UTC m=+245.292973699" watchObservedRunningTime="2026-03-13 10:07:51.272775033 +0000 UTC m=+245.295305166" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.278920 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-w2hhj" event={"ID":"7d155f24-9bfc-4039-9981-10e7f724fa51","Type":"ContainerStarted","Data":"dc3a73e428d40f73e3034fb8b7d18fcfe7453c6673209f7b70847a9a508f90d4"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.280702 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.280756 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w2hhj" 
podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.283555 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" event={"ID":"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47","Type":"ContainerStarted","Data":"6bb360edaeb98a0ce0225fba59f7c71dd52dfa0c38be17890b61097dd0c283b3"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.323553 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:51 crc kubenswrapper[4632]: E0313 10:07:51.323874 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:51.82385677 +0000 UTC m=+245.846386913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.365460 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" event={"ID":"ef269b18-ea84-43c2-971c-e772149acbf6","Type":"ContainerStarted","Data":"faaf308e22c1a8d08431430b330cacf53efc9923cc70f0515be295533e608c79"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.367080 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.395273 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" event={"ID":"32f62e32-732b-4646-85f0-45b8ea6544a6","Type":"ContainerStarted","Data":"4f41aedb607002fa771d4b82bf1fb15a527c048ee3048ce7cd9db7dc1d8b7961"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.396658 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.408855 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qcb4l" event={"ID":"4db028f0-524e-46fc-aa33-da38ed7b8fa6","Type":"ContainerStarted","Data":"05d9328f5993ff28efe917f43052a2d7d2b56f187f221997b229d80138a9668c"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.419956 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4hmjh" event={"ID":"e0e1f142-2930-4f9b-b851-f7f7df22676b","Type":"ContainerStarted","Data":"94e0a638ee2ae51c16d5af96281e3d95c3d9ca2db3d7fd1fb46c71f1257c5158"} Mar 13 10:07:51 crc kubenswrapper[4632]: 
I0313 10:07:51.421109 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt" event={"ID":"2493565c-3af9-4edf-a2f3-8a7a501e9305","Type":"ContainerStarted","Data":"1fc8ce799e02b250ba115dbeb135562734f9d1f478f83e9ebd8c897a9aa0f527"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.422577 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr" event={"ID":"8be807d4-9bc2-41a1-b69f-1b0af031b5ab","Type":"ContainerStarted","Data":"4472e43626812cd6438ed2e942691abb4a297046dc1f20f89eb299f8aec4a1d2"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.425109 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:51 crc kubenswrapper[4632]: E0313 10:07:51.428201 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:51.928173937 +0000 UTC m=+245.950704300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.431423 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" event={"ID":"d19fca6e-5095-42b6-8590-32c5b2c73308","Type":"ContainerStarted","Data":"e308322021135c5ed9ce8a32583947cf382a4bc2a981fcbfb12a53b54fda4790"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.483251 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx" event={"ID":"b7b8ca1c-c3de-4829-ab9f-860f76033c63","Type":"ContainerStarted","Data":"afd7707f1200eec84045aec7e26bcc717b636a687a21f4ec5c7a205ad80ec7f3"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.485563 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podStartSLOduration=192.485542961 podStartE2EDuration="3m12.485542961s" podCreationTimestamp="2026-03-13 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:51.484160413 +0000 UTC m=+245.506690546" watchObservedRunningTime="2026-03-13 10:07:51.485542961 +0000 UTC m=+245.508073084" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.496255 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc" 
event={"ID":"fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73","Type":"ContainerStarted","Data":"b19f68b6880ea930ca87409ab0f966556f18774e7a975f90f350a496f8371831"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.508342 4632 ???:1] "http: TLS handshake error from 192.168.126.11:51466: no serving certificate available for the kubelet" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.523990 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" event={"ID":"37df1143-69fc-4d13-a5d3-790a9d14814a","Type":"ContainerStarted","Data":"671ef1ed2b00036ec2e404981c153eb4a8e77da75376b49caef1e2b96bc79aec"} Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.529195 4632 patch_prober.go:28] interesting pod/console-operator-58897d9998-sbtn5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.529266 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.531239 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:51 crc kubenswrapper[4632]: E0313 10:07:51.533297 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:52.03327393 +0000 UTC m=+246.055804223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.569662 4632 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r5v5p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.569786 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" podUID="32f62e32-732b-4646-85f0-45b8ea6544a6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.570038 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.661547 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:51 crc kubenswrapper[4632]: E0313 10:07:51.662126 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:52.162107905 +0000 UTC m=+246.184638028 (durationBeforeRetry 500ms). 
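Two distinct probe-failure shapes appear in this window. The router's startup probe reaches its target but receives HTTP 500 from an aggregated healthz handler whose body names the failing sub-checks ([-]backend-http, [-]has-synced), whereas the downloads, console-operator, and catalog-operator readiness probes fail a layer lower with connection refused, because nothing is listening on the port yet. The sketch below mirrors the HTTPGet probe rule the kubelet applies, where any 2xx or 3xx status is success and everything else fails with the start of the body logged; it is an illustration, not the kubelet's prober code.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // probeHTTP applies the kubelet's HTTPGet success rule: 200 <= status < 400.
    func probeHTTP(url string) error {
        client := &http.Client{Timeout: 1 * time.Second} // plays the role of timeoutSeconds
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "connect: connection refused", as in the readiness failures above
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(io.LimitReader(resp.Body, 10*1024)) // cap the body like a prober would
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return nil
        }
        return fmt.Errorf("HTTP probe failed with statuscode: %d, start-of-body: %s", resp.StatusCode, body)
    }

    func main() {
        // Address taken from the downloads pod probe in the log.
        if err := probeHTTP("http://10.217.0.12:8080/"); err != nil {
            fmt.Println("Probe failed:", err)
        }
    }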
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.729218 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc" podStartSLOduration=190.729198655 podStartE2EDuration="3m10.729198655s" podCreationTimestamp="2026-03-13 10:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:51.728824578 +0000 UTC m=+245.751354731" watchObservedRunningTime="2026-03-13 10:07:51.729198655 +0000 UTC m=+245.751728788" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.730405 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hd8rx" podStartSLOduration=191.73039721 podStartE2EDuration="3m11.73039721s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:51.601732409 +0000 UTC m=+245.624262542" watchObservedRunningTime="2026-03-13 10:07:51.73039721 +0000 UTC m=+245.752927353" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.772522 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:51 crc kubenswrapper[4632]: E0313 10:07:51.773112 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:52.273067066 +0000 UTC m=+246.295597209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.773269 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.773312 4632 ???:1] "http: TLS handshake error from 192.168.126.11:51482: no serving certificate available for the kubelet" Mar 13 10:07:51 crc kubenswrapper[4632]: E0313 10:07:51.773840 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:52.273828481 +0000 UTC m=+246.296358614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.844109 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" podStartSLOduration=190.844093518 podStartE2EDuration="3m10.844093518s" podCreationTimestamp="2026-03-13 10:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:51.842918364 +0000 UTC m=+245.865448497" watchObservedRunningTime="2026-03-13 10:07:51.844093518 +0000 UTC m=+245.866623651" Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.875387 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:51 crc kubenswrapper[4632]: E0313 10:07:51.875750 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:52.375731599 +0000 UTC m=+246.398261742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.983294 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:51 crc kubenswrapper[4632]: E0313 10:07:51.983901 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:52.483886844 +0000 UTC m=+246.506416977 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:51 crc kubenswrapper[4632]: I0313 10:07:51.987397 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.088765 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:52 crc kubenswrapper[4632]: E0313 10:07:52.089075 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:52.589020737 +0000 UTC m=+246.611550870 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.089494 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:52 crc kubenswrapper[4632]: E0313 10:07:52.089903 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:52.589895365 +0000 UTC m=+246.612425498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.210634 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:52 crc kubenswrapper[4632]: E0313 10:07:52.210757 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:52.710742098 +0000 UTC m=+246.733272231 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.210992 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:52 crc kubenswrapper[4632]: E0313 10:07:52.211266 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:52.711256419 +0000 UTC m=+246.733786552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.261050 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:07:52 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:07:52 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:07:52 crc kubenswrapper[4632]: healthz check failed Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.261132 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.311522 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:52 crc kubenswrapper[4632]: E0313 10:07:52.312198 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:52.812182357 +0000 UTC m=+246.834712490 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.362651 4632 ???:1] "http: TLS handshake error from 192.168.126.11:51492: no serving certificate available for the kubelet" Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.414469 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:52 crc kubenswrapper[4632]: E0313 10:07:52.414913 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:52.914897091 +0000 UTC m=+246.937427214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.515553 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:52 crc kubenswrapper[4632]: E0313 10:07:52.516017 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:53.016000182 +0000 UTC m=+247.038530315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.560678 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn" event={"ID":"a4c5a906-1d0b-40e7-aa4f-bc945e9f1f59","Type":"ContainerStarted","Data":"7a5bcf9f212f5a2b310a7efd17456057f682cd2aa737d0333fa5a860860efb95"} Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.581923 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg" event={"ID":"528d3aa9-10bf-4029-a4d2-85768264fde8","Type":"ContainerStarted","Data":"da165dd4ae62fa2ea1c777c8125fcd4bfe4bd102f508da056f1a058689bba35e"} Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.597620 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp" event={"ID":"22993daf-2b32-4be5-8eb7-f9194e903d62","Type":"ContainerStarted","Data":"448dab19b144eab73c92a38dec4aa7a678df60a74c15d3992f8cfe45580486ed"} Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.623058 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rvkzz" event={"ID":"9cd4c3b3-6825-4bd2-97a5-330f91782d4b","Type":"ContainerStarted","Data":"ba6ebfd612cba7e001fb3e96528df7785ea1937329f26e6ca8b2ffa9099a0267"} Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.625763 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:52 crc kubenswrapper[4632]: E0313 10:07:52.628199 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:53.12818158 +0000 UTC m=+247.150711713 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.638286 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pvwll" event={"ID":"2332524f-f990-4ef2-90b3-8b90c389d873","Type":"ContainerStarted","Data":"d31f3a76795fd3df921747cc5a2960017c10149aa7e4a878019746d93c54cc18"} Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.653901 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9rrcn" podStartSLOduration=192.653869351 podStartE2EDuration="3m12.653869351s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:52.651456891 +0000 UTC m=+246.673987044" watchObservedRunningTime="2026-03-13 10:07:52.653869351 +0000 UTC m=+246.676399504" Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.663477 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt" event={"ID":"2493565c-3af9-4edf-a2f3-8a7a501e9305","Type":"ContainerStarted","Data":"1afce30866e9f140a28fff0a431440447ec493a74037d1bf14031a4607047deb"} Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.707712 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr" event={"ID":"8be807d4-9bc2-41a1-b69f-1b0af031b5ab","Type":"ContainerStarted","Data":"99faffe7be689eda64884a72770d491d25afa5bde0cae33d38c8950ba26f6a7f"} Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.737691 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:52 crc kubenswrapper[4632]: E0313 10:07:52.739403 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:53.239386886 +0000 UTC m=+247.261917019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.765769 4632 generic.go:334] "Generic (PLEG): container finished" podID="f660255f-8f78-4876-973d-db58f2ee7020" containerID="8784ad9d9edf3b0167053e3050a765f4ed5e301efeda89b7117d2a334a743a5e" exitCode=0 Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.765881 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" event={"ID":"f660255f-8f78-4876-973d-db58f2ee7020","Type":"ContainerDied","Data":"8784ad9d9edf3b0167053e3050a765f4ed5e301efeda89b7117d2a334a743a5e"} Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.787673 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg" podStartSLOduration=193.787638075 podStartE2EDuration="3m13.787638075s" podCreationTimestamp="2026-03-13 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:52.750020782 +0000 UTC m=+246.772550915" watchObservedRunningTime="2026-03-13 10:07:52.787638075 +0000 UTC m=+246.810168208" Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.817099 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" event={"ID":"49c520f1-fb05-48ca-8435-1985ce668451","Type":"ContainerStarted","Data":"35b32e739ccce4a6f84a62ef541fb840a3cf0ce2a60fb788f618073e6f79bd60"} Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.817159 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.833894 4632 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zgxcd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.833977 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podUID="49c520f1-fb05-48ca-8435-1985ce668451" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.843740 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:52 crc kubenswrapper[4632]: E0313 10:07:52.849802 4632 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:53.349780766 +0000 UTC m=+247.372310899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.865897 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zh465" event={"ID":"779b2915-e0d0-4e90-9c6d-af28f555fd7b","Type":"ContainerStarted","Data":"b1cb38909f02af1a3486fbdd45dec5e71ff2321e667af9738833b913d588a2ea"} Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.919726 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rvkzz" podStartSLOduration=192.919692755 podStartE2EDuration="3m12.919692755s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:52.848060192 +0000 UTC m=+246.870590335" watchObservedRunningTime="2026-03-13 10:07:52.919692755 +0000 UTC m=+246.942222888" Mar 13 10:07:52 crc kubenswrapper[4632]: I0313 10:07:52.950818 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:52 crc kubenswrapper[4632]: E0313 10:07:52.952793 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:53.452772826 +0000 UTC m=+247.475302959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.056263 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-k955n" event={"ID":"f781cb50-1e1b-4586-ba59-b204b1a6beec","Type":"ContainerStarted","Data":"883d07c6c6d265d059ef0c146e608f387ce1adfcf7af4157abcb8ce50bb4dff3"} Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.057551 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:53 crc kubenswrapper[4632]: E0313 10:07:53.057899 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:53.557884489 +0000 UTC m=+247.580414612 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.082356 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff" event={"ID":"bbb27a61-7407-4cd7-84df-4b66fbdcf82d","Type":"ContainerStarted","Data":"30e6bdf5c51b5f0b537bb2a7b181b4e9345f94e30109bbcf3545aa94fd680a70"} Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.098002 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-946gp" podStartSLOduration=193.097985283 podStartE2EDuration="3m13.097985283s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:53.095570564 +0000 UTC m=+247.118100697" watchObservedRunningTime="2026-03-13 10:07:53.097985283 +0000 UTC m=+247.120515416" Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.117262 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9wxcs" event={"ID":"96067558-b20b-411c-b1af-b8fbb61df8f7","Type":"ContainerStarted","Data":"cee136f6a40b1d254350a9976840583417034cd82c227204e53e60b7c5c6eca8"} Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.144116 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" 
event={"ID":"797176c6-dd56-48d6-8004-ff1dd5353a50","Type":"ContainerStarted","Data":"0fd5f07ae3c28f8c24cc66a585de93acc08f170fb621bbeb190cd66596980871"} Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.145332 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.159531 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:53 crc kubenswrapper[4632]: E0313 10:07:53.160340 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:53.660323948 +0000 UTC m=+247.682854081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.160840 4632 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2n99d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.160892 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" podUID="797176c6-dd56-48d6-8004-ff1dd5353a50" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.161753 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6" event={"ID":"e884f4d1-d4f3-4ef7-b1f2-c39ea2eee50e","Type":"ContainerStarted","Data":"e1f80f4408c7077e4880cbeb6be74d125578b1b416485e0937117937025c127c"} Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.194197 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc" event={"ID":"fdaf1cb9-0ab4-477f-bbd5-d8d33ab56f73","Type":"ContainerStarted","Data":"42d99267513fb3271ca67710e3271f3c0526e8ee8db64d915dd6b44a1763398c"} Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.218038 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:07:53 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:07:53 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:07:53 crc kubenswrapper[4632]: 
healthz check failed Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.218093 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.221725 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.221767 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.261695 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:53 crc kubenswrapper[4632]: E0313 10:07:53.269255 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:53.769239368 +0000 UTC m=+247.791769501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.304141 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.397799 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:53 crc kubenswrapper[4632]: E0313 10:07:53.402222 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:53.902185476 +0000 UTC m=+247.924715619 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.402785 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:53 crc kubenswrapper[4632]: E0313 10:07:53.405025 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:53.905012893 +0000 UTC m=+247.927543026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.491551 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pvwll" podStartSLOduration=192.491534529 podStartE2EDuration="3m12.491534529s" podCreationTimestamp="2026-03-13 10:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:53.211457976 +0000 UTC m=+247.233988109" watchObservedRunningTime="2026-03-13 10:07:53.491534529 +0000 UTC m=+247.514064662" Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.537217 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:53 crc kubenswrapper[4632]: E0313 10:07:53.537561 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:54.037542113 +0000 UTC m=+248.060072246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.640480 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:53 crc kubenswrapper[4632]: E0313 10:07:53.641093 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:54.141080594 +0000 UTC m=+248.163610727 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.641662 4632 ???:1] "http: TLS handshake error from 192.168.126.11:51506: no serving certificate available for the kubelet" Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.719007 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" podStartSLOduration=192.718988555 podStartE2EDuration="3m12.718988555s" podCreationTimestamp="2026-03-13 10:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:53.707095164 +0000 UTC m=+247.729625297" watchObservedRunningTime="2026-03-13 10:07:53.718988555 +0000 UTC m=+247.741518678" Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.719245 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jrkwc" podStartSLOduration=193.71923961 podStartE2EDuration="3m13.71923961s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:53.543893862 +0000 UTC m=+247.566424005" watchObservedRunningTime="2026-03-13 10:07:53.71923961 +0000 UTC m=+247.741769743" Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.742163 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:53 crc kubenswrapper[4632]: E0313 10:07:53.742666 4632 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:54.242634464 +0000 UTC m=+248.265164737 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.742873 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:53 crc kubenswrapper[4632]: E0313 10:07:53.743482 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:54.243470172 +0000 UTC m=+248.266000305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.838458 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hb4c6" podStartSLOduration=193.838433919 podStartE2EDuration="3m13.838433919s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:53.830984538 +0000 UTC m=+247.853514671" watchObservedRunningTime="2026-03-13 10:07:53.838433919 +0000 UTC m=+247.860964052" Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.844308 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:53 crc kubenswrapper[4632]: E0313 10:07:53.844641 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:54.344620784 +0000 UTC m=+248.367150917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.885331 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zh465" podStartSLOduration=192.88531112 podStartE2EDuration="3m12.88531112s" podCreationTimestamp="2026-03-13 10:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:53.877221206 +0000 UTC m=+247.899751339" watchObservedRunningTime="2026-03-13 10:07:53.88531112 +0000 UTC m=+247.907841253" Mar 13 10:07:53 crc kubenswrapper[4632]: I0313 10:07:53.950084 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:53 crc kubenswrapper[4632]: E0313 10:07:53.950599 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:54.450557474 +0000 UTC m=+248.473087607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.053420 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:54 crc kubenswrapper[4632]: E0313 10:07:54.053589 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:54.553560955 +0000 UTC m=+248.576091088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.053691 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:54 crc kubenswrapper[4632]: E0313 10:07:54.072745 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:54.572718453 +0000 UTC m=+248.595248586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.091815 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9wxcs" podStartSLOduration=194.09178793 podStartE2EDuration="3m14.09178793s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:53.971234544 +0000 UTC m=+247.993764677" watchObservedRunningTime="2026-03-13 10:07:54.09178793 +0000 UTC m=+248.114318063" Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.091953 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podStartSLOduration=193.091932613 podStartE2EDuration="3m13.091932613s" podCreationTimestamp="2026-03-13 10:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:54.088602696 +0000 UTC m=+248.111132839" watchObservedRunningTime="2026-03-13 10:07:54.091932613 +0000 UTC m=+248.114462746" Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.158463 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:54 crc kubenswrapper[4632]: E0313 10:07:54.158892 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:54.658871452 +0000 UTC m=+248.681401585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.225120 4632 patch_prober.go:28] interesting pod/console-operator-58897d9998-sbtn5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.225201 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.241538 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:07:54 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:07:54 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:07:54 crc kubenswrapper[4632]: healthz check failed Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.241644 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.243834 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ndt" podStartSLOduration=194.243809575 podStartE2EDuration="3m14.243809575s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:54.241895506 +0000 UTC m=+248.264425639" watchObservedRunningTime="2026-03-13 10:07:54.243809575 +0000 UTC m=+248.266339708" Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.271304 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:54 crc kubenswrapper[4632]: E0313 10:07:54.271770 4632 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:54.771754022 +0000 UTC m=+248.794284155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.306879 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" event={"ID":"f660255f-8f78-4876-973d-db58f2ee7020","Type":"ContainerStarted","Data":"e98f0e8253db82d7fc1c628a628a0d9ea91c85c3796f3abe0d968983b3e782e2"} Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.307116 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.351856 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" event={"ID":"37df1143-69fc-4d13-a5d3-790a9d14814a","Type":"ContainerStarted","Data":"112e377136996e25010b35483b2e1f7104c5fa4408a57b127c0f29e9b4b1396a"} Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.365851 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-g2wxc" event={"ID":"58d59f3d-e656-4217-9472-62508a7ccc93","Type":"ContainerStarted","Data":"1fd32c57b88f0b8b4ead90868742916aabf06aaf3a5152f76aac591bcdbbd091"} Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.366519 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-g2wxc" Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.373018 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:54 crc kubenswrapper[4632]: E0313 10:07:54.374091 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:54.874073178 +0000 UTC m=+248.896603311 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.389444 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qcb4l" event={"ID":"4db028f0-524e-46fc-aa33-da38ed7b8fa6","Type":"ContainerStarted","Data":"b3bb037ac8508fee6cfb9112e0d5912db93ffd0e474f14540ca64e50c35d91a0"} Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.407160 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff" event={"ID":"bbb27a61-7407-4cd7-84df-4b66fbdcf82d","Type":"ContainerStarted","Data":"bf8af74d9b0f042318756359c4286a7a2e4768a4e8d05e51ae504c7fda6422fe"} Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.419265 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6fqf5" event={"ID":"c94773d8-a922-4778-b2ba-8937e9d6c19b","Type":"ContainerStarted","Data":"3fc5945ba70c7d6bff986b63e755d289379e65187e067a619b17e24a4f9716f1"} Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.436016 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9wxcs" event={"ID":"96067558-b20b-411c-b1af-b8fbb61df8f7","Type":"ContainerStarted","Data":"cd7a89e2368215d8104acc2802924935cdf3abebfe9ce85efe2cac7fa0fb0fa7"} Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.456450 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4hmjh" event={"ID":"e0e1f142-2930-4f9b-b851-f7f7df22676b","Type":"ContainerStarted","Data":"ecbcedb42472f525c93e094d11c0b45f162c4052cd44889cc6b3056b5b130d62"} Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.472862 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rntsr" podStartSLOduration=194.472828393 podStartE2EDuration="3m14.472828393s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:54.340728812 +0000 UTC m=+248.363258945" watchObservedRunningTime="2026-03-13 10:07:54.472828393 +0000 UTC m=+248.495358526" Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.475229 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:54 crc kubenswrapper[4632]: E0313 10:07:54.477094 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:54.977076059 +0000 UTC m=+248.999606392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.481147 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9" event={"ID":"ebf1040d-57dd-47ef-b839-6f78a7c5c75f","Type":"ContainerStarted","Data":"956edf235abcb661b9044b0baa5051ccc57d0211ecc18378cd6b766dc4edf9bd"}
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.482098 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9"
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.485310 4632 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tqbl9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.485378 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9" podUID="ebf1040d-57dd-47ef-b839-6f78a7c5c75f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.497785 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" event={"ID":"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47","Type":"ContainerStarted","Data":"da106490637a2286eb470c5eb4cdab3ab51dc9c3d415ff2d01e1800f284ee2d9"}
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.508076 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc" event={"ID":"4e100e6e-7259-4262-be47-9c2b5be7a53a","Type":"ContainerStarted","Data":"8686dc189770f7b6373b29069a26380efb52cc7fda6bcf92c2bb9d4fee9440fc"}
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.508135 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc" event={"ID":"4e100e6e-7259-4262-be47-9c2b5be7a53a","Type":"ContainerStarted","Data":"5558b8a7760fdc6f72ff7d0488173d63a3f337d120614ea0027a0eb87a92cc2d"}
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.508853 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc"
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.510326 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-68mjx" event={"ID":"946f5fcb-dde4-4784-965d-75a47187e703","Type":"ContainerStarted","Data":"7b1701334cdec6b285605c47da84fb005fe850e1c337e7faaa2c84d60edc5fe2"}
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.511574 4632 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zgxcd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body=
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.511617 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podUID="49c520f1-fb05-48ca-8435-1985ce668451" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused"
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.512419 4632 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2n99d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.512452 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" podUID="797176c6-dd56-48d6-8004-ff1dd5353a50" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused"
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.577499 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 10:07:54 crc kubenswrapper[4632]: E0313 10:07:54.578041 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:55.078021317 +0000 UTC m=+249.100551460 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.627828 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-99hff" podStartSLOduration=194.627807568 podStartE2EDuration="3m14.627807568s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:54.541405464 +0000 UTC m=+248.563935607" watchObservedRunningTime="2026-03-13 10:07:54.627807568 +0000 UTC m=+248.650337701"
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.682521 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:54 crc kubenswrapper[4632]: E0313 10:07:54.687073 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:55.1870568 +0000 UTC m=+249.209586933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.699167 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc" podStartSLOduration=193.699148356 podStartE2EDuration="3m13.699148356s" podCreationTimestamp="2026-03-13 10:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:54.681782593 +0000 UTC m=+248.704312726" watchObservedRunningTime="2026-03-13 10:07:54.699148356 +0000 UTC m=+248.721678489"
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.700135 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9" podStartSLOduration=193.700127135 podStartE2EDuration="3m13.700127135s" podCreationTimestamp="2026-03-13 10:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:54.629585794 +0000 UTC m=+248.652115937" watchObservedRunningTime="2026-03-13 10:07:54.700127135 +0000 UTC m=+248.722657268"
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.727366 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-4hmjh" podStartSLOduration=194.727341898 podStartE2EDuration="3m14.727341898s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:54.727099723 +0000 UTC m=+248.749629856" watchObservedRunningTime="2026-03-13 10:07:54.727341898 +0000 UTC m=+248.749872031"
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.784113 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 10:07:54 crc kubenswrapper[4632]: E0313 10:07:54.784502 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:55.284486778 +0000 UTC m=+249.307016911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.887656 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:54 crc kubenswrapper[4632]: E0313 10:07:54.888072 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:55.38805299 +0000 UTC m=+249.410583123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.981278 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qcb4l" podStartSLOduration=11.98125528 podStartE2EDuration="11.98125528s" podCreationTimestamp="2026-03-13 10:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:54.905612445 +0000 UTC m=+248.928142568" watchObservedRunningTime="2026-03-13 10:07:54.98125528 +0000 UTC m=+249.003785413"
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.983962 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jvh86"]
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.984929 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jvh86"
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.988222 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.988537 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 10:07:54 crc kubenswrapper[4632]: E0313 10:07:54.988701 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:55.488679791 +0000 UTC m=+249.511209924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.988752 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.988929 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kzjw\" (UniqueName: \"kubernetes.io/projected/bd46ae04-0610-4aa5-9385-dd45de66c5dd-kube-api-access-5kzjw\") pod \"community-operators-jvh86\" (UID: \"bd46ae04-0610-4aa5-9385-dd45de66c5dd\") " pod="openshift-marketplace/community-operators-jvh86"
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.989054 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd46ae04-0610-4aa5-9385-dd45de66c5dd-utilities\") pod \"community-operators-jvh86\" (UID: \"bd46ae04-0610-4aa5-9385-dd45de66c5dd\") " pod="openshift-marketplace/community-operators-jvh86"
Mar 13 10:07:54 crc kubenswrapper[4632]: E0313 10:07:54.989090 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:55.489077519 +0000 UTC m=+249.511607652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:54 crc kubenswrapper[4632]: I0313 10:07:54.989149 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd46ae04-0610-4aa5-9385-dd45de66c5dd-catalog-content\") pod \"community-operators-jvh86\" (UID: \"bd46ae04-0610-4aa5-9385-dd45de66c5dd\") " pod="openshift-marketplace/community-operators-jvh86"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.090106 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.090278 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kzjw\" (UniqueName: \"kubernetes.io/projected/bd46ae04-0610-4aa5-9385-dd45de66c5dd-kube-api-access-5kzjw\") pod \"community-operators-jvh86\" (UID: \"bd46ae04-0610-4aa5-9385-dd45de66c5dd\") " pod="openshift-marketplace/community-operators-jvh86"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.090337 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd46ae04-0610-4aa5-9385-dd45de66c5dd-utilities\") pod \"community-operators-jvh86\" (UID: \"bd46ae04-0610-4aa5-9385-dd45de66c5dd\") " pod="openshift-marketplace/community-operators-jvh86"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.090355 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd46ae04-0610-4aa5-9385-dd45de66c5dd-catalog-content\") pod \"community-operators-jvh86\" (UID: \"bd46ae04-0610-4aa5-9385-dd45de66c5dd\") " pod="openshift-marketplace/community-operators-jvh86"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.090826 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd46ae04-0610-4aa5-9385-dd45de66c5dd-catalog-content\") pod \"community-operators-jvh86\" (UID: \"bd46ae04-0610-4aa5-9385-dd45de66c5dd\") " pod="openshift-marketplace/community-operators-jvh86"
Mar 13 10:07:55 crc kubenswrapper[4632]: E0313 10:07:55.090901 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:55.590886115 +0000 UTC m=+249.613416248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.091377 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd46ae04-0610-4aa5-9385-dd45de66c5dd-utilities\") pod \"community-operators-jvh86\" (UID: \"bd46ae04-0610-4aa5-9385-dd45de66c5dd\") " pod="openshift-marketplace/community-operators-jvh86"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.115864 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" podStartSLOduration=196.115845321 podStartE2EDuration="3m16.115845321s" podCreationTimestamp="2026-03-13 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:55.076411261 +0000 UTC m=+249.098941394" watchObservedRunningTime="2026-03-13 10:07:55.115845321 +0000 UTC m=+249.138375454"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.127459 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jvh86"]
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.194533 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:55 crc kubenswrapper[4632]: E0313 10:07:55.194916 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:55.694901936 +0000 UTC m=+249.717432069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.202474 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:07:55 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld
Mar 13 10:07:55 crc kubenswrapper[4632]: [+]process-running ok
Mar 13 10:07:55 crc kubenswrapper[4632]: healthz check failed
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.202546 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.228370 4632 ???:1] "http: TLS handshake error from 192.168.126.11:53496: no serving certificate available for the kubelet"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.237661 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kzjw\" (UniqueName: \"kubernetes.io/projected/bd46ae04-0610-4aa5-9385-dd45de66c5dd-kube-api-access-5kzjw\") pod \"community-operators-jvh86\" (UID: \"bd46ae04-0610-4aa5-9385-dd45de66c5dd\") " pod="openshift-marketplace/community-operators-jvh86"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.296324 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 10:07:55 crc kubenswrapper[4632]: E0313 10:07:55.297212 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:55.797177592 +0000 UTC m=+249.819707725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.316058 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podStartSLOduration=196.316033054 podStartE2EDuration="3m16.316033054s" podCreationTimestamp="2026-03-13 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:55.307760176 +0000 UTC m=+249.330290309" watchObservedRunningTime="2026-03-13 10:07:55.316033054 +0000 UTC m=+249.338563187"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.318864 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xd455"]
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.320158 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xd455"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.333173 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jvh86"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.401928 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.402028 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh6g9\" (UniqueName: \"kubernetes.io/projected/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-kube-api-access-kh6g9\") pod \"community-operators-xd455\" (UID: \"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8\") " pod="openshift-marketplace/community-operators-xd455"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.402104 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-utilities\") pod \"community-operators-xd455\" (UID: \"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8\") " pod="openshift-marketplace/community-operators-xd455"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.402210 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-catalog-content\") pod \"community-operators-xd455\" (UID: \"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8\") " pod="openshift-marketplace/community-operators-xd455"
Mar 13 10:07:55 crc kubenswrapper[4632]: E0313 10:07:55.402737 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:55.902717453 +0000 UTC m=+249.925247586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.424383 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xd455"]
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.482640 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.484122 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.508908 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.509462 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-catalog-content\") pod \"community-operators-xd455\" (UID: \"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8\") " pod="openshift-marketplace/community-operators-xd455"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.509567 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh6g9\" (UniqueName: \"kubernetes.io/projected/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-kube-api-access-kh6g9\") pod \"community-operators-xd455\" (UID: \"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8\") " pod="openshift-marketplace/community-operators-xd455"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.509637 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-utilities\") pod \"community-operators-xd455\" (UID: \"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8\") " pod="openshift-marketplace/community-operators-xd455"
Mar 13 10:07:55 crc kubenswrapper[4632]: E0313 10:07:55.510243 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:56.010189534 +0000 UTC m=+250.032719667 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.511122 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-catalog-content\") pod \"community-operators-xd455\" (UID: \"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8\") " pod="openshift-marketplace/community-operators-xd455"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.511476 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-utilities\") pod \"community-operators-xd455\" (UID: \"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8\") " pod="openshift-marketplace/community-operators-xd455"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.511495 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-g2wxc" podStartSLOduration=12.511485401 podStartE2EDuration="12.511485401s" podCreationTimestamp="2026-03-13 10:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:55.443763576 +0000 UTC m=+249.466293709" watchObservedRunningTime="2026-03-13 10:07:55.511485401 +0000 UTC m=+249.534015534"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.511617 4632 patch_prober.go:28] interesting pod/console-operator-58897d9998-sbtn5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.511667 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.512387 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p8wjg"]
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.513150 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-68mjx" podStartSLOduration=194.513141794 podStartE2EDuration="3m14.513141794s" podCreationTimestamp="2026-03-13 10:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:55.507895527 +0000 UTC m=+249.530425670" watchObservedRunningTime="2026-03-13 10:07:55.513141794 +0000 UTC m=+249.535671927"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.513598 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p8wjg"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.517756 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-p9gp2"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.518319 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-p9gp2"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.530898 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc"
Mar 13 10:07:55 crc kubenswrapper[4632]: W0313 10:07:55.541552 4632 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object
Mar 13 10:07:55 crc kubenswrapper[4632]: E0313 10:07:55.541634 4632 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.570855 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh6g9\" (UniqueName: \"kubernetes.io/projected/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-kube-api-access-kh6g9\") pod \"community-operators-xd455\" (UID: \"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8\") " pod="openshift-marketplace/community-operators-xd455"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.578889 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p8wjg"]
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.599237 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6fqf5" event={"ID":"c94773d8-a922-4778-b2ba-8937e9d6c19b","Type":"ContainerStarted","Data":"7322cc652a565a3dbc9f2961b94879fa1c1cefc9d2f67a83d4d25ee10d30123a"}
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.611015 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.611076 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlsv8\" (UniqueName: \"kubernetes.io/projected/b11a7dff-bf08-44c3-b4f4-923119c13717-kube-api-access-wlsv8\") pod \"certified-operators-p8wjg\" (UID: \"b11a7dff-bf08-44c3-b4f4-923119c13717\") " pod="openshift-marketplace/certified-operators-p8wjg"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.611162 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11a7dff-bf08-44c3-b4f4-923119c13717-utilities\") pod \"certified-operators-p8wjg\" (UID: \"b11a7dff-bf08-44c3-b4f4-923119c13717\") " pod="openshift-marketplace/certified-operators-p8wjg"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.611187 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11a7dff-bf08-44c3-b4f4-923119c13717-catalog-content\") pod \"certified-operators-p8wjg\" (UID: \"b11a7dff-bf08-44c3-b4f4-923119c13717\") " pod="openshift-marketplace/certified-operators-p8wjg"
Mar 13 10:07:55 crc kubenswrapper[4632]: E0313 10:07:55.611586 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:56.111572251 +0000 UTC m=+250.134102384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.613299 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4hmjh" event={"ID":"e0e1f142-2930-4f9b-b851-f7f7df22676b","Type":"ContainerStarted","Data":"bc5e045ba766bfb2723324f723497a7fd48f730201e476208c152ec22d0fe530"}
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.632149 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.632200 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.632226 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.632275 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.646823 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-g2wxc" event={"ID":"58d59f3d-e656-4217-9472-62508a7ccc93","Type":"ContainerStarted","Data":"d2664a1ff1c3241483169e35dd71718d3944b4647d3547d3172180d62f38666f"}
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.652335 4632 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2n99d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.652365 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" podUID="797176c6-dd56-48d6-8004-ff1dd5353a50" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.677208 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vrbc"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.691867 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8z668"]
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.699266 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8z668"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.704843 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.706746 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xd455"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.715994 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.716437 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9845f384-2720-4d6a-aa73-1e66e30f7c2c-catalog-content\") pod \"certified-operators-8z668\" (UID: \"9845f384-2720-4d6a-aa73-1e66e30f7c2c\") " pod="openshift-marketplace/certified-operators-8z668"
Mar 13 10:07:55 crc kubenswrapper[4632]: E0313 10:07:55.716784 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:56.216751145 +0000 UTC m=+250.239281468 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.717027 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.717074 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlsv8\" (UniqueName: \"kubernetes.io/projected/b11a7dff-bf08-44c3-b4f4-923119c13717-kube-api-access-wlsv8\") pod \"certified-operators-p8wjg\" (UID: \"b11a7dff-bf08-44c3-b4f4-923119c13717\") " pod="openshift-marketplace/certified-operators-p8wjg"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.717162 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdp56\" (UniqueName: \"kubernetes.io/projected/9845f384-2720-4d6a-aa73-1e66e30f7c2c-kube-api-access-sdp56\") pod \"certified-operators-8z668\" (UID: \"9845f384-2720-4d6a-aa73-1e66e30f7c2c\") " pod="openshift-marketplace/certified-operators-8z668"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.717391 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9845f384-2720-4d6a-aa73-1e66e30f7c2c-utilities\") pod \"certified-operators-8z668\" (UID: \"9845f384-2720-4d6a-aa73-1e66e30f7c2c\") " pod="openshift-marketplace/certified-operators-8z668"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.717620 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11a7dff-bf08-44c3-b4f4-923119c13717-utilities\") pod \"certified-operators-p8wjg\" (UID: \"b11a7dff-bf08-44c3-b4f4-923119c13717\") " pod="openshift-marketplace/certified-operators-p8wjg"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.717672 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11a7dff-bf08-44c3-b4f4-923119c13717-catalog-content\") pod \"certified-operators-p8wjg\" (UID: \"b11a7dff-bf08-44c3-b4f4-923119c13717\") " pod="openshift-marketplace/certified-operators-p8wjg"
Mar 13 10:07:55 crc kubenswrapper[4632]: E0313 10:07:55.736826 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:56.236797453 +0000 UTC m=+250.259327576 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.738625 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11a7dff-bf08-44c3-b4f4-923119c13717-utilities\") pod \"certified-operators-p8wjg\" (UID: \"b11a7dff-bf08-44c3-b4f4-923119c13717\") " pod="openshift-marketplace/certified-operators-p8wjg"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.738833 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11a7dff-bf08-44c3-b4f4-923119c13717-catalog-content\") pod \"certified-operators-p8wjg\" (UID: \"b11a7dff-bf08-44c3-b4f4-923119c13717\") " pod="openshift-marketplace/certified-operators-p8wjg"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.798224 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8z668"]
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.819162 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.819431 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9845f384-2720-4d6a-aa73-1e66e30f7c2c-utilities\") pod \"certified-operators-8z668\" (UID: \"9845f384-2720-4d6a-aa73-1e66e30f7c2c\") " pod="openshift-marketplace/certified-operators-8z668"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.819520 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9845f384-2720-4d6a-aa73-1e66e30f7c2c-catalog-content\") pod \"certified-operators-8z668\" (UID: \"9845f384-2720-4d6a-aa73-1e66e30f7c2c\") " pod="openshift-marketplace/certified-operators-8z668"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.819636 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdp56\" (UniqueName: \"kubernetes.io/projected/9845f384-2720-4d6a-aa73-1e66e30f7c2c-kube-api-access-sdp56\") pod \"certified-operators-8z668\" (UID: \"9845f384-2720-4d6a-aa73-1e66e30f7c2c\") " pod="openshift-marketplace/certified-operators-8z668"
Mar 13 10:07:55 crc kubenswrapper[4632]: E0313 10:07:55.820103 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:56.320084823 +0000 UTC m=+250.342614956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.820558 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9845f384-2720-4d6a-aa73-1e66e30f7c2c-utilities\") pod \"certified-operators-8z668\" (UID: \"9845f384-2720-4d6a-aa73-1e66e30f7c2c\") " pod="openshift-marketplace/certified-operators-8z668"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.820831 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9845f384-2720-4d6a-aa73-1e66e30f7c2c-catalog-content\") pod \"certified-operators-8z668\" (UID: \"9845f384-2720-4d6a-aa73-1e66e30f7c2c\") " pod="openshift-marketplace/certified-operators-8z668"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.840912 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlsv8\" (UniqueName: \"kubernetes.io/projected/b11a7dff-bf08-44c3-b4f4-923119c13717-kube-api-access-wlsv8\") pod \"certified-operators-p8wjg\" (UID: \"b11a7dff-bf08-44c3-b4f4-923119c13717\") " pod="openshift-marketplace/certified-operators-p8wjg"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.888865 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdp56\" (UniqueName: \"kubernetes.io/projected/9845f384-2720-4d6a-aa73-1e66e30f7c2c-kube-api-access-sdp56\") pod \"certified-operators-8z668\" (UID: \"9845f384-2720-4d6a-aa73-1e66e30f7c2c\") " pod="openshift-marketplace/certified-operators-8z668"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.924400 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:55 crc kubenswrapper[4632]: E0313 10:07:55.924837 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:56.424819598 +0000 UTC m=+250.447349721 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.984187 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zn7mn"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.985668 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-zn7mn"
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.994542 4632 patch_prober.go:28] interesting pod/console-f9d7485db-zn7mn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Mar 13 10:07:55 crc kubenswrapper[4632]: I0313 10:07:55.994625 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zn7mn" podUID="f5a50074-5531-442f-a0e9-0578f15634c1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused"
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.026759 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 10:07:56 crc kubenswrapper[4632]: E0313 10:07:56.028898 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:56.528869359 +0000 UTC m=+250.551399492 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.129653 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:56 crc kubenswrapper[4632]: E0313 10:07:56.130114 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:56.630102254 +0000 UTC m=+250.652632387 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.191380 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-t9vht"
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.197172 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:07:56 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld
Mar 13 10:07:56 crc kubenswrapper[4632]: [+]process-running ok
Mar 13 10:07:56 crc kubenswrapper[4632]: healthz check failed
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.197282 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.205105 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-6fqf5" podStartSLOduration=196.205069095 podStartE2EDuration="3m16.205069095s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:07:56.18360821 +0000 UTC m=+250.206138353" watchObservedRunningTime="2026-03-13 10:07:56.205069095 +0000 UTC m=+250.227599228"
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.235928 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 10:07:56 crc kubenswrapper[4632]: E0313 10:07:56.249247 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:56.749211091 +0000 UTC m=+250.771741224 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.396064 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:56 crc kubenswrapper[4632]: E0313 10:07:56.396810 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:56.896796686 +0000 UTC m=+250.919326819 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.503450 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 10:07:56 crc kubenswrapper[4632]: E0313 10:07:56.504075 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:57.004052533 +0000 UTC m=+251.026582666 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.580464 4632 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2n99d container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.580501 4632 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2n99d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.580551 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" podUID="797176c6-dd56-48d6-8004-ff1dd5353a50" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused"
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.580581 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" podUID="797176c6-dd56-48d6-8004-ff1dd5353a50" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused"
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.607227 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:56 crc kubenswrapper[4632]: E0313 10:07:56.607707 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:57.107689856 +0000 UTC m=+251.130219989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.691158 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" event={"ID":"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47","Type":"ContainerStarted","Data":"3bf30655a86ac973647b131da3b6f942b8afa26f8c24f35c2fd8e55e5b065db4"}
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.709151 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 10:07:56 crc kubenswrapper[4632]: E0313 10:07:56.710005 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:57.209928491 +0000 UTC m=+251.232458614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.785822 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-sbtn5"
Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.814292 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:07:56 crc kubenswrapper[4632]: E0313 10:07:56.820410 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:57.320385633 +0000 UTC m=+251.342915966 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.865089 4632 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/certified-operators-p8wjg" secret="" err="failed to sync secret cache: timed out waiting for the condition" Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.865241 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p8wjg" Mar 13 10:07:56 crc kubenswrapper[4632]: I0313 10:07:56.917613 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:56 crc kubenswrapper[4632]: E0313 10:07:56.918791 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:57.418769239 +0000 UTC m=+251.441299372 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.023552 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:57 crc kubenswrapper[4632]: E0313 10:07:57.024042 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:57.524030005 +0000 UTC m=+251.546560138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.093667 4632 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/certified-operators-8z668" secret="" err="failed to sync secret cache: timed out waiting for the condition" Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.093758 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8z668" Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.127029 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.132559 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:57 crc kubenswrapper[4632]: E0313 10:07:57.133968 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:57.633920625 +0000 UTC m=+251.656450758 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.192489 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jvh86"] Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.202687 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:07:57 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:07:57 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:07:57 crc kubenswrapper[4632]: healthz check failed Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.202777 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.235397 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:57 crc kubenswrapper[4632]: E0313 10:07:57.235887 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:57.735868683 +0000 UTC m=+251.758398816 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.266468 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xd455"] Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.340174 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:57 crc kubenswrapper[4632]: E0313 10:07:57.340663 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:57.84063984 +0000 UTC m=+251.863169963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.466027 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:57 crc kubenswrapper[4632]: E0313 10:07:57.467180 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:57.967155268 +0000 UTC m=+251.989685401 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.538069 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.569706 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:57 crc kubenswrapper[4632]: E0313 10:07:57.570169 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:58.070153837 +0000 UTC m=+252.092683960 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.672088 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:57 crc kubenswrapper[4632]: E0313 10:07:57.672674 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:58.172661177 +0000 UTC m=+252.195191310 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.699422 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xd455" event={"ID":"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8","Type":"ContainerStarted","Data":"33259f14f07cee3a1d7261a44a8f74cbd0957ccef81016b9631c3a0a7ccd4085"} Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.701247 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvh86" event={"ID":"bd46ae04-0610-4aa5-9385-dd45de66c5dd","Type":"ContainerStarted","Data":"c1e7366f3326cfd08308453ff8a94a3f8d3ce8ebc6a33b2bfafadd960643927e"} Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.774204 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:57 crc kubenswrapper[4632]: E0313 10:07:57.774387 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:58.274360761 +0000 UTC m=+252.296890894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.774642 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:57 crc kubenswrapper[4632]: E0313 10:07:57.775627 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:58.275616657 +0000 UTC m=+252.298147000 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.813119 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-txp2w"] Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.814593 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.862716 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.877594 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:57 crc kubenswrapper[4632]: E0313 10:07:57.878394 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:58.378377642 +0000 UTC m=+252.400907775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.955382 4632 patch_prober.go:28] interesting pod/apiserver-76f77b778f-p9gp2 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 13 10:07:57 crc kubenswrapper[4632]: [+]log ok Mar 13 10:07:57 crc kubenswrapper[4632]: [+]etcd ok Mar 13 10:07:57 crc kubenswrapper[4632]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 13 10:07:57 crc kubenswrapper[4632]: [+]poststarthook/generic-apiserver-start-informers ok Mar 13 10:07:57 crc kubenswrapper[4632]: [+]poststarthook/max-in-flight-filter ok Mar 13 10:07:57 crc kubenswrapper[4632]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 13 10:07:57 crc kubenswrapper[4632]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 13 10:07:57 crc kubenswrapper[4632]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 13 10:07:57 crc kubenswrapper[4632]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Mar 13 10:07:57 crc kubenswrapper[4632]: [+]poststarthook/project.openshift.io-projectcache ok Mar 13 10:07:57 crc kubenswrapper[4632]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 13 10:07:57 crc kubenswrapper[4632]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Mar 13 10:07:57 crc kubenswrapper[4632]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 13 10:07:57 crc kubenswrapper[4632]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 13 10:07:57 crc kubenswrapper[4632]: livez check failed Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.955468 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" podUID="37df1143-69fc-4d13-a5d3-790a9d14814a" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.985040 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5jx8\" (UniqueName: \"kubernetes.io/projected/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-kube-api-access-z5jx8\") pod \"redhat-marketplace-txp2w\" (UID: \"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87\") " pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.985101 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.985182 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-utilities\") pod \"redhat-marketplace-txp2w\" (UID: \"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87\") " pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:07:57 crc kubenswrapper[4632]: I0313 10:07:57.985205 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-catalog-content\") pod \"redhat-marketplace-txp2w\" (UID: \"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87\") " pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:07:57 crc kubenswrapper[4632]: E0313 10:07:57.985824 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:58.485808193 +0000 UTC m=+252.508338326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.028839 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-txp2w"] Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.141436 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.141703 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-utilities\") pod \"redhat-marketplace-txp2w\" (UID: \"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87\") " pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.141730 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-catalog-content\") pod \"redhat-marketplace-txp2w\" (UID: \"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87\") " pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.141797 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5jx8\" (UniqueName: \"kubernetes.io/projected/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-kube-api-access-z5jx8\") pod \"redhat-marketplace-txp2w\" (UID: \"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87\") " pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:07:58 crc kubenswrapper[4632]: E0313 10:07:58.142279 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-13 10:07:58.642265628 +0000 UTC m=+252.664795761 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.143127 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-catalog-content\") pod \"redhat-marketplace-txp2w\" (UID: \"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87\") " pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.143442 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-utilities\") pod \"redhat-marketplace-txp2w\" (UID: \"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87\") " pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.168693 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t6bkt"] Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.169797 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6bkt" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.196154 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:07:58 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:07:58 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:07:58 crc kubenswrapper[4632]: healthz check failed Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.196217 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.232646 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6bkt"] Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.239964 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5jx8\" (UniqueName: \"kubernetes.io/projected/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-kube-api-access-z5jx8\") pod \"redhat-marketplace-txp2w\" (UID: \"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87\") " pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.243211 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:58 
crc kubenswrapper[4632]: E0313 10:07:58.243890 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:58.743871679 +0000 UTC m=+252.766401812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.297284 4632 ???:1] "http: TLS handshake error from 192.168.126.11:53504: no serving certificate available for the kubelet" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.345060 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z2gc7"] Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.350882 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:58 crc kubenswrapper[4632]: E0313 10:07:58.371137 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:58.871103111 +0000 UTC m=+252.893633254 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:58 crc kubenswrapper[4632]: E0313 10:07:58.386769 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:58.886752878 +0000 UTC m=+252.909283011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.386350 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.386868 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-utilities\") pod \"redhat-marketplace-t6bkt\" (UID: \"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e\") " pod="openshift-marketplace/redhat-marketplace-t6bkt" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.386910 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-catalog-content\") pod \"redhat-marketplace-t6bkt\" (UID: \"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e\") " pod="openshift-marketplace/redhat-marketplace-t6bkt" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.387065 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc6ff\" (UniqueName: \"kubernetes.io/projected/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-kube-api-access-bc6ff\") pod \"redhat-marketplace-t6bkt\" (UID: \"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e\") " pod="openshift-marketplace/redhat-marketplace-t6bkt" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.397452 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z2gc7" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.423007 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.507213 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.509600 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z2gc7"] Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.509828 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.511258 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc6ff\" (UniqueName: \"kubernetes.io/projected/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-kube-api-access-bc6ff\") pod \"redhat-marketplace-t6bkt\" (UID: \"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e\") " pod="openshift-marketplace/redhat-marketplace-t6bkt" Mar 13 10:07:58 crc kubenswrapper[4632]: E0313 10:07:58.511298 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:59.011270535 +0000 UTC m=+253.033800668 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.511341 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a110c276-8516-4f9e-a6af-d6837cd0f387-catalog-content\") pod \"redhat-operators-z2gc7\" (UID: \"a110c276-8516-4f9e-a6af-d6837cd0f387\") " pod="openshift-marketplace/redhat-operators-z2gc7" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.511494 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a110c276-8516-4f9e-a6af-d6837cd0f387-utilities\") pod \"redhat-operators-z2gc7\" (UID: \"a110c276-8516-4f9e-a6af-d6837cd0f387\") " pod="openshift-marketplace/redhat-operators-z2gc7" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.511546 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.511576 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-utilities\") pod \"redhat-marketplace-t6bkt\" (UID: \"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e\") " pod="openshift-marketplace/redhat-marketplace-t6bkt" Mar 13 
10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.511599 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfdk8\" (UniqueName: \"kubernetes.io/projected/a110c276-8516-4f9e-a6af-d6837cd0f387-kube-api-access-tfdk8\") pod \"redhat-operators-z2gc7\" (UID: \"a110c276-8516-4f9e-a6af-d6837cd0f387\") " pod="openshift-marketplace/redhat-operators-z2gc7" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.511617 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-catalog-content\") pod \"redhat-marketplace-t6bkt\" (UID: \"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e\") " pod="openshift-marketplace/redhat-marketplace-t6bkt" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.512229 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-catalog-content\") pod \"redhat-marketplace-t6bkt\" (UID: \"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e\") " pod="openshift-marketplace/redhat-marketplace-t6bkt" Mar 13 10:07:58 crc kubenswrapper[4632]: E0313 10:07:58.512519 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:59.01251024 +0000 UTC m=+253.035040373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.512807 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-utilities\") pod \"redhat-marketplace-t6bkt\" (UID: \"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e\") " pod="openshift-marketplace/redhat-marketplace-t6bkt" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.592631 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xr5l9"] Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.614428 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xr5l9" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.615330 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.615524 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a110c276-8516-4f9e-a6af-d6837cd0f387-catalog-content\") pod \"redhat-operators-z2gc7\" (UID: \"a110c276-8516-4f9e-a6af-d6837cd0f387\") " pod="openshift-marketplace/redhat-operators-z2gc7" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.615595 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a110c276-8516-4f9e-a6af-d6837cd0f387-utilities\") pod \"redhat-operators-z2gc7\" (UID: \"a110c276-8516-4f9e-a6af-d6837cd0f387\") " pod="openshift-marketplace/redhat-operators-z2gc7" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.615626 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfdk8\" (UniqueName: \"kubernetes.io/projected/a110c276-8516-4f9e-a6af-d6837cd0f387-kube-api-access-tfdk8\") pod \"redhat-operators-z2gc7\" (UID: \"a110c276-8516-4f9e-a6af-d6837cd0f387\") " pod="openshift-marketplace/redhat-operators-z2gc7" Mar 13 10:07:58 crc kubenswrapper[4632]: E0313 10:07:58.616018 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:59.116002981 +0000 UTC m=+253.138533114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.616433 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a110c276-8516-4f9e-a6af-d6837cd0f387-catalog-content\") pod \"redhat-operators-z2gc7\" (UID: \"a110c276-8516-4f9e-a6af-d6837cd0f387\") " pod="openshift-marketplace/redhat-operators-z2gc7" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.618001 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a110c276-8516-4f9e-a6af-d6837cd0f387-utilities\") pod \"redhat-operators-z2gc7\" (UID: \"a110c276-8516-4f9e-a6af-d6837cd0f387\") " pod="openshift-marketplace/redhat-operators-z2gc7" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.681496 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xr5l9"] Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.722350 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc6ff\" (UniqueName: \"kubernetes.io/projected/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-kube-api-access-bc6ff\") pod \"redhat-marketplace-t6bkt\" (UID: \"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e\") " pod="openshift-marketplace/redhat-marketplace-t6bkt" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.723658 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87965e39-b879-4e26-9c8b-b78068c52aa0-utilities\") pod \"redhat-operators-xr5l9\" (UID: \"87965e39-b879-4e26-9c8b-b78068c52aa0\") " pod="openshift-marketplace/redhat-operators-xr5l9" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.723751 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hfz2\" (UniqueName: \"kubernetes.io/projected/87965e39-b879-4e26-9c8b-b78068c52aa0-kube-api-access-6hfz2\") pod \"redhat-operators-xr5l9\" (UID: \"87965e39-b879-4e26-9c8b-b78068c52aa0\") " pod="openshift-marketplace/redhat-operators-xr5l9" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.723778 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87965e39-b879-4e26-9c8b-b78068c52aa0-catalog-content\") pod \"redhat-operators-xr5l9\" (UID: \"87965e39-b879-4e26-9c8b-b78068c52aa0\") " pod="openshift-marketplace/redhat-operators-xr5l9" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.723824 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:58 crc kubenswrapper[4632]: E0313 10:07:58.724205 4632 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:59.224191996 +0000 UTC m=+253.246722129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.751672 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfdk8\" (UniqueName: \"kubernetes.io/projected/a110c276-8516-4f9e-a6af-d6837cd0f387-kube-api-access-tfdk8\") pod \"redhat-operators-z2gc7\" (UID: \"a110c276-8516-4f9e-a6af-d6837cd0f387\") " pod="openshift-marketplace/redhat-operators-z2gc7" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.770764 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" event={"ID":"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47","Type":"ContainerStarted","Data":"ad8f763365560329a7f776a97375e1fdcc2fb2412beb2907eed4da851706f983"} Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.775708 4632 generic.go:334] "Generic (PLEG): container finished" podID="cd6e3c73-fbc1-4213-bbef-02dd2b0587f8" containerID="39c617653cdae12029a38a740d3aa9e4c08c056d9865caf4f87830fbf0817555" exitCode=0 Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.775773 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xd455" event={"ID":"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8","Type":"ContainerDied","Data":"39c617653cdae12029a38a740d3aa9e4c08c056d9865caf4f87830fbf0817555"} Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.805092 4632 generic.go:334] "Generic (PLEG): container finished" podID="bd46ae04-0610-4aa5-9385-dd45de66c5dd" containerID="eabb475f877c5898896f887fa631fab417c1e3579d0424b2b6c06f4278f091af" exitCode=0 Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.805140 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvh86" event={"ID":"bd46ae04-0610-4aa5-9385-dd45de66c5dd","Type":"ContainerDied","Data":"eabb475f877c5898896f887fa631fab417c1e3579d0424b2b6c06f4278f091af"} Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.824793 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.824999 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87965e39-b879-4e26-9c8b-b78068c52aa0-utilities\") pod \"redhat-operators-xr5l9\" (UID: \"87965e39-b879-4e26-9c8b-b78068c52aa0\") " pod="openshift-marketplace/redhat-operators-xr5l9" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.825090 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hfz2\" (UniqueName: 
\"kubernetes.io/projected/87965e39-b879-4e26-9c8b-b78068c52aa0-kube-api-access-6hfz2\") pod \"redhat-operators-xr5l9\" (UID: \"87965e39-b879-4e26-9c8b-b78068c52aa0\") " pod="openshift-marketplace/redhat-operators-xr5l9" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.825115 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87965e39-b879-4e26-9c8b-b78068c52aa0-catalog-content\") pod \"redhat-operators-xr5l9\" (UID: \"87965e39-b879-4e26-9c8b-b78068c52aa0\") " pod="openshift-marketplace/redhat-operators-xr5l9" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.825630 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87965e39-b879-4e26-9c8b-b78068c52aa0-catalog-content\") pod \"redhat-operators-xr5l9\" (UID: \"87965e39-b879-4e26-9c8b-b78068c52aa0\") " pod="openshift-marketplace/redhat-operators-xr5l9" Mar 13 10:07:58 crc kubenswrapper[4632]: E0313 10:07:58.825703 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:59.325686676 +0000 UTC m=+253.348216809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.826106 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87965e39-b879-4e26-9c8b-b78068c52aa0-utilities\") pod \"redhat-operators-xr5l9\" (UID: \"87965e39-b879-4e26-9c8b-b78068c52aa0\") " pod="openshift-marketplace/redhat-operators-xr5l9" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.878766 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hfz2\" (UniqueName: \"kubernetes.io/projected/87965e39-b879-4e26-9c8b-b78068c52aa0-kube-api-access-6hfz2\") pod \"redhat-operators-xr5l9\" (UID: \"87965e39-b879-4e26-9c8b-b78068c52aa0\") " pod="openshift-marketplace/redhat-operators-xr5l9" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.929470 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:58 crc kubenswrapper[4632]: E0313 10:07:58.930379 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:59.430358931 +0000 UTC m=+253.452889064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.978014 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xr5l9" Mar 13 10:07:58 crc kubenswrapper[4632]: I0313 10:07:58.992412 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6bkt" Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.050436 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z2gc7" Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.051231 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:59 crc kubenswrapper[4632]: E0313 10:07:59.051710 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:59.551688172 +0000 UTC m=+253.574218315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.160958 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:59 crc kubenswrapper[4632]: E0313 10:07:59.161484 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:59.661468871 +0000 UTC m=+253.683999004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.164243 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.211179 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:07:59 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:07:59 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:07:59 crc kubenswrapper[4632]: healthz check failed Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.211246 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.271600 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:59 crc kubenswrapper[4632]: E0313 10:07:59.272792 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:59.772772739 +0000 UTC m=+253.795302872 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.373146 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:59 crc kubenswrapper[4632]: E0313 10:07:59.373532 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:07:59.873510883 +0000 UTC m=+253.896041076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.476515 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:59 crc kubenswrapper[4632]: E0313 10:07:59.476894 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:07:59.976876111 +0000 UTC m=+253.999406244 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.583896 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:59 crc kubenswrapper[4632]: E0313 10:07:59.584563 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:08:00.084550656 +0000 UTC m=+254.107080789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.618463 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8z668"] Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.687974 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:59 crc kubenswrapper[4632]: E0313 10:07:59.688853 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:08:00.188616438 +0000 UTC m=+254.211146571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.793720 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:07:59 crc kubenswrapper[4632]: E0313 10:07:59.794105 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:08:00.294092929 +0000 UTC m=+254.316623062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.892459 4632 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.895484 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:07:59 crc kubenswrapper[4632]: E0313 10:07:59.901605 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:08:00.401580149 +0000 UTC m=+254.424110282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.909691 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" event={"ID":"09ddc697-7ac1-4896-b9e2-1ae6c59c6f47","Type":"ContainerStarted","Data":"8d96e921fd05bbceb9759f1aec6352a154d23a6ac924ac647eb7c8c7bda71f68"} Mar 13 10:07:59 crc kubenswrapper[4632]: I0313 10:07:59.920162 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z668" event={"ID":"9845f384-2720-4d6a-aa73-1e66e30f7c2c","Type":"ContainerStarted","Data":"eb6537c579cc3249bae831f8164a219c024fbc6e74b0df55017ce52d6b143567"} Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.002422 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:08:00 crc kubenswrapper[4632]: E0313 10:08:00.002760 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:08:00.502722673 +0000 UTC m=+254.525252806 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.018094 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p8wjg"] Mar 13 10:08:00 crc kubenswrapper[4632]: W0313 10:08:00.069207 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb11a7dff_bf08_44c3_b4f4_923119c13717.slice/crio-67d236def43f1634b091443716f5df0abcd64ee4e8ef6768dd906ab3397df097 WatchSource:0}: Error finding container 67d236def43f1634b091443716f5df0abcd64ee4e8ef6768dd906ab3397df097: Status 404 returned error can't find the container with id 67d236def43f1634b091443716f5df0abcd64ee4e8ef6768dd906ab3397df097 Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.104584 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:08:00 crc kubenswrapper[4632]: E0313 10:08:00.105218 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:08:00.605195082 +0000 UTC m=+254.627725215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.201969 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:00 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:00 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:00 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.202308 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.207011 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:08:00 crc kubenswrapper[4632]: E0313 10:08:00.207376 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:08:00.707363945 +0000 UTC m=+254.729894078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.318180 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:08:00 crc kubenswrapper[4632]: E0313 10:08:00.318634 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:08:00.818600172 +0000 UTC m=+254.841130305 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.421004 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:08:00 crc kubenswrapper[4632]: E0313 10:08:00.421348 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:08:00.921335128 +0000 UTC m=+254.943865261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.472099 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556608-9kzfk"] Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.473013 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556608-9kzfk" Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.487198 4632 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-13T10:07:59.892863333Z","Handler":null,"Name":""} Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.498810 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.518890 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xr5l9"] Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.525488 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 10:08:00 crc kubenswrapper[4632]: E0313 10:08:00.525822 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 10:08:01.025793343 +0000 UTC m=+255.048323476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.525856 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.525914 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmxng\" (UniqueName: \"kubernetes.io/projected/37ab6711-478f-4cc7-b9a4-c9baa126b1a3-kube-api-access-dmxng\") pod \"auto-csr-approver-29556608-9kzfk\" (UID: \"37ab6711-478f-4cc7-b9a4-c9baa126b1a3\") " pod="openshift-infra/auto-csr-approver-29556608-9kzfk" Mar 13 10:08:00 crc kubenswrapper[4632]: E0313 10:08:00.526263 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 10:08:01.026246774 +0000 UTC m=+255.048776907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxs5z" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.547480 4632 patch_prober.go:28] interesting pod/apiserver-76f77b778f-p9gp2 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 13 10:08:00 crc kubenswrapper[4632]: [+]log ok Mar 13 10:08:00 crc kubenswrapper[4632]: [+]etcd ok Mar 13 10:08:00 crc kubenswrapper[4632]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 13 10:08:00 crc kubenswrapper[4632]: [+]poststarthook/generic-apiserver-start-informers ok Mar 13 10:08:00 crc kubenswrapper[4632]: [+]poststarthook/max-in-flight-filter ok Mar 13 10:08:00 crc kubenswrapper[4632]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 13 10:08:00 crc kubenswrapper[4632]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 13 10:08:00 crc kubenswrapper[4632]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 13 10:08:00 crc kubenswrapper[4632]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 13 10:08:00 crc kubenswrapper[4632]: [+]poststarthook/project.openshift.io-projectcache ok Mar 13 10:08:00 crc kubenswrapper[4632]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 13 10:08:00 crc kubenswrapper[4632]: [+]poststarthook/openshift.io-startinformers ok 
Mar 13 10:08:00 crc kubenswrapper[4632]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 13 10:08:00 crc kubenswrapper[4632]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 13 10:08:00 crc kubenswrapper[4632]: livez check failed
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.547547 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" podUID="37df1143-69fc-4d13-a5d3-790a9d14814a" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.568205 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556608-9kzfk"]
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.586767 4632 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.586814 4632 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.629352 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.629666 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmxng\" (UniqueName: \"kubernetes.io/projected/37ab6711-478f-4cc7-b9a4-c9baa126b1a3-kube-api-access-dmxng\") pod \"auto-csr-approver-29556608-9kzfk\" (UID: \"37ab6711-478f-4cc7-b9a4-c9baa126b1a3\") " pod="openshift-infra/auto-csr-approver-29556608-9kzfk"
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.639106 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.665772 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z2gc7"]
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.700289 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmxng\" (UniqueName: \"kubernetes.io/projected/37ab6711-478f-4cc7-b9a4-c9baa126b1a3-kube-api-access-dmxng\") pod \"auto-csr-approver-29556608-9kzfk\" (UID: \"37ab6711-478f-4cc7-b9a4-c9baa126b1a3\") " pod="openshift-infra/auto-csr-approver-29556608-9kzfk"
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.719289 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-txp2w"]
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.732066 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.821324 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6bkt"]
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.860462 4632 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.860513 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.888310 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556608-9kzfk"
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.889502 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9v5nn"]
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.889742 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" podUID="70f440bb-5dd8-4863-9749-bc5f7c547750" containerName="controller-manager" containerID="cri-o://cacc884e7672aacd612df662055c2d9769da0a235fec8c1ddc593601e1331830" gracePeriod=30
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.932202 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz"]
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.932412 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" podUID="560e6c43-4285-4ca8-98b9-874e9dcb5810" containerName="route-controller-manager" containerID="cri-o://6158a81f875e1232b8d27bb41ad2531364da77cdf3704ac46d4ec2470ad3e550" gracePeriod=30
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.986629 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr5l9" event={"ID":"87965e39-b879-4e26-9c8b-b78068c52aa0","Type":"ContainerStarted","Data":"9d37b680fdc1d8687e48df9dab9cd8ad8fcee9b7cdb15c920f34a9cbf7bad5ef"}
Mar 13 10:08:00 crc kubenswrapper[4632]: I0313 10:08:00.994647 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2gc7" event={"ID":"a110c276-8516-4f9e-a6af-d6837cd0f387","Type":"ContainerStarted","Data":"97491a7f994f5c8dffa29a28fb1914c53f3fb5687971c6cdb3d3b5b636967634"}
Mar 13 10:08:01 crc kubenswrapper[4632]: I0313 10:08:01.017718 4632 generic.go:334] "Generic (PLEG): container finished" podID="b11a7dff-bf08-44c3-b4f4-923119c13717" containerID="31ed0687958629bbe6ae3de064bae07567e401a6f6f2576bf2e48b7390937742" exitCode=0
Mar 13 10:08:01 crc kubenswrapper[4632]: I0313 10:08:01.018058 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8wjg" event={"ID":"b11a7dff-bf08-44c3-b4f4-923119c13717","Type":"ContainerDied","Data":"31ed0687958629bbe6ae3de064bae07567e401a6f6f2576bf2e48b7390937742"}
Mar 13 10:08:01 crc kubenswrapper[4632]: I0313 10:08:01.018110 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8wjg" event={"ID":"b11a7dff-bf08-44c3-b4f4-923119c13717","Type":"ContainerStarted","Data":"67d236def43f1634b091443716f5df0abcd64ee4e8ef6768dd906ab3397df097"}
Mar 13 10:08:01 crc kubenswrapper[4632]: I0313 10:08:01.074460 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-txp2w" event={"ID":"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87","Type":"ContainerStarted","Data":"3442fa414f5a2c2798e2a9a29c903f3acac1f4e2b61c872fefc305318ea1c556"}
Mar 13 10:08:01 crc kubenswrapper[4632]: I0313 10:08:01.111971 4632 generic.go:334] "Generic (PLEG): container finished" podID="9845f384-2720-4d6a-aa73-1e66e30f7c2c" containerID="a609582fe9641518ec575a14e6a93f5bb1f502cb63d1e38602356c26bed99217" exitCode=0
Mar 13 10:08:01 crc kubenswrapper[4632]: I0313 10:08:01.112049 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z668" event={"ID":"9845f384-2720-4d6a-aa73-1e66e30f7c2c","Type":"ContainerDied","Data":"a609582fe9641518ec575a14e6a93f5bb1f502cb63d1e38602356c26bed99217"}
Mar 13 10:08:01 crc kubenswrapper[4632]: I0313 10:08:01.124892 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxs5z\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:08:01 crc kubenswrapper[4632]: I0313 10:08:01.150995 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6bkt" event={"ID":"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e","Type":"ContainerStarted","Data":"3e3a79d99a0e6a35edab86938ccf523a35c4606e460775b549d1924f20dc4204"}
Mar 13 10:08:01 crc kubenswrapper[4632]: I0313 10:08:01.233306 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:08:01 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld
Mar 13 10:08:01 crc kubenswrapper[4632]: [+]process-running ok
Mar 13 10:08:01 crc kubenswrapper[4632]: healthz check failed
Mar 13 10:08:01 crc kubenswrapper[4632]: I0313 10:08:01.233770 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:08:01 crc kubenswrapper[4632]: I0313 10:08:01.443437 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Mar 13 10:08:01 crc kubenswrapper[4632]: I0313 10:08:01.444852 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:08:01 crc kubenswrapper[4632]: I0313 10:08:01.942067 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn"
Mar 13 10:08:01 crc kubenswrapper[4632]: I0313 10:08:01.966837 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" podStartSLOduration=18.966820554999998 podStartE2EDuration="18.966820555s" podCreationTimestamp="2026-03-13 10:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:08:01.250843074 +0000 UTC m=+255.273373207" watchObservedRunningTime="2026-03-13 10:08:01.966820555 +0000 UTC m=+255.989350688"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.077503 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.083234 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556608-9kzfk"]
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.101046 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.117443 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-config\") pod \"70f440bb-5dd8-4863-9749-bc5f7c547750\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") "
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.117518 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-client-ca\") pod \"70f440bb-5dd8-4863-9749-bc5f7c547750\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") "
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.117589 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-proxy-ca-bundles\") pod \"70f440bb-5dd8-4863-9749-bc5f7c547750\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") "
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.117610 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vvbt\" (UniqueName: \"kubernetes.io/projected/70f440bb-5dd8-4863-9749-bc5f7c547750-kube-api-access-6vvbt\") pod \"70f440bb-5dd8-4863-9749-bc5f7c547750\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") "
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.117642 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f440bb-5dd8-4863-9749-bc5f7c547750-serving-cert\") pod \"70f440bb-5dd8-4863-9749-bc5f7c547750\" (UID: \"70f440bb-5dd8-4863-9749-bc5f7c547750\") "
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.119015 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "70f440bb-5dd8-4863-9749-bc5f7c547750" (UID: "70f440bb-5dd8-4863-9749-bc5f7c547750"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.121647 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-client-ca" (OuterVolumeSpecName: "client-ca") pod "70f440bb-5dd8-4863-9749-bc5f7c547750" (UID: "70f440bb-5dd8-4863-9749-bc5f7c547750"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.126181 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-config" (OuterVolumeSpecName: "config") pod "70f440bb-5dd8-4863-9749-bc5f7c547750" (UID: "70f440bb-5dd8-4863-9749-bc5f7c547750"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.137357 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f440bb-5dd8-4863-9749-bc5f7c547750-kube-api-access-6vvbt" (OuterVolumeSpecName: "kube-api-access-6vvbt") pod "70f440bb-5dd8-4863-9749-bc5f7c547750" (UID: "70f440bb-5dd8-4863-9749-bc5f7c547750"). InnerVolumeSpecName "kube-api-access-6vvbt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.146109 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f440bb-5dd8-4863-9749-bc5f7c547750-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "70f440bb-5dd8-4863-9749-bc5f7c547750" (UID: "70f440bb-5dd8-4863-9749-bc5f7c547750"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.189447 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556608-9kzfk" event={"ID":"37ab6711-478f-4cc7-b9a4-c9baa126b1a3","Type":"ContainerStarted","Data":"ff362806bee1867b720f220a4cde4dbe8551207f73438d3af60407d151505f16"}
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.198087 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:08:02 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld
Mar 13 10:08:02 crc kubenswrapper[4632]: [+]process-running ok
Mar 13 10:08:02 crc kubenswrapper[4632]: healthz check failed
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.198157 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.222416 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/560e6c43-4285-4ca8-98b9-874e9dcb5810-serving-cert\") pod \"560e6c43-4285-4ca8-98b9-874e9dcb5810\" (UID: \"560e6c43-4285-4ca8-98b9-874e9dcb5810\") "
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.222499 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/560e6c43-4285-4ca8-98b9-874e9dcb5810-client-ca\") pod \"560e6c43-4285-4ca8-98b9-874e9dcb5810\" (UID: \"560e6c43-4285-4ca8-98b9-874e9dcb5810\") "
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.222523 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/560e6c43-4285-4ca8-98b9-874e9dcb5810-config\") pod \"560e6c43-4285-4ca8-98b9-874e9dcb5810\" (UID: \"560e6c43-4285-4ca8-98b9-874e9dcb5810\") "
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.222637 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdwcs\" (UniqueName: \"kubernetes.io/projected/560e6c43-4285-4ca8-98b9-874e9dcb5810-kube-api-access-sdwcs\") pod \"560e6c43-4285-4ca8-98b9-874e9dcb5810\" (UID: \"560e6c43-4285-4ca8-98b9-874e9dcb5810\") "
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.222983 4632 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.222997 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vvbt\" (UniqueName: \"kubernetes.io/projected/70f440bb-5dd8-4863-9749-bc5f7c547750-kube-api-access-6vvbt\") on node \"crc\" DevicePath \"\""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.223021 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f440bb-5dd8-4863-9749-bc5f7c547750-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.223031 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-config\") on node \"crc\" DevicePath \"\""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.223042 4632 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f440bb-5dd8-4863-9749-bc5f7c547750-client-ca\") on node \"crc\" DevicePath \"\""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.224184 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/560e6c43-4285-4ca8-98b9-874e9dcb5810-client-ca" (OuterVolumeSpecName: "client-ca") pod "560e6c43-4285-4ca8-98b9-874e9dcb5810" (UID: "560e6c43-4285-4ca8-98b9-874e9dcb5810"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.224760 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/560e6c43-4285-4ca8-98b9-874e9dcb5810-config" (OuterVolumeSpecName: "config") pod "560e6c43-4285-4ca8-98b9-874e9dcb5810" (UID: "560e6c43-4285-4ca8-98b9-874e9dcb5810"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.233436 4632 generic.go:334] "Generic (PLEG): container finished" podID="70f440bb-5dd8-4863-9749-bc5f7c547750" containerID="cacc884e7672aacd612df662055c2d9769da0a235fec8c1ddc593601e1331830" exitCode=0
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.233752 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" event={"ID":"70f440bb-5dd8-4863-9749-bc5f7c547750","Type":"ContainerDied","Data":"cacc884e7672aacd612df662055c2d9769da0a235fec8c1ddc593601e1331830"}
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.233830 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn" event={"ID":"70f440bb-5dd8-4863-9749-bc5f7c547750","Type":"ContainerDied","Data":"365524316a4e3e846e005a856282706fac826be9337ec760f74d5dd19061bccd"}
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.233897 4632 scope.go:117] "RemoveContainer" containerID="cacc884e7672aacd612df662055c2d9769da0a235fec8c1ddc593601e1331830"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.235456 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9v5nn"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.239264 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/560e6c43-4285-4ca8-98b9-874e9dcb5810-kube-api-access-sdwcs" (OuterVolumeSpecName: "kube-api-access-sdwcs") pod "560e6c43-4285-4ca8-98b9-874e9dcb5810" (UID: "560e6c43-4285-4ca8-98b9-874e9dcb5810"). InnerVolumeSpecName "kube-api-access-sdwcs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.263322 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/560e6c43-4285-4ca8-98b9-874e9dcb5810-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "560e6c43-4285-4ca8-98b9-874e9dcb5810" (UID: "560e6c43-4285-4ca8-98b9-874e9dcb5810"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.280662 4632 generic.go:334] "Generic (PLEG): container finished" podID="528d3aa9-10bf-4029-a4d2-85768264fde8" containerID="da165dd4ae62fa2ea1c777c8125fcd4bfe4bd102f508da056f1a058689bba35e" exitCode=0
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.280749 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg" event={"ID":"528d3aa9-10bf-4029-a4d2-85768264fde8","Type":"ContainerDied","Data":"da165dd4ae62fa2ea1c777c8125fcd4bfe4bd102f508da056f1a058689bba35e"}
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.309818 4632 generic.go:334] "Generic (PLEG): container finished" podID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" containerID="01ae56b596391f3b7877c67539058596dbfd086754ed8db1f4f40f76d82a4c4c" exitCode=0
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.309929 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6bkt" event={"ID":"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e","Type":"ContainerDied","Data":"01ae56b596391f3b7877c67539058596dbfd086754ed8db1f4f40f76d82a4c4c"}
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.326019 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdwcs\" (UniqueName: \"kubernetes.io/projected/560e6c43-4285-4ca8-98b9-874e9dcb5810-kube-api-access-sdwcs\") on node \"crc\" DevicePath \"\""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.326074 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/560e6c43-4285-4ca8-98b9-874e9dcb5810-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.326797 4632 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/560e6c43-4285-4ca8-98b9-874e9dcb5810-client-ca\") on node \"crc\" DevicePath \"\""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.326832 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/560e6c43-4285-4ca8-98b9-874e9dcb5810-config\") on node \"crc\" DevicePath \"\""
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.334717 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9v5nn"]
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.348402 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9v5nn"]
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.367101 4632 generic.go:334] "Generic (PLEG): container finished" podID="87965e39-b879-4e26-9c8b-b78068c52aa0" containerID="bafd48184b1b00528329793cfe1af87f0aa9502582cddad85d784407e60c249d" exitCode=0
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.367167 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr5l9" event={"ID":"87965e39-b879-4e26-9c8b-b78068c52aa0","Type":"ContainerDied","Data":"bafd48184b1b00528329793cfe1af87f0aa9502582cddad85d784407e60c249d"}
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.374177 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2gc7" event={"ID":"a110c276-8516-4f9e-a6af-d6837cd0f387","Type":"ContainerDied","Data":"0d073c1adaa82aa87cab8618a50587cfed8b79fe657e3f2878a87c7599c612fb"}
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.374662 4632 generic.go:334] "Generic (PLEG): container finished" podID="a110c276-8516-4f9e-a6af-d6837cd0f387" containerID="0d073c1adaa82aa87cab8618a50587cfed8b79fe657e3f2878a87c7599c612fb" exitCode=0
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.384890 4632 generic.go:334] "Generic (PLEG): container finished" podID="f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" containerID="d1da7a7847a6ff5346add9e3ed943cdc6232146978e6161d764011992ac73c84" exitCode=0
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.385000 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-txp2w" event={"ID":"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87","Type":"ContainerDied","Data":"d1da7a7847a6ff5346add9e3ed943cdc6232146978e6161d764011992ac73c84"}
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.399512 4632 generic.go:334] "Generic (PLEG): container finished" podID="560e6c43-4285-4ca8-98b9-874e9dcb5810" containerID="6158a81f875e1232b8d27bb41ad2531364da77cdf3704ac46d4ec2470ad3e550" exitCode=0
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.400428 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.403915 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" event={"ID":"560e6c43-4285-4ca8-98b9-874e9dcb5810","Type":"ContainerDied","Data":"6158a81f875e1232b8d27bb41ad2531364da77cdf3704ac46d4ec2470ad3e550"}
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.403994 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz" event={"ID":"560e6c43-4285-4ca8-98b9-874e9dcb5810","Type":"ContainerDied","Data":"e9aab0e9cd1796940dcc2818af221f5b388f490c5b2161fb3217fdbc24d92e66"}
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.438510 4632 scope.go:117] "RemoveContainer" containerID="cacc884e7672aacd612df662055c2d9769da0a235fec8c1ddc593601e1331830"
Mar 13 10:08:02 crc kubenswrapper[4632]: E0313 10:08:02.441216 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cacc884e7672aacd612df662055c2d9769da0a235fec8c1ddc593601e1331830\": container with ID starting with cacc884e7672aacd612df662055c2d9769da0a235fec8c1ddc593601e1331830 not found: ID does not exist" containerID="cacc884e7672aacd612df662055c2d9769da0a235fec8c1ddc593601e1331830"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.441269 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cacc884e7672aacd612df662055c2d9769da0a235fec8c1ddc593601e1331830"} err="failed to get container status \"cacc884e7672aacd612df662055c2d9769da0a235fec8c1ddc593601e1331830\": rpc error: code = NotFound desc = could not find container \"cacc884e7672aacd612df662055c2d9769da0a235fec8c1ddc593601e1331830\": container with ID starting with cacc884e7672aacd612df662055c2d9769da0a235fec8c1ddc593601e1331830 not found: ID does not exist"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.441298 4632 scope.go:117] "RemoveContainer" containerID="6158a81f875e1232b8d27bb41ad2531364da77cdf3704ac46d4ec2470ad3e550"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.494343 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fxs5z"]
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.522759 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz"]
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.534712 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xthqz"]
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.554101 4632 scope.go:117] "RemoveContainer" containerID="6158a81f875e1232b8d27bb41ad2531364da77cdf3704ac46d4ec2470ad3e550"
Mar 13 10:08:02 crc kubenswrapper[4632]: E0313 10:08:02.554662 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6158a81f875e1232b8d27bb41ad2531364da77cdf3704ac46d4ec2470ad3e550\": container with ID starting with 6158a81f875e1232b8d27bb41ad2531364da77cdf3704ac46d4ec2470ad3e550 not found: ID does not exist" containerID="6158a81f875e1232b8d27bb41ad2531364da77cdf3704ac46d4ec2470ad3e550"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.554704 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6158a81f875e1232b8d27bb41ad2531364da77cdf3704ac46d4ec2470ad3e550"} err="failed to get container status \"6158a81f875e1232b8d27bb41ad2531364da77cdf3704ac46d4ec2470ad3e550\": rpc error: code = NotFound desc = could not find container \"6158a81f875e1232b8d27bb41ad2531364da77cdf3704ac46d4ec2470ad3e550\": container with ID starting with 6158a81f875e1232b8d27bb41ad2531364da77cdf3704ac46d4ec2470ad3e550 not found: ID does not exist"
Mar 13 10:08:02 crc kubenswrapper[4632]: W0313 10:08:02.592099 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf56fc09a_e2b7_46db_b938_f276df3f033e.slice/crio-6db13fe4cd83b1210971879bf1313cee58732376958e857687de7da1568c6519 WatchSource:0}: Error finding container 6db13fe4cd83b1210971879bf1313cee58732376958e857687de7da1568c6519: Status 404 returned error can't find the container with id 6db13fe4cd83b1210971879bf1313cee58732376958e857687de7da1568c6519
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.828215 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-658cc96bdc-92bpr"]
Mar 13 10:08:02 crc kubenswrapper[4632]: E0313 10:08:02.828753 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70f440bb-5dd8-4863-9749-bc5f7c547750" containerName="controller-manager"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.828774 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="70f440bb-5dd8-4863-9749-bc5f7c547750" containerName="controller-manager"
Mar 13 10:08:02 crc kubenswrapper[4632]: E0313 10:08:02.828787 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="560e6c43-4285-4ca8-98b9-874e9dcb5810" containerName="route-controller-manager"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.828794 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="560e6c43-4285-4ca8-98b9-874e9dcb5810" containerName="route-controller-manager"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.828921 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="70f440bb-5dd8-4863-9749-bc5f7c547750" containerName="controller-manager"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.829015 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="560e6c43-4285-4ca8-98b9-874e9dcb5810" containerName="route-controller-manager"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.829416 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.835378 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.835588 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.839069 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.839292 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.839452 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.839564 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.863368 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp"]
Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.865702 4632 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.869101 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.882414 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.884848 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.886623 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.888384 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.889068 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.893613 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.926343 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-658cc96bdc-92bpr"] Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.937596 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwf9m\" (UniqueName: \"kubernetes.io/projected/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-kube-api-access-hwf9m\") pod \"controller-manager-658cc96bdc-92bpr\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.937679 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0f45c9a-e32c-420e-9106-fcb72dd59350-serving-cert\") pod \"route-controller-manager-86d66b8bfb-9wzpp\" (UID: \"f0f45c9a-e32c-420e-9106-fcb72dd59350\") " pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.937793 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-client-ca\") pod \"controller-manager-658cc96bdc-92bpr\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.937821 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f45c9a-e32c-420e-9106-fcb72dd59350-config\") pod \"route-controller-manager-86d66b8bfb-9wzpp\" (UID: \"f0f45c9a-e32c-420e-9106-fcb72dd59350\") " pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.937995 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-proxy-ca-bundles\") pod \"controller-manager-658cc96bdc-92bpr\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.938093 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0f45c9a-e32c-420e-9106-fcb72dd59350-client-ca\") pod \"route-controller-manager-86d66b8bfb-9wzpp\" (UID: \"f0f45c9a-e32c-420e-9106-fcb72dd59350\") " pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.938211 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9pxp\" (UniqueName: \"kubernetes.io/projected/f0f45c9a-e32c-420e-9106-fcb72dd59350-kube-api-access-l9pxp\") pod \"route-controller-manager-86d66b8bfb-9wzpp\" (UID: \"f0f45c9a-e32c-420e-9106-fcb72dd59350\") " pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.938258 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-config\") pod \"controller-manager-658cc96bdc-92bpr\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.938384 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-serving-cert\") pod \"controller-manager-658cc96bdc-92bpr\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.939263 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp"] Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.946265 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.947708 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.948846 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.952380 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 13 10:08:02 crc kubenswrapper[4632]: I0313 10:08:02.952737 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.041716 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9pxp\" (UniqueName: \"kubernetes.io/projected/f0f45c9a-e32c-420e-9106-fcb72dd59350-kube-api-access-l9pxp\") pod \"route-controller-manager-86d66b8bfb-9wzpp\" (UID: \"f0f45c9a-e32c-420e-9106-fcb72dd59350\") " pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.047644 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-config\") pod \"controller-manager-658cc96bdc-92bpr\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.047755 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-serving-cert\") pod \"controller-manager-658cc96bdc-92bpr\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.047802 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwf9m\" (UniqueName: \"kubernetes.io/projected/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-kube-api-access-hwf9m\") pod \"controller-manager-658cc96bdc-92bpr\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.047856 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0f45c9a-e32c-420e-9106-fcb72dd59350-serving-cert\") pod \"route-controller-manager-86d66b8bfb-9wzpp\" (UID: \"f0f45c9a-e32c-420e-9106-fcb72dd59350\") " pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.047978 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a4d2988-a460-407b-902a-aeb8eda619a1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5a4d2988-a460-407b-902a-aeb8eda619a1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.048022 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-client-ca\") pod \"controller-manager-658cc96bdc-92bpr\" (UID: 
\"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.048058 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f45c9a-e32c-420e-9106-fcb72dd59350-config\") pod \"route-controller-manager-86d66b8bfb-9wzpp\" (UID: \"f0f45c9a-e32c-420e-9106-fcb72dd59350\") " pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.048083 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-proxy-ca-bundles\") pod \"controller-manager-658cc96bdc-92bpr\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.048111 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0f45c9a-e32c-420e-9106-fcb72dd59350-client-ca\") pod \"route-controller-manager-86d66b8bfb-9wzpp\" (UID: \"f0f45c9a-e32c-420e-9106-fcb72dd59350\") " pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.048151 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a4d2988-a460-407b-902a-aeb8eda619a1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5a4d2988-a460-407b-902a-aeb8eda619a1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.049117 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-client-ca\") pod \"controller-manager-658cc96bdc-92bpr\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.049308 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-config\") pod \"controller-manager-658cc96bdc-92bpr\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.050448 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-proxy-ca-bundles\") pod \"controller-manager-658cc96bdc-92bpr\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.051552 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f45c9a-e32c-420e-9106-fcb72dd59350-config\") pod \"route-controller-manager-86d66b8bfb-9wzpp\" (UID: \"f0f45c9a-e32c-420e-9106-fcb72dd59350\") " pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.052235 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0f45c9a-e32c-420e-9106-fcb72dd59350-client-ca\") pod \"route-controller-manager-86d66b8bfb-9wzpp\" (UID: \"f0f45c9a-e32c-420e-9106-fcb72dd59350\") " pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.069126 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0f45c9a-e32c-420e-9106-fcb72dd59350-serving-cert\") pod \"route-controller-manager-86d66b8bfb-9wzpp\" (UID: \"f0f45c9a-e32c-420e-9106-fcb72dd59350\") " pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.069677 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-serving-cert\") pod \"controller-manager-658cc96bdc-92bpr\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.080617 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9pxp\" (UniqueName: \"kubernetes.io/projected/f0f45c9a-e32c-420e-9106-fcb72dd59350-kube-api-access-l9pxp\") pod \"route-controller-manager-86d66b8bfb-9wzpp\" (UID: \"f0f45c9a-e32c-420e-9106-fcb72dd59350\") " pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.094743 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwf9m\" (UniqueName: \"kubernetes.io/projected/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-kube-api-access-hwf9m\") pod \"controller-manager-658cc96bdc-92bpr\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.156845 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a4d2988-a460-407b-902a-aeb8eda619a1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5a4d2988-a460-407b-902a-aeb8eda619a1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.156954 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a4d2988-a460-407b-902a-aeb8eda619a1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5a4d2988-a460-407b-902a-aeb8eda619a1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.157094 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a4d2988-a460-407b-902a-aeb8eda619a1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5a4d2988-a460-407b-902a-aeb8eda619a1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.176877 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a4d2988-a460-407b-902a-aeb8eda619a1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5a4d2988-a460-407b-902a-aeb8eda619a1\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.192661 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.195132 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:03 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:03 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:03 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.195204 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.227059 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.275046 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.414311 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.415156 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.415267 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.436531 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.436752 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.461129 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa472cd7-9575-473f-b6b6-709b644a5ec4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"aa472cd7-9575-473f-b6b6-709b644a5ec4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.461218 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa472cd7-9575-473f-b6b6-709b644a5ec4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"aa472cd7-9575-473f-b6b6-709b644a5ec4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.470181 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" event={"ID":"f56fc09a-e2b7-46db-b938-f276df3f033e","Type":"ContainerStarted","Data":"1081e88c7001e64d6f95133ef3938fcdaf6163c9ecf6555e86dc52149387161f"} Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.470230 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" event={"ID":"f56fc09a-e2b7-46db-b938-f276df3f033e","Type":"ContainerStarted","Data":"6db13fe4cd83b1210971879bf1313cee58732376958e857687de7da1568c6519"} Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.471067 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.500713 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" podStartSLOduration=203.500693275 podStartE2EDuration="3m23.500693275s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:08:03.496886344 +0000 UTC m=+257.519416477" watchObservedRunningTime="2026-03-13 10:08:03.500693275 +0000 UTC m=+257.523223408" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.551835 4632 ???:1] "http: TLS handshake error from 192.168.126.11:53520: no serving certificate available for the kubelet" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.563123 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa472cd7-9575-473f-b6b6-709b644a5ec4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"aa472cd7-9575-473f-b6b6-709b644a5ec4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.563289 4632 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa472cd7-9575-473f-b6b6-709b644a5ec4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"aa472cd7-9575-473f-b6b6-709b644a5ec4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.564453 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa472cd7-9575-473f-b6b6-709b644a5ec4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"aa472cd7-9575-473f-b6b6-709b644a5ec4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.586197 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa472cd7-9575-473f-b6b6-709b644a5ec4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"aa472cd7-9575-473f-b6b6-709b644a5ec4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 10:08:03 crc kubenswrapper[4632]: I0313 10:08:03.767243 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.080621 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="560e6c43-4285-4ca8-98b9-874e9dcb5810" path="/var/lib/kubelet/pods/560e6c43-4285-4ca8-98b9-874e9dcb5810/volumes" Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.082336 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70f440bb-5dd8-4863-9749-bc5f7c547750" path="/var/lib/kubelet/pods/70f440bb-5dd8-4863-9749-bc5f7c547750/volumes" Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.228416 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg" Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.247482 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:04 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:04 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:04 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.247564 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.320086 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/528d3aa9-10bf-4029-a4d2-85768264fde8-secret-volume\") pod \"528d3aa9-10bf-4029-a4d2-85768264fde8\" (UID: \"528d3aa9-10bf-4029-a4d2-85768264fde8\") " Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.320171 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/528d3aa9-10bf-4029-a4d2-85768264fde8-config-volume\") pod \"528d3aa9-10bf-4029-a4d2-85768264fde8\" (UID: \"528d3aa9-10bf-4029-a4d2-85768264fde8\") " Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.320218 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm8lp\" (UniqueName: \"kubernetes.io/projected/528d3aa9-10bf-4029-a4d2-85768264fde8-kube-api-access-vm8lp\") pod \"528d3aa9-10bf-4029-a4d2-85768264fde8\" (UID: \"528d3aa9-10bf-4029-a4d2-85768264fde8\") " Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.324581 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528d3aa9-10bf-4029-a4d2-85768264fde8-config-volume" (OuterVolumeSpecName: "config-volume") pod "528d3aa9-10bf-4029-a4d2-85768264fde8" (UID: "528d3aa9-10bf-4029-a4d2-85768264fde8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.339092 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/528d3aa9-10bf-4029-a4d2-85768264fde8-kube-api-access-vm8lp" (OuterVolumeSpecName: "kube-api-access-vm8lp") pod "528d3aa9-10bf-4029-a4d2-85768264fde8" (UID: "528d3aa9-10bf-4029-a4d2-85768264fde8"). InnerVolumeSpecName "kube-api-access-vm8lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.368074 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/528d3aa9-10bf-4029-a4d2-85768264fde8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "528d3aa9-10bf-4029-a4d2-85768264fde8" (UID: "528d3aa9-10bf-4029-a4d2-85768264fde8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.422664 4632 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/528d3aa9-10bf-4029-a4d2-85768264fde8-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.422775 4632 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/528d3aa9-10bf-4029-a4d2-85768264fde8-config-volume\") on node \"crc\" DevicePath \"\"" Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.422791 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm8lp\" (UniqueName: \"kubernetes.io/projected/528d3aa9-10bf-4029-a4d2-85768264fde8-kube-api-access-vm8lp\") on node \"crc\" DevicePath \"\"" Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.510318 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-658cc96bdc-92bpr"] Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.536218 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp"] Mar 13 10:08:04 crc kubenswrapper[4632]: W0313 10:08:04.581532 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8ffa9c9_d11d_46b5_ac51_6d38a8639d98.slice/crio-94bf0543076d266034298c32f3e17bb1e05e21aec66de3f477e64011186c779f WatchSource:0}: Error finding container 94bf0543076d266034298c32f3e17bb1e05e21aec66de3f477e64011186c779f: Status 404 returned error can't find the container with id 94bf0543076d266034298c32f3e17bb1e05e21aec66de3f477e64011186c779f Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.587867 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.638123 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg" event={"ID":"528d3aa9-10bf-4029-a4d2-85768264fde8","Type":"ContainerDied","Data":"1137745f79e5dd4b86f11690ff5ed0914b045872452dc8054e30e019f43d068c"} Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.638184 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1137745f79e5dd4b86f11690ff5ed0914b045872452dc8054e30e019f43d068c" Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.638558 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg" Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.693724 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-g2wxc" Mar 13 10:08:04 crc kubenswrapper[4632]: W0313 10:08:04.785766 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0f45c9a_e32c_420e_9106_fcb72dd59350.slice/crio-070e0df69129dffbb7be6eacfbcf901c3d12a23e3166a3f87eca493eaf3d27d9 WatchSource:0}: Error finding container 070e0df69129dffbb7be6eacfbcf901c3d12a23e3166a3f87eca493eaf3d27d9: Status 404 returned error can't find the container with id 070e0df69129dffbb7be6eacfbcf901c3d12a23e3166a3f87eca493eaf3d27d9 Mar 13 10:08:04 crc kubenswrapper[4632]: I0313 10:08:04.991210 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.196499 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:05 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:05 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:05 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.196784 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.522574 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.536073 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-p9gp2" Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.632674 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.633273 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.633748 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.633770 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 
10.217.0.12:8080: connect: connection refused" Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.696513 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"aa472cd7-9575-473f-b6b6-709b644a5ec4","Type":"ContainerStarted","Data":"1e9a3663129374df63b373bcd43c56b70f7622faa20517540c6584e27d165001"} Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.699224 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" event={"ID":"f0f45c9a-e32c-420e-9106-fcb72dd59350","Type":"ContainerStarted","Data":"687f3a3904ba4bdede4a24f019fead48648120a9d8e838727216b0ec43fcb3a2"} Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.699277 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" event={"ID":"f0f45c9a-e32c-420e-9106-fcb72dd59350","Type":"ContainerStarted","Data":"070e0df69129dffbb7be6eacfbcf901c3d12a23e3166a3f87eca493eaf3d27d9"} Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.703616 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" event={"ID":"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98","Type":"ContainerStarted","Data":"fe1e770193ae7e14a37be92defae5c64d043b458e01244272a22574e7b2e1f74"} Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.703669 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" event={"ID":"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98","Type":"ContainerStarted","Data":"94bf0543076d266034298c32f3e17bb1e05e21aec66de3f477e64011186c779f"} Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.716300 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5a4d2988-a460-407b-902a-aeb8eda619a1","Type":"ContainerStarted","Data":"7fd2f4ed963308f80503bea68090613f7b4770625620e790958344e5d09eb8f5"} Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.807253 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" podStartSLOduration=4.807228306 podStartE2EDuration="4.807228306s" podCreationTimestamp="2026-03-13 10:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:08:05.804626447 +0000 UTC m=+259.827156580" watchObservedRunningTime="2026-03-13 10:08:05.807228306 +0000 UTC m=+259.829758439" Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.808515 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" podStartSLOduration=4.80850823 podStartE2EDuration="4.80850823s" podCreationTimestamp="2026-03-13 10:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:08:05.758512626 +0000 UTC m=+259.781042759" watchObservedRunningTime="2026-03-13 10:08:05.80850823 +0000 UTC m=+259.831038373" Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.983404 4632 patch_prober.go:28] interesting pod/console-f9d7485db-zn7mn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 
10.217.0.22:8443: connect: connection refused" start-of-body= Mar 13 10:08:05 crc kubenswrapper[4632]: I0313 10:08:05.983475 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zn7mn" podUID="f5a50074-5531-442f-a0e9-0578f15634c1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Mar 13 10:08:06 crc kubenswrapper[4632]: I0313 10:08:06.206156 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:06 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:06 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:06 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:06 crc kubenswrapper[4632]: I0313 10:08:06.206247 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:06 crc kubenswrapper[4632]: I0313 10:08:06.568775 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" Mar 13 10:08:06 crc kubenswrapper[4632]: I0313 10:08:06.808442 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5a4d2988-a460-407b-902a-aeb8eda619a1","Type":"ContainerStarted","Data":"8a2ad0a9e117d30bdffbdfe34a1c99a012b75db7ee1ca75eb636ee55b6520e15"} Mar 13 10:08:06 crc kubenswrapper[4632]: I0313 10:08:06.812608 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:06 crc kubenswrapper[4632]: I0313 10:08:06.813628 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:06 crc kubenswrapper[4632]: I0313 10:08:06.837206 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:06 crc kubenswrapper[4632]: I0313 10:08:06.839046 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:06 crc kubenswrapper[4632]: I0313 10:08:06.879816 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.879797186 podStartE2EDuration="4.879797186s" podCreationTimestamp="2026-03-13 10:08:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:08:06.845475588 +0000 UTC m=+260.868005741" watchObservedRunningTime="2026-03-13 10:08:06.879797186 +0000 UTC m=+260.902327329" Mar 13 10:08:07 crc kubenswrapper[4632]: I0313 10:08:07.196069 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:07 crc 
kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:07 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:07 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:07 crc kubenswrapper[4632]: I0313 10:08:07.196135 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:07 crc kubenswrapper[4632]: I0313 10:08:07.937917 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"aa472cd7-9575-473f-b6b6-709b644a5ec4","Type":"ContainerStarted","Data":"bfd967ad660cb6670b66e0ac690d69022ebe216196a49d69939801b1e253860e"} Mar 13 10:08:07 crc kubenswrapper[4632]: I0313 10:08:07.945377 4632 generic.go:334] "Generic (PLEG): container finished" podID="5a4d2988-a460-407b-902a-aeb8eda619a1" containerID="8a2ad0a9e117d30bdffbdfe34a1c99a012b75db7ee1ca75eb636ee55b6520e15" exitCode=0 Mar 13 10:08:07 crc kubenswrapper[4632]: I0313 10:08:07.945583 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5a4d2988-a460-407b-902a-aeb8eda619a1","Type":"ContainerDied","Data":"8a2ad0a9e117d30bdffbdfe34a1c99a012b75db7ee1ca75eb636ee55b6520e15"} Mar 13 10:08:07 crc kubenswrapper[4632]: I0313 10:08:07.992034 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.991932752 podStartE2EDuration="4.991932752s" podCreationTimestamp="2026-03-13 10:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:08:07.963368456 +0000 UTC m=+261.985898599" watchObservedRunningTime="2026-03-13 10:08:07.991932752 +0000 UTC m=+262.014462885" Mar 13 10:08:08 crc kubenswrapper[4632]: I0313 10:08:08.196031 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:08 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:08 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:08 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:08 crc kubenswrapper[4632]: I0313 10:08:08.196142 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:09 crc kubenswrapper[4632]: I0313 10:08:09.002563 4632 generic.go:334] "Generic (PLEG): container finished" podID="aa472cd7-9575-473f-b6b6-709b644a5ec4" containerID="bfd967ad660cb6670b66e0ac690d69022ebe216196a49d69939801b1e253860e" exitCode=0 Mar 13 10:08:09 crc kubenswrapper[4632]: I0313 10:08:09.002673 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"aa472cd7-9575-473f-b6b6-709b644a5ec4","Type":"ContainerDied","Data":"bfd967ad660cb6670b66e0ac690d69022ebe216196a49d69939801b1e253860e"} Mar 13 10:08:09 crc kubenswrapper[4632]: I0313 10:08:09.194589 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:09 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:09 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:09 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:09 crc kubenswrapper[4632]: I0313 10:08:09.194719 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.108686 4632 ???:1] "http: TLS handshake error from 192.168.126.11:55198: no serving certificate available for the kubelet" Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.118735 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5a4d2988-a460-407b-902a-aeb8eda619a1","Type":"ContainerDied","Data":"7fd2f4ed963308f80503bea68090613f7b4770625620e790958344e5d09eb8f5"} Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.118775 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fd2f4ed963308f80503bea68090613f7b4770625620e790958344e5d09eb8f5" Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.135153 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.174432 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a4d2988-a460-407b-902a-aeb8eda619a1-kube-api-access\") pod \"5a4d2988-a460-407b-902a-aeb8eda619a1\" (UID: \"5a4d2988-a460-407b-902a-aeb8eda619a1\") " Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.174494 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a4d2988-a460-407b-902a-aeb8eda619a1-kubelet-dir\") pod \"5a4d2988-a460-407b-902a-aeb8eda619a1\" (UID: \"5a4d2988-a460-407b-902a-aeb8eda619a1\") " Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.174718 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a4d2988-a460-407b-902a-aeb8eda619a1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5a4d2988-a460-407b-902a-aeb8eda619a1" (UID: "5a4d2988-a460-407b-902a-aeb8eda619a1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.196214 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:10 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:10 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:10 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.196596 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.204680 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4d2988-a460-407b-902a-aeb8eda619a1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5a4d2988-a460-407b-902a-aeb8eda619a1" (UID: "5a4d2988-a460-407b-902a-aeb8eda619a1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.275689 4632 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a4d2988-a460-407b-902a-aeb8eda619a1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.275730 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a4d2988-a460-407b-902a-aeb8eda619a1-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.461127 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.461222 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.626464 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.675486 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.806085 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa472cd7-9575-473f-b6b6-709b644a5ec4-kube-api-access\") pod \"aa472cd7-9575-473f-b6b6-709b644a5ec4\" (UID: \"aa472cd7-9575-473f-b6b6-709b644a5ec4\") " Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.806133 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa472cd7-9575-473f-b6b6-709b644a5ec4-kubelet-dir\") pod \"aa472cd7-9575-473f-b6b6-709b644a5ec4\" (UID: \"aa472cd7-9575-473f-b6b6-709b644a5ec4\") " Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.813702 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa472cd7-9575-473f-b6b6-709b644a5ec4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aa472cd7-9575-473f-b6b6-709b644a5ec4" (UID: "aa472cd7-9575-473f-b6b6-709b644a5ec4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.813853 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa472cd7-9575-473f-b6b6-709b644a5ec4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aa472cd7-9575-473f-b6b6-709b644a5ec4" (UID: "aa472cd7-9575-473f-b6b6-709b644a5ec4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.907508 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa472cd7-9575-473f-b6b6-709b644a5ec4-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 13 10:08:10 crc kubenswrapper[4632]: I0313 10:08:10.907554 4632 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa472cd7-9575-473f-b6b6-709b644a5ec4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 13 10:08:11 crc kubenswrapper[4632]: I0313 10:08:11.164154 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 10:08:11 crc kubenswrapper[4632]: I0313 10:08:11.164257 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"aa472cd7-9575-473f-b6b6-709b644a5ec4","Type":"ContainerDied","Data":"1e9a3663129374df63b373bcd43c56b70f7622faa20517540c6584e27d165001"} Mar 13 10:08:11 crc kubenswrapper[4632]: I0313 10:08:11.164295 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e9a3663129374df63b373bcd43c56b70f7622faa20517540c6584e27d165001" Mar 13 10:08:11 crc kubenswrapper[4632]: I0313 10:08:11.165158 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 10:08:11 crc kubenswrapper[4632]: I0313 10:08:11.197594 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:11 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:11 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:11 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:11 crc kubenswrapper[4632]: I0313 10:08:11.197710 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:12 crc kubenswrapper[4632]: I0313 10:08:12.195519 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:12 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:12 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:12 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:12 crc kubenswrapper[4632]: I0313 10:08:12.195621 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:13 crc kubenswrapper[4632]: I0313 10:08:13.200578 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:13 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:13 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:13 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:13 crc kubenswrapper[4632]: I0313 10:08:13.200670 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:13 crc kubenswrapper[4632]: I0313 10:08:13.823292 4632 ???:1] "http: TLS handshake error from 192.168.126.11:55212: no serving certificate available for the kubelet" Mar 13 10:08:14 crc kubenswrapper[4632]: I0313 10:08:14.194985 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:14 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:14 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:14 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:14 crc kubenswrapper[4632]: I0313 10:08:14.195111 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Mar 13 10:08:15 crc kubenswrapper[4632]: I0313 10:08:15.219849 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:15 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:15 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:15 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:15 crc kubenswrapper[4632]: I0313 10:08:15.220049 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:15 crc kubenswrapper[4632]: I0313 10:08:15.633290 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 13 10:08:15 crc kubenswrapper[4632]: I0313 10:08:15.633431 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 13 10:08:15 crc kubenswrapper[4632]: I0313 10:08:15.636338 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 13 10:08:15 crc kubenswrapper[4632]: I0313 10:08:15.636402 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 13 10:08:15 crc kubenswrapper[4632]: I0313 10:08:15.636514 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-w2hhj" Mar 13 10:08:15 crc kubenswrapper[4632]: I0313 10:08:15.637484 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 13 10:08:15 crc kubenswrapper[4632]: I0313 10:08:15.637605 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 13 10:08:15 crc kubenswrapper[4632]: I0313 10:08:15.637833 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"dc3a73e428d40f73e3034fb8b7d18fcfe7453c6673209f7b70847a9a508f90d4"} pod="openshift-console/downloads-7954f5f757-w2hhj" containerMessage="Container download-server failed liveness probe, will be restarted" Mar 13 
10:08:15 crc kubenswrapper[4632]: I0313 10:08:15.637886 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" containerID="cri-o://dc3a73e428d40f73e3034fb8b7d18fcfe7453c6673209f7b70847a9a508f90d4" gracePeriod=2 Mar 13 10:08:15 crc kubenswrapper[4632]: I0313 10:08:15.982742 4632 patch_prober.go:28] interesting pod/console-f9d7485db-zn7mn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Mar 13 10:08:15 crc kubenswrapper[4632]: I0313 10:08:15.983404 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zn7mn" podUID="f5a50074-5531-442f-a0e9-0578f15634c1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Mar 13 10:08:16 crc kubenswrapper[4632]: I0313 10:08:16.197375 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:16 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:16 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:16 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:16 crc kubenswrapper[4632]: I0313 10:08:16.197468 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:17 crc kubenswrapper[4632]: I0313 10:08:17.193585 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:17 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:17 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:17 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:17 crc kubenswrapper[4632]: I0313 10:08:17.193683 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:18 crc kubenswrapper[4632]: I0313 10:08:18.195540 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:18 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:18 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:18 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:18 crc kubenswrapper[4632]: I0313 10:08:18.195648 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:18 
crc kubenswrapper[4632]: I0313 10:08:18.374921 4632 generic.go:334] "Generic (PLEG): container finished" podID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerID="dc3a73e428d40f73e3034fb8b7d18fcfe7453c6673209f7b70847a9a508f90d4" exitCode=0 Mar 13 10:08:18 crc kubenswrapper[4632]: I0313 10:08:18.375021 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-w2hhj" event={"ID":"7d155f24-9bfc-4039-9981-10e7f724fa51","Type":"ContainerDied","Data":"dc3a73e428d40f73e3034fb8b7d18fcfe7453c6673209f7b70847a9a508f90d4"} Mar 13 10:08:19 crc kubenswrapper[4632]: I0313 10:08:19.060082 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-658cc96bdc-92bpr"] Mar 13 10:08:19 crc kubenswrapper[4632]: I0313 10:08:19.060651 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" podUID="c8ffa9c9-d11d-46b5-ac51-6d38a8639d98" containerName="controller-manager" containerID="cri-o://fe1e770193ae7e14a37be92defae5c64d043b458e01244272a22574e7b2e1f74" gracePeriod=30 Mar 13 10:08:19 crc kubenswrapper[4632]: I0313 10:08:19.098895 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp"] Mar 13 10:08:19 crc kubenswrapper[4632]: I0313 10:08:19.099397 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" podUID="f0f45c9a-e32c-420e-9106-fcb72dd59350" containerName="route-controller-manager" containerID="cri-o://687f3a3904ba4bdede4a24f019fead48648120a9d8e838727216b0ec43fcb3a2" gracePeriod=30 Mar 13 10:08:19 crc kubenswrapper[4632]: I0313 10:08:19.193702 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:19 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:19 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:19 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:19 crc kubenswrapper[4632]: I0313 10:08:19.193762 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:20 crc kubenswrapper[4632]: I0313 10:08:20.193688 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:20 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:20 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:20 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:20 crc kubenswrapper[4632]: I0313 10:08:20.193740 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:21 crc kubenswrapper[4632]: I0313 10:08:21.192851 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:21 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:21 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:21 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:21 crc kubenswrapper[4632]: I0313 10:08:21.192898 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:21 crc kubenswrapper[4632]: I0313 10:08:21.450156 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:08:22 crc kubenswrapper[4632]: I0313 10:08:22.193896 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:22 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:22 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:22 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:22 crc kubenswrapper[4632]: I0313 10:08:22.194222 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:22 crc kubenswrapper[4632]: I0313 10:08:22.469852 4632 generic.go:334] "Generic (PLEG): container finished" podID="f0f45c9a-e32c-420e-9106-fcb72dd59350" containerID="687f3a3904ba4bdede4a24f019fead48648120a9d8e838727216b0ec43fcb3a2" exitCode=0 Mar 13 10:08:22 crc kubenswrapper[4632]: I0313 10:08:22.469971 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" event={"ID":"f0f45c9a-e32c-420e-9106-fcb72dd59350","Type":"ContainerDied","Data":"687f3a3904ba4bdede4a24f019fead48648120a9d8e838727216b0ec43fcb3a2"} Mar 13 10:08:22 crc kubenswrapper[4632]: I0313 10:08:22.475915 4632 generic.go:334] "Generic (PLEG): container finished" podID="c8ffa9c9-d11d-46b5-ac51-6d38a8639d98" containerID="fe1e770193ae7e14a37be92defae5c64d043b458e01244272a22574e7b2e1f74" exitCode=0 Mar 13 10:08:22 crc kubenswrapper[4632]: I0313 10:08:22.476004 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" event={"ID":"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98","Type":"ContainerDied","Data":"fe1e770193ae7e14a37be92defae5c64d043b458e01244272a22574e7b2e1f74"} Mar 13 10:08:23 crc kubenswrapper[4632]: I0313 10:08:23.194372 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:23 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:23 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:23 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:23 crc kubenswrapper[4632]: I0313 10:08:23.194495 4632 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:23 crc kubenswrapper[4632]: I0313 10:08:23.195140 4632 patch_prober.go:28] interesting pod/controller-manager-658cc96bdc-92bpr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Mar 13 10:08:23 crc kubenswrapper[4632]: I0313 10:08:23.195261 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" podUID="c8ffa9c9-d11d-46b5-ac51-6d38a8639d98" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Mar 13 10:08:23 crc kubenswrapper[4632]: I0313 10:08:23.229162 4632 patch_prober.go:28] interesting pod/route-controller-manager-86d66b8bfb-9wzpp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Mar 13 10:08:23 crc kubenswrapper[4632]: I0313 10:08:23.229308 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" podUID="f0f45c9a-e32c-420e-9106-fcb72dd59350" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Mar 13 10:08:24 crc kubenswrapper[4632]: I0313 10:08:24.193580 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:08:24 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld Mar 13 10:08:24 crc kubenswrapper[4632]: [+]process-running ok Mar 13 10:08:24 crc kubenswrapper[4632]: healthz check failed Mar 13 10:08:24 crc kubenswrapper[4632]: I0313 10:08:24.193676 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:08:25 crc kubenswrapper[4632]: I0313 10:08:25.199803 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-t9vht" Mar 13 10:08:25 crc kubenswrapper[4632]: I0313 10:08:25.205716 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-t9vht" Mar 13 10:08:25 crc kubenswrapper[4632]: I0313 10:08:25.632001 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 13 10:08:25 crc kubenswrapper[4632]: I0313 10:08:25.632054 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 13 10:08:26 crc kubenswrapper[4632]: I0313 10:08:25.983328 4632 patch_prober.go:28] interesting pod/console-f9d7485db-zn7mn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Mar 13 10:08:26 crc kubenswrapper[4632]: I0313 10:08:25.983703 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zn7mn" podUID="f5a50074-5531-442f-a0e9-0578f15634c1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Mar 13 10:08:26 crc kubenswrapper[4632]: I0313 10:08:26.867154 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc" Mar 13 10:08:33 crc kubenswrapper[4632]: I0313 10:08:33.890904 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:33 crc kubenswrapper[4632]: I0313 10:08:33.901346 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:33 crc kubenswrapper[4632]: I0313 10:08:33.941526 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb"] Mar 13 10:08:33 crc kubenswrapper[4632]: E0313 10:08:33.941876 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa472cd7-9575-473f-b6b6-709b644a5ec4" containerName="pruner" Mar 13 10:08:33 crc kubenswrapper[4632]: I0313 10:08:33.941894 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa472cd7-9575-473f-b6b6-709b644a5ec4" containerName="pruner" Mar 13 10:08:33 crc kubenswrapper[4632]: E0313 10:08:33.941924 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ffa9c9-d11d-46b5-ac51-6d38a8639d98" containerName="controller-manager" Mar 13 10:08:33 crc kubenswrapper[4632]: I0313 10:08:33.941935 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ffa9c9-d11d-46b5-ac51-6d38a8639d98" containerName="controller-manager" Mar 13 10:08:33 crc kubenswrapper[4632]: E0313 10:08:33.941964 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="528d3aa9-10bf-4029-a4d2-85768264fde8" containerName="collect-profiles" Mar 13 10:08:33 crc kubenswrapper[4632]: I0313 10:08:33.941972 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="528d3aa9-10bf-4029-a4d2-85768264fde8" containerName="collect-profiles" Mar 13 10:08:33 crc kubenswrapper[4632]: E0313 10:08:33.941988 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4d2988-a460-407b-902a-aeb8eda619a1" containerName="pruner" Mar 13 10:08:33 crc kubenswrapper[4632]: I0313 10:08:33.941998 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4d2988-a460-407b-902a-aeb8eda619a1" containerName="pruner" Mar 13 10:08:33 crc kubenswrapper[4632]: E0313 10:08:33.942008 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f45c9a-e32c-420e-9106-fcb72dd59350" containerName="route-controller-manager" Mar 13 10:08:33 crc kubenswrapper[4632]: I0313 10:08:33.942015 4632 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f45c9a-e32c-420e-9106-fcb72dd59350" containerName="route-controller-manager" Mar 13 10:08:33 crc kubenswrapper[4632]: I0313 10:08:33.942146 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ffa9c9-d11d-46b5-ac51-6d38a8639d98" containerName="controller-manager" Mar 13 10:08:33 crc kubenswrapper[4632]: I0313 10:08:33.942173 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="528d3aa9-10bf-4029-a4d2-85768264fde8" containerName="collect-profiles" Mar 13 10:08:33 crc kubenswrapper[4632]: I0313 10:08:33.942191 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa472cd7-9575-473f-b6b6-709b644a5ec4" containerName="pruner" Mar 13 10:08:33 crc kubenswrapper[4632]: I0313 10:08:33.942204 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f45c9a-e32c-420e-9106-fcb72dd59350" containerName="route-controller-manager" Mar 13 10:08:33 crc kubenswrapper[4632]: I0313 10:08:33.942213 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a4d2988-a460-407b-902a-aeb8eda619a1" containerName="pruner" Mar 13 10:08:33 crc kubenswrapper[4632]: I0313 10:08:33.942736 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:08:33 crc kubenswrapper[4632]: I0313 10:08:33.953421 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb"] Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.084091 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0f45c9a-e32c-420e-9106-fcb72dd59350-serving-cert\") pod \"f0f45c9a-e32c-420e-9106-fcb72dd59350\" (UID: \"f0f45c9a-e32c-420e-9106-fcb72dd59350\") " Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.084433 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-client-ca\") pod \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.084556 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9pxp\" (UniqueName: \"kubernetes.io/projected/f0f45c9a-e32c-420e-9106-fcb72dd59350-kube-api-access-l9pxp\") pod \"f0f45c9a-e32c-420e-9106-fcb72dd59350\" (UID: \"f0f45c9a-e32c-420e-9106-fcb72dd59350\") " Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.084659 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-serving-cert\") pod \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.084772 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwf9m\" (UniqueName: \"kubernetes.io/projected/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-kube-api-access-hwf9m\") pod \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.084885 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-config\") pod \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.085156 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f45c9a-e32c-420e-9106-fcb72dd59350-config\") pod \"f0f45c9a-e32c-420e-9106-fcb72dd59350\" (UID: \"f0f45c9a-e32c-420e-9106-fcb72dd59350\") " Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.085318 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-proxy-ca-bundles\") pod \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\" (UID: \"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98\") " Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.085441 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0f45c9a-e32c-420e-9106-fcb72dd59350-client-ca\") pod \"f0f45c9a-e32c-420e-9106-fcb72dd59350\" (UID: \"f0f45c9a-e32c-420e-9106-fcb72dd59350\") " Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.085667 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0982dbd-62a1-47c5-8510-5045b9ca5785-client-ca\") pod \"route-controller-manager-5c44777cb6-dkmdb\" (UID: \"a0982dbd-62a1-47c5-8510-5045b9ca5785\") " pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.085862 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f45c9a-e32c-420e-9106-fcb72dd59350-config" (OuterVolumeSpecName: "config") pod "f0f45c9a-e32c-420e-9106-fcb72dd59350" (UID: "f0f45c9a-e32c-420e-9106-fcb72dd59350"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.085896 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-config" (OuterVolumeSpecName: "config") pod "c8ffa9c9-d11d-46b5-ac51-6d38a8639d98" (UID: "c8ffa9c9-d11d-46b5-ac51-6d38a8639d98"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.086435 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c8ffa9c9-d11d-46b5-ac51-6d38a8639d98" (UID: "c8ffa9c9-d11d-46b5-ac51-6d38a8639d98"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.086661 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f45c9a-e32c-420e-9106-fcb72dd59350-client-ca" (OuterVolumeSpecName: "client-ca") pod "f0f45c9a-e32c-420e-9106-fcb72dd59350" (UID: "f0f45c9a-e32c-420e-9106-fcb72dd59350"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.087161 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-client-ca" (OuterVolumeSpecName: "client-ca") pod "c8ffa9c9-d11d-46b5-ac51-6d38a8639d98" (UID: "c8ffa9c9-d11d-46b5-ac51-6d38a8639d98"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.087614 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0982dbd-62a1-47c5-8510-5045b9ca5785-config\") pod \"route-controller-manager-5c44777cb6-dkmdb\" (UID: \"a0982dbd-62a1-47c5-8510-5045b9ca5785\") " pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.088065 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0982dbd-62a1-47c5-8510-5045b9ca5785-serving-cert\") pod \"route-controller-manager-5c44777cb6-dkmdb\" (UID: \"a0982dbd-62a1-47c5-8510-5045b9ca5785\") " pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.088214 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cqqg\" (UniqueName: \"kubernetes.io/projected/a0982dbd-62a1-47c5-8510-5045b9ca5785-kube-api-access-2cqqg\") pod \"route-controller-manager-5c44777cb6-dkmdb\" (UID: \"a0982dbd-62a1-47c5-8510-5045b9ca5785\") " pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.088392 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.088479 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f45c9a-e32c-420e-9106-fcb72dd59350-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.088564 4632 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.088643 4632 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0f45c9a-e32c-420e-9106-fcb72dd59350-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.088763 4632 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.089906 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c8ffa9c9-d11d-46b5-ac51-6d38a8639d98" (UID: "c8ffa9c9-d11d-46b5-ac51-6d38a8639d98"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.090769 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f45c9a-e32c-420e-9106-fcb72dd59350-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f0f45c9a-e32c-420e-9106-fcb72dd59350" (UID: "f0f45c9a-e32c-420e-9106-fcb72dd59350"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.091161 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f45c9a-e32c-420e-9106-fcb72dd59350-kube-api-access-l9pxp" (OuterVolumeSpecName: "kube-api-access-l9pxp") pod "f0f45c9a-e32c-420e-9106-fcb72dd59350" (UID: "f0f45c9a-e32c-420e-9106-fcb72dd59350"). InnerVolumeSpecName "kube-api-access-l9pxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.114812 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-kube-api-access-hwf9m" (OuterVolumeSpecName: "kube-api-access-hwf9m") pod "c8ffa9c9-d11d-46b5-ac51-6d38a8639d98" (UID: "c8ffa9c9-d11d-46b5-ac51-6d38a8639d98"). InnerVolumeSpecName "kube-api-access-hwf9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.189235 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cqqg\" (UniqueName: \"kubernetes.io/projected/a0982dbd-62a1-47c5-8510-5045b9ca5785-kube-api-access-2cqqg\") pod \"route-controller-manager-5c44777cb6-dkmdb\" (UID: \"a0982dbd-62a1-47c5-8510-5045b9ca5785\") " pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.189332 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0982dbd-62a1-47c5-8510-5045b9ca5785-client-ca\") pod \"route-controller-manager-5c44777cb6-dkmdb\" (UID: \"a0982dbd-62a1-47c5-8510-5045b9ca5785\") " pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.189371 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0982dbd-62a1-47c5-8510-5045b9ca5785-config\") pod \"route-controller-manager-5c44777cb6-dkmdb\" (UID: \"a0982dbd-62a1-47c5-8510-5045b9ca5785\") " pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.189395 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0982dbd-62a1-47c5-8510-5045b9ca5785-serving-cert\") pod \"route-controller-manager-5c44777cb6-dkmdb\" (UID: \"a0982dbd-62a1-47c5-8510-5045b9ca5785\") " pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.189428 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0f45c9a-e32c-420e-9106-fcb72dd59350-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.189439 4632 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-l9pxp\" (UniqueName: \"kubernetes.io/projected/f0f45c9a-e32c-420e-9106-fcb72dd59350-kube-api-access-l9pxp\") on node \"crc\" DevicePath \"\"" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.189449 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.189458 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwf9m\" (UniqueName: \"kubernetes.io/projected/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98-kube-api-access-hwf9m\") on node \"crc\" DevicePath \"\"" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.191212 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0982dbd-62a1-47c5-8510-5045b9ca5785-config\") pod \"route-controller-manager-5c44777cb6-dkmdb\" (UID: \"a0982dbd-62a1-47c5-8510-5045b9ca5785\") " pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.194922 4632 patch_prober.go:28] interesting pod/controller-manager-658cc96bdc-92bpr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: i/o timeout" start-of-body= Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.194999 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" podUID="c8ffa9c9-d11d-46b5-ac51-6d38a8639d98" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: i/o timeout" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.203212 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0982dbd-62a1-47c5-8510-5045b9ca5785-serving-cert\") pod \"route-controller-manager-5c44777cb6-dkmdb\" (UID: \"a0982dbd-62a1-47c5-8510-5045b9ca5785\") " pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.203858 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0982dbd-62a1-47c5-8510-5045b9ca5785-client-ca\") pod \"route-controller-manager-5c44777cb6-dkmdb\" (UID: \"a0982dbd-62a1-47c5-8510-5045b9ca5785\") " pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.209438 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cqqg\" (UniqueName: \"kubernetes.io/projected/a0982dbd-62a1-47c5-8510-5045b9ca5785-kube-api-access-2cqqg\") pod \"route-controller-manager-5c44777cb6-dkmdb\" (UID: \"a0982dbd-62a1-47c5-8510-5045b9ca5785\") " pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.228452 4632 patch_prober.go:28] interesting pod/route-controller-manager-86d66b8bfb-9wzpp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.228527 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" podUID="f0f45c9a-e32c-420e-9106-fcb72dd59350" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.274596 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.583270 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.583528 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp" event={"ID":"f0f45c9a-e32c-420e-9106-fcb72dd59350","Type":"ContainerDied","Data":"070e0df69129dffbb7be6eacfbcf901c3d12a23e3166a3f87eca493eaf3d27d9"} Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.583570 4632 scope.go:117] "RemoveContainer" containerID="687f3a3904ba4bdede4a24f019fead48648120a9d8e838727216b0ec43fcb3a2" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.588379 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" event={"ID":"c8ffa9c9-d11d-46b5-ac51-6d38a8639d98","Type":"ContainerDied","Data":"94bf0543076d266034298c32f3e17bb1e05e21aec66de3f477e64011186c779f"} Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.588444 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-658cc96bdc-92bpr" Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.628134 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-658cc96bdc-92bpr"] Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.635712 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-658cc96bdc-92bpr"] Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.641396 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp"] Mar 13 10:08:34 crc kubenswrapper[4632]: I0313 10:08:34.644026 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d66b8bfb-9wzpp"] Mar 13 10:08:35 crc kubenswrapper[4632]: I0313 10:08:35.632905 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 13 10:08:35 crc kubenswrapper[4632]: I0313 10:08:35.633227 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 13 10:08:35 crc kubenswrapper[4632]: E0313 10:08:35.636127 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 13 10:08:35 crc kubenswrapper[4632]: E0313 10:08:35.636293 4632 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:08:35 crc kubenswrapper[4632]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 13 10:08:35 crc kubenswrapper[4632]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sd756,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29556606-mkrp2_openshift-infra(c822257d-9d2f-4b6f-87de-131de5cd0efe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 13 10:08:35 crc kubenswrapper[4632]: > logger="UnhandledError" Mar 13 10:08:35 crc kubenswrapper[4632]: E0313 10:08:35.637653 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = 
copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29556606-mkrp2" podUID="c822257d-9d2f-4b6f-87de-131de5cd0efe" Mar 13 10:08:35 crc kubenswrapper[4632]: I0313 10:08:35.994259 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:08:35 crc kubenswrapper[4632]: I0313 10:08:35.998146 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.056963 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8ffa9c9-d11d-46b5-ac51-6d38a8639d98" path="/var/lib/kubelet/pods/c8ffa9c9-d11d-46b5-ac51-6d38a8639d98/volumes" Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.057994 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0f45c9a-e32c-420e-9106-fcb72dd59350" path="/var/lib/kubelet/pods/f0f45c9a-e32c-420e-9106-fcb72dd59350/volumes" Mar 13 10:08:36 crc kubenswrapper[4632]: E0313 10:08:36.607679 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29556606-mkrp2" podUID="c822257d-9d2f-4b6f-87de-131de5cd0efe" Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.865426 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-747c9765b-lqcx6"] Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.866473 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.874257 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.881716 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.883912 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.883985 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.885909 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.889629 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.892680 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-747c9765b-lqcx6"] Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.897483 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.937349 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-config\") pod \"controller-manager-747c9765b-lqcx6\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.937468 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-proxy-ca-bundles\") pod \"controller-manager-747c9765b-lqcx6\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.937501 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2prq4\" (UniqueName: \"kubernetes.io/projected/0816f595-2f7f-425a-9a6b-1022e2a4ca04-kube-api-access-2prq4\") pod \"controller-manager-747c9765b-lqcx6\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.937577 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-client-ca\") pod \"controller-manager-747c9765b-lqcx6\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:36 crc kubenswrapper[4632]: I0313 10:08:36.937630 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0816f595-2f7f-425a-9a6b-1022e2a4ca04-serving-cert\") pod \"controller-manager-747c9765b-lqcx6\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:37 crc kubenswrapper[4632]: I0313 10:08:37.039606 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-config\") pod \"controller-manager-747c9765b-lqcx6\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:37 crc kubenswrapper[4632]: I0313 10:08:37.039736 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-proxy-ca-bundles\") pod \"controller-manager-747c9765b-lqcx6\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:37 crc kubenswrapper[4632]: I0313 10:08:37.039776 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2prq4\" (UniqueName: \"kubernetes.io/projected/0816f595-2f7f-425a-9a6b-1022e2a4ca04-kube-api-access-2prq4\") pod \"controller-manager-747c9765b-lqcx6\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:37 crc kubenswrapper[4632]: I0313 10:08:37.039822 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-client-ca\") pod \"controller-manager-747c9765b-lqcx6\" (UID: 
\"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:37 crc kubenswrapper[4632]: I0313 10:08:37.039859 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0816f595-2f7f-425a-9a6b-1022e2a4ca04-serving-cert\") pod \"controller-manager-747c9765b-lqcx6\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:37 crc kubenswrapper[4632]: I0313 10:08:37.048602 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-client-ca\") pod \"controller-manager-747c9765b-lqcx6\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:37 crc kubenswrapper[4632]: I0313 10:08:37.050594 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-proxy-ca-bundles\") pod \"controller-manager-747c9765b-lqcx6\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:37 crc kubenswrapper[4632]: I0313 10:08:37.052229 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-config\") pod \"controller-manager-747c9765b-lqcx6\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:37 crc kubenswrapper[4632]: I0313 10:08:37.077073 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0816f595-2f7f-425a-9a6b-1022e2a4ca04-serving-cert\") pod \"controller-manager-747c9765b-lqcx6\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:37 crc kubenswrapper[4632]: I0313 10:08:37.085594 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2prq4\" (UniqueName: \"kubernetes.io/projected/0816f595-2f7f-425a-9a6b-1022e2a4ca04-kube-api-access-2prq4\") pod \"controller-manager-747c9765b-lqcx6\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:37 crc kubenswrapper[4632]: I0313 10:08:37.190570 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:08:39 crc kubenswrapper[4632]: I0313 10:08:39.023879 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-747c9765b-lqcx6"] Mar 13 10:08:39 crc kubenswrapper[4632]: I0313 10:08:39.059078 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb"] Mar 13 10:08:39 crc kubenswrapper[4632]: I0313 10:08:39.404555 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 13 10:08:39 crc kubenswrapper[4632]: I0313 10:08:39.406081 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 10:08:39 crc kubenswrapper[4632]: I0313 10:08:39.410991 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 13 10:08:39 crc kubenswrapper[4632]: I0313 10:08:39.412409 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 13 10:08:39 crc kubenswrapper[4632]: I0313 10:08:39.415447 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 13 10:08:39 crc kubenswrapper[4632]: I0313 10:08:39.554003 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a93bf50-5608-4b34-aea5-2f027d469fe7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7a93bf50-5608-4b34-aea5-2f027d469fe7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 10:08:39 crc kubenswrapper[4632]: I0313 10:08:39.554312 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a93bf50-5608-4b34-aea5-2f027d469fe7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7a93bf50-5608-4b34-aea5-2f027d469fe7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 10:08:39 crc kubenswrapper[4632]: I0313 10:08:39.654802 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a93bf50-5608-4b34-aea5-2f027d469fe7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7a93bf50-5608-4b34-aea5-2f027d469fe7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 10:08:39 crc kubenswrapper[4632]: I0313 10:08:39.654977 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a93bf50-5608-4b34-aea5-2f027d469fe7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7a93bf50-5608-4b34-aea5-2f027d469fe7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 10:08:39 crc kubenswrapper[4632]: I0313 10:08:39.654987 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a93bf50-5608-4b34-aea5-2f027d469fe7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7a93bf50-5608-4b34-aea5-2f027d469fe7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 10:08:39 crc kubenswrapper[4632]: I0313 10:08:39.678073 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a93bf50-5608-4b34-aea5-2f027d469fe7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7a93bf50-5608-4b34-aea5-2f027d469fe7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 10:08:39 crc kubenswrapper[4632]: I0313 10:08:39.729826 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 10:08:40 crc kubenswrapper[4632]: I0313 10:08:40.461152 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:08:40 crc kubenswrapper[4632]: I0313 10:08:40.461227 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:08:40 crc kubenswrapper[4632]: I0313 10:08:40.461286 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:08:40 crc kubenswrapper[4632]: I0313 10:08:40.461998 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 10:08:40 crc kubenswrapper[4632]: I0313 10:08:40.462058 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7" gracePeriod=600 Mar 13 10:08:40 crc kubenswrapper[4632]: I0313 10:08:40.646339 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7" exitCode=0 Mar 13 10:08:40 crc kubenswrapper[4632]: I0313 10:08:40.646680 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7"} Mar 13 10:08:43 crc kubenswrapper[4632]: E0313 10:08:43.287443 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 13 10:08:43 crc kubenswrapper[4632]: E0313 10:08:43.287742 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kh6g9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xd455_openshift-marketplace(cd6e3c73-fbc1-4213-bbef-02dd2b0587f8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 13 10:08:43 crc kubenswrapper[4632]: E0313 10:08:43.288999 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xd455" podUID="cd6e3c73-fbc1-4213-bbef-02dd2b0587f8" Mar 13 10:08:43 crc kubenswrapper[4632]: E0313 10:08:43.325135 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 13 10:08:43 crc kubenswrapper[4632]: E0313 10:08:43.325393 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5kzjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jvh86_openshift-marketplace(bd46ae04-0610-4aa5-9385-dd45de66c5dd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 13 10:08:43 crc kubenswrapper[4632]: E0313 10:08:43.328513 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jvh86" podUID="bd46ae04-0610-4aa5-9385-dd45de66c5dd" Mar 13 10:08:43 crc kubenswrapper[4632]: I0313 10:08:43.610116 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 13 10:08:43 crc kubenswrapper[4632]: I0313 10:08:43.611073 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 13 10:08:43 crc kubenswrapper[4632]: I0313 10:08:43.623980 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 13 10:08:43 crc kubenswrapper[4632]: I0313 10:08:43.651286 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc39d207-84a2-4a28-9296-bed684aa308d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dc39d207-84a2-4a28-9296-bed684aa308d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 10:08:43 crc kubenswrapper[4632]: I0313 10:08:43.651343 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc39d207-84a2-4a28-9296-bed684aa308d-var-lock\") pod \"installer-9-crc\" (UID: \"dc39d207-84a2-4a28-9296-bed684aa308d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 10:08:43 crc kubenswrapper[4632]: I0313 10:08:43.651391 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc39d207-84a2-4a28-9296-bed684aa308d-kube-api-access\") pod \"installer-9-crc\" (UID: \"dc39d207-84a2-4a28-9296-bed684aa308d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 10:08:43 crc kubenswrapper[4632]: I0313 10:08:43.752832 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc39d207-84a2-4a28-9296-bed684aa308d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dc39d207-84a2-4a28-9296-bed684aa308d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 10:08:43 crc kubenswrapper[4632]: I0313 10:08:43.752887 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc39d207-84a2-4a28-9296-bed684aa308d-var-lock\") pod \"installer-9-crc\" (UID: \"dc39d207-84a2-4a28-9296-bed684aa308d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 10:08:43 crc kubenswrapper[4632]: I0313 10:08:43.752969 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc39d207-84a2-4a28-9296-bed684aa308d-kube-api-access\") pod \"installer-9-crc\" (UID: \"dc39d207-84a2-4a28-9296-bed684aa308d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 10:08:43 crc kubenswrapper[4632]: I0313 10:08:43.753008 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc39d207-84a2-4a28-9296-bed684aa308d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dc39d207-84a2-4a28-9296-bed684aa308d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 10:08:43 crc kubenswrapper[4632]: I0313 10:08:43.753092 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc39d207-84a2-4a28-9296-bed684aa308d-var-lock\") pod \"installer-9-crc\" (UID: \"dc39d207-84a2-4a28-9296-bed684aa308d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 10:08:43 crc kubenswrapper[4632]: I0313 10:08:43.772768 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc39d207-84a2-4a28-9296-bed684aa308d-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"dc39d207-84a2-4a28-9296-bed684aa308d\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 10:08:43 crc kubenswrapper[4632]: I0313 10:08:43.941692 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 13 10:08:45 crc kubenswrapper[4632]: I0313 10:08:45.633189 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 13 10:08:45 crc kubenswrapper[4632]: I0313 10:08:45.633244 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 13 10:08:48 crc kubenswrapper[4632]: E0313 10:08:48.663136 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 13 10:08:48 crc kubenswrapper[4632]: E0313 10:08:48.664000 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hfz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xr5l9_openshift-marketplace(87965e39-b879-4e26-9c8b-b78068c52aa0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 13 10:08:48 crc kubenswrapper[4632]: E0313 10:08:48.665191 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xr5l9" 
podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" Mar 13 10:08:52 crc kubenswrapper[4632]: E0313 10:08:52.258780 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 13 10:08:52 crc kubenswrapper[4632]: E0313 10:08:52.260272 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wlsv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-p8wjg_openshift-marketplace(b11a7dff-bf08-44c3-b4f4-923119c13717): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 13 10:08:52 crc kubenswrapper[4632]: E0313 10:08:52.263281 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-p8wjg" podUID="b11a7dff-bf08-44c3-b4f4-923119c13717" Mar 13 10:08:52 crc kubenswrapper[4632]: E0313 10:08:52.320162 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 13 10:08:52 crc kubenswrapper[4632]: E0313 10:08:52.320337 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfdk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-z2gc7_openshift-marketplace(a110c276-8516-4f9e-a6af-d6837cd0f387): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 13 10:08:52 crc kubenswrapper[4632]: E0313 10:08:52.321631 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-z2gc7" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" Mar 13 10:08:52 crc kubenswrapper[4632]: E0313 10:08:52.508207 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 13 10:08:52 crc kubenswrapper[4632]: E0313 10:08:52.508332 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdp56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8z668_openshift-marketplace(9845f384-2720-4d6a-aa73-1e66e30f7c2c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 13 10:08:52 crc kubenswrapper[4632]: E0313 10:08:52.509950 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8z668" podUID="9845f384-2720-4d6a-aa73-1e66e30f7c2c" Mar 13 10:08:54 crc kubenswrapper[4632]: I0313 10:08:54.813880 4632 ???:1] "http: TLS handshake error from 192.168.126.11:40032: no serving certificate available for the kubelet" Mar 13 10:08:55 crc kubenswrapper[4632]: I0313 10:08:55.632439 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 13 10:08:55 crc kubenswrapper[4632]: I0313 10:08:55.632551 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 13 10:08:56 crc kubenswrapper[4632]: I0313 10:08:56.254608 4632 scope.go:117] "RemoveContainer" containerID="fe1e770193ae7e14a37be92defae5c64d043b458e01244272a22574e7b2e1f74" Mar 13 10:08:56 crc kubenswrapper[4632]: E0313 10:08:56.275960 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8z668" podUID="9845f384-2720-4d6a-aa73-1e66e30f7c2c" Mar 13 10:08:56 crc kubenswrapper[4632]: E0313 10:08:56.276047 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xr5l9" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" Mar 13 10:08:56 crc kubenswrapper[4632]: E0313 10:08:56.276110 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-p8wjg" podUID="b11a7dff-bf08-44c3-b4f4-923119c13717" Mar 13 10:08:56 crc kubenswrapper[4632]: E0313 10:08:56.276162 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-z2gc7" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" Mar 13 10:08:56 crc kubenswrapper[4632]: I0313 10:08:56.796630 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 13 10:08:56 crc kubenswrapper[4632]: I0313 10:08:56.799379 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb"] Mar 13 10:08:56 crc kubenswrapper[4632]: W0313 10:08:56.806678 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0982dbd_62a1_47c5_8510_5045b9ca5785.slice/crio-e2910cae586a594472f36055ddd013b1c782ed1e61c6b7e0b1e88de17b0d81e9 WatchSource:0}: Error finding container e2910cae586a594472f36055ddd013b1c782ed1e61c6b7e0b1e88de17b0d81e9: Status 404 returned error can't find the container with id e2910cae586a594472f36055ddd013b1c782ed1e61c6b7e0b1e88de17b0d81e9 Mar 13 10:08:56 crc kubenswrapper[4632]: W0313 10:08:56.812414 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7a93bf50_5608_4b34_aea5_2f027d469fe7.slice/crio-8ca4d20a3432f315af224e18c181a59d62f280063326feb2da6b42b782268a63 WatchSource:0}: Error finding container 8ca4d20a3432f315af224e18c181a59d62f280063326feb2da6b42b782268a63: Status 404 returned error can't find the container with id 8ca4d20a3432f315af224e18c181a59d62f280063326feb2da6b42b782268a63 Mar 13 10:08:56 crc kubenswrapper[4632]: I0313 10:08:56.877790 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 13 10:08:56 crc kubenswrapper[4632]: I0313 10:08:56.883762 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-747c9765b-lqcx6"] Mar 13 10:08:57 crc kubenswrapper[4632]: I0313 10:08:57.758176 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dc39d207-84a2-4a28-9296-bed684aa308d","Type":"ContainerStarted","Data":"c463b62ee1a6928ceb028fe480183c5ca7bb846ec47d4163fa232376e05db524"} Mar 13 10:08:57 crc kubenswrapper[4632]: I0313 10:08:57.758850 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dc39d207-84a2-4a28-9296-bed684aa308d","Type":"ContainerStarted","Data":"7017997794d887d37a83222e44995b72d5d076c42028b8e1498fdb1f2cb4d188"} Mar 13 10:08:57 crc kubenswrapper[4632]: I0313 10:08:57.762521 4632 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-console/downloads-7954f5f757-w2hhj" event={"ID":"7d155f24-9bfc-4039-9981-10e7f724fa51","Type":"ContainerStarted","Data":"8267a2aa0ccf8bef0a1fb4ed1acbdc94b5e3909c757e1794b32f16ebf1d938e7"} Mar 13 10:08:57 crc kubenswrapper[4632]: I0313 10:08:57.762590 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-w2hhj" Mar 13 10:08:57 crc kubenswrapper[4632]: I0313 10:08:57.763339 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 13 10:08:57 crc kubenswrapper[4632]: I0313 10:08:57.763390 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 13 10:08:57 crc kubenswrapper[4632]: I0313 10:08:57.763855 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7a93bf50-5608-4b34-aea5-2f027d469fe7","Type":"ContainerStarted","Data":"9397ca1f1b655dbac921f1df09ecbfaa16d86c267f0a805db268730c6e1431c8"} Mar 13 10:08:57 crc kubenswrapper[4632]: I0313 10:08:57.763908 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7a93bf50-5608-4b34-aea5-2f027d469fe7","Type":"ContainerStarted","Data":"8ca4d20a3432f315af224e18c181a59d62f280063326feb2da6b42b782268a63"} Mar 13 10:08:57 crc kubenswrapper[4632]: I0313 10:08:57.767243 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556608-9kzfk" event={"ID":"37ab6711-478f-4cc7-b9a4-c9baa126b1a3","Type":"ContainerStarted","Data":"24d957ae4862987ed76c21db8796ae914a7d2beca83397bc3f90816dc051c956"} Mar 13 10:08:57 crc kubenswrapper[4632]: I0313 10:08:57.772190 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" event={"ID":"a0982dbd-62a1-47c5-8510-5045b9ca5785","Type":"ContainerStarted","Data":"a4aeba6501ad544549300fc2c4204c4a5cf7f1d1edc84405ce7d7b1974c966cc"} Mar 13 10:08:57 crc kubenswrapper[4632]: I0313 10:08:57.772225 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" event={"ID":"a0982dbd-62a1-47c5-8510-5045b9ca5785","Type":"ContainerStarted","Data":"e2910cae586a594472f36055ddd013b1c782ed1e61c6b7e0b1e88de17b0d81e9"} Mar 13 10:08:57 crc kubenswrapper[4632]: I0313 10:08:57.774464 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" event={"ID":"0816f595-2f7f-425a-9a6b-1022e2a4ca04","Type":"ContainerStarted","Data":"83de0881072cb52ab7a7fbd2d8ef18cbb3eb4eb7897fd1301bfd2cbf304913b7"} Mar 13 10:08:57 crc kubenswrapper[4632]: I0313 10:08:57.774584 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" event={"ID":"0816f595-2f7f-425a-9a6b-1022e2a4ca04","Type":"ContainerStarted","Data":"650730d84aa4384d18f0228647070b36709f320f05d4d8ef5c14d5e680d6b8ca"} Mar 13 10:08:57 crc kubenswrapper[4632]: I0313 10:08:57.779423 4632 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"e4989d70178427347867288c3fc7b62a339fa6ecdddde954f719a53f3db7fe17"} Mar 13 10:08:58 crc kubenswrapper[4632]: E0313 10:08:58.284074 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 13 10:08:58 crc kubenswrapper[4632]: E0313 10:08:58.284409 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5jx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-txp2w_openshift-marketplace(f0cd0b7e-eded-4a51-8b1e-e67b9381bc87): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 13 10:08:58 crc kubenswrapper[4632]: E0313 10:08:58.285504 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-txp2w" podUID="f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" Mar 13 10:08:58 crc kubenswrapper[4632]: I0313 10:08:58.791393 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" podUID="a0982dbd-62a1-47c5-8510-5045b9ca5785" containerName="route-controller-manager" containerID="cri-o://a4aeba6501ad544549300fc2c4204c4a5cf7f1d1edc84405ce7d7b1974c966cc" gracePeriod=30 Mar 13 10:08:58 crc kubenswrapper[4632]: I0313 10:08:58.792517 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 
10.217.0.12:8080: connect: connection refused" start-of-body= Mar 13 10:08:58 crc kubenswrapper[4632]: I0313 10:08:58.792571 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 13 10:08:58 crc kubenswrapper[4632]: I0313 10:08:58.792729 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:08:58 crc kubenswrapper[4632]: I0313 10:08:58.793784 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" podUID="0816f595-2f7f-425a-9a6b-1022e2a4ca04" containerName="controller-manager" containerID="cri-o://83de0881072cb52ab7a7fbd2d8ef18cbb3eb4eb7897fd1301bfd2cbf304913b7" gracePeriod=30 Mar 13 10:08:58 crc kubenswrapper[4632]: E0313 10:08:58.798126 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-txp2w" podUID="f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" Mar 13 10:08:58 crc kubenswrapper[4632]: I0313 10:08:58.807376 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:08:58 crc kubenswrapper[4632]: I0313 10:08:58.859063 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" podStartSLOduration=39.859041263 podStartE2EDuration="39.859041263s" podCreationTimestamp="2026-03-13 10:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:08:58.826250362 +0000 UTC m=+312.848780495" watchObservedRunningTime="2026-03-13 10:08:58.859041263 +0000 UTC m=+312.881571396" Mar 13 10:08:58 crc kubenswrapper[4632]: I0313 10:08:58.892651 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=15.892632546 podStartE2EDuration="15.892632546s" podCreationTimestamp="2026-03-13 10:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:08:58.86364358 +0000 UTC m=+312.886173713" watchObservedRunningTime="2026-03-13 10:08:58.892632546 +0000 UTC m=+312.915162679" Mar 13 10:08:58 crc kubenswrapper[4632]: I0313 10:08:58.910736 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=19.910718974 podStartE2EDuration="19.910718974s" podCreationTimestamp="2026-03-13 10:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:08:58.907849035 +0000 UTC m=+312.930379168" watchObservedRunningTime="2026-03-13 10:08:58.910718974 +0000 UTC m=+312.933249107" Mar 13 10:08:58 crc kubenswrapper[4632]: I0313 10:08:58.938399 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29556608-9kzfk" podStartSLOduration=4.828758615 podStartE2EDuration="58.938370624s" podCreationTimestamp="2026-03-13 10:08:00 +0000 UTC" firstStartedPulling="2026-03-13 10:08:02.171472122 +0000 UTC m=+256.194002255" lastFinishedPulling="2026-03-13 10:08:56.281084131 +0000 UTC m=+310.303614264" observedRunningTime="2026-03-13 10:08:58.936844462 +0000 UTC m=+312.959374595" watchObservedRunningTime="2026-03-13 10:08:58.938370624 +0000 UTC m=+312.960900777" Mar 13 10:08:58 crc kubenswrapper[4632]: I0313 10:08:58.968210 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" podStartSLOduration=39.968188593 podStartE2EDuration="39.968188593s" podCreationTimestamp="2026-03-13 10:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:08:58.964833421 +0000 UTC m=+312.987363554" watchObservedRunningTime="2026-03-13 10:08:58.968188593 +0000 UTC m=+312.990718736" Mar 13 10:08:59 crc kubenswrapper[4632]: I0313 10:08:59.158660 4632 csr.go:261] certificate signing request csr-drwfx is approved, waiting to be issued Mar 13 10:08:59 crc kubenswrapper[4632]: I0313 10:08:59.178184 4632 csr.go:257] certificate signing request csr-drwfx is issued Mar 13 10:08:59 crc kubenswrapper[4632]: E0313 10:08:59.538092 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 13 10:08:59 crc kubenswrapper[4632]: E0313 10:08:59.538273 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bc6ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-t6bkt_openshift-marketplace(668c4640-0e5f-4c98-8b6e-dbdffdbfe14e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" logger="UnhandledError" Mar 13 10:08:59 crc kubenswrapper[4632]: E0313 10:08:59.539471 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-t6bkt" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" Mar 13 10:08:59 crc kubenswrapper[4632]: I0313 10:08:59.824370 4632 generic.go:334] "Generic (PLEG): container finished" podID="7a93bf50-5608-4b34-aea5-2f027d469fe7" containerID="9397ca1f1b655dbac921f1df09ecbfaa16d86c267f0a805db268730c6e1431c8" exitCode=0 Mar 13 10:08:59 crc kubenswrapper[4632]: I0313 10:08:59.824450 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7a93bf50-5608-4b34-aea5-2f027d469fe7","Type":"ContainerDied","Data":"9397ca1f1b655dbac921f1df09ecbfaa16d86c267f0a805db268730c6e1431c8"} Mar 13 10:08:59 crc kubenswrapper[4632]: I0313 10:08:59.829994 4632 generic.go:334] "Generic (PLEG): container finished" podID="37ab6711-478f-4cc7-b9a4-c9baa126b1a3" containerID="24d957ae4862987ed76c21db8796ae914a7d2beca83397bc3f90816dc051c956" exitCode=0 Mar 13 10:08:59 crc kubenswrapper[4632]: I0313 10:08:59.830151 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556608-9kzfk" event={"ID":"37ab6711-478f-4cc7-b9a4-c9baa126b1a3","Type":"ContainerDied","Data":"24d957ae4862987ed76c21db8796ae914a7d2beca83397bc3f90816dc051c956"} Mar 13 10:08:59 crc kubenswrapper[4632]: I0313 10:08:59.849501 4632 generic.go:334] "Generic (PLEG): container finished" podID="0816f595-2f7f-425a-9a6b-1022e2a4ca04" containerID="83de0881072cb52ab7a7fbd2d8ef18cbb3eb4eb7897fd1301bfd2cbf304913b7" exitCode=0 Mar 13 10:08:59 crc kubenswrapper[4632]: I0313 10:08:59.849659 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" event={"ID":"0816f595-2f7f-425a-9a6b-1022e2a4ca04","Type":"ContainerDied","Data":"83de0881072cb52ab7a7fbd2d8ef18cbb3eb4eb7897fd1301bfd2cbf304913b7"} Mar 13 10:08:59 crc kubenswrapper[4632]: I0313 10:08:59.860318 4632 generic.go:334] "Generic (PLEG): container finished" podID="a0982dbd-62a1-47c5-8510-5045b9ca5785" containerID="a4aeba6501ad544549300fc2c4204c4a5cf7f1d1edc84405ce7d7b1974c966cc" exitCode=0 Mar 13 10:08:59 crc kubenswrapper[4632]: I0313 10:08:59.861192 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" event={"ID":"a0982dbd-62a1-47c5-8510-5045b9ca5785","Type":"ContainerDied","Data":"a4aeba6501ad544549300fc2c4204c4a5cf7f1d1edc84405ce7d7b1974c966cc"} Mar 13 10:08:59 crc kubenswrapper[4632]: E0313 10:08:59.865696 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-t6bkt" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" Mar 13 10:08:59 crc kubenswrapper[4632]: I0313 10:08:59.928699 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.101887 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0982dbd-62a1-47c5-8510-5045b9ca5785-config\") pod \"a0982dbd-62a1-47c5-8510-5045b9ca5785\" (UID: \"a0982dbd-62a1-47c5-8510-5045b9ca5785\") " Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.101977 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cqqg\" (UniqueName: \"kubernetes.io/projected/a0982dbd-62a1-47c5-8510-5045b9ca5785-kube-api-access-2cqqg\") pod \"a0982dbd-62a1-47c5-8510-5045b9ca5785\" (UID: \"a0982dbd-62a1-47c5-8510-5045b9ca5785\") " Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.102126 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0982dbd-62a1-47c5-8510-5045b9ca5785-client-ca\") pod \"a0982dbd-62a1-47c5-8510-5045b9ca5785\" (UID: \"a0982dbd-62a1-47c5-8510-5045b9ca5785\") " Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.103157 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0982dbd-62a1-47c5-8510-5045b9ca5785-client-ca" (OuterVolumeSpecName: "client-ca") pod "a0982dbd-62a1-47c5-8510-5045b9ca5785" (UID: "a0982dbd-62a1-47c5-8510-5045b9ca5785"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.103193 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0982dbd-62a1-47c5-8510-5045b9ca5785-config" (OuterVolumeSpecName: "config") pod "a0982dbd-62a1-47c5-8510-5045b9ca5785" (UID: "a0982dbd-62a1-47c5-8510-5045b9ca5785"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.103220 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0982dbd-62a1-47c5-8510-5045b9ca5785-serving-cert\") pod \"a0982dbd-62a1-47c5-8510-5045b9ca5785\" (UID: \"a0982dbd-62a1-47c5-8510-5045b9ca5785\") " Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.103704 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0982dbd-62a1-47c5-8510-5045b9ca5785-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.103726 4632 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0982dbd-62a1-47c5-8510-5045b9ca5785-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.109620 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0982dbd-62a1-47c5-8510-5045b9ca5785-kube-api-access-2cqqg" (OuterVolumeSpecName: "kube-api-access-2cqqg") pod "a0982dbd-62a1-47c5-8510-5045b9ca5785" (UID: "a0982dbd-62a1-47c5-8510-5045b9ca5785"). InnerVolumeSpecName "kube-api-access-2cqqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.121156 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0982dbd-62a1-47c5-8510-5045b9ca5785-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0982dbd-62a1-47c5-8510-5045b9ca5785" (UID: "a0982dbd-62a1-47c5-8510-5045b9ca5785"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.179300 4632 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-18 01:06:32.656724867 +0000 UTC Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.179360 4632 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6710h57m32.477368851s for next certificate rotation Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.204918 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cqqg\" (UniqueName: \"kubernetes.io/projected/a0982dbd-62a1-47c5-8510-5045b9ca5785-kube-api-access-2cqqg\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.205017 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0982dbd-62a1-47c5-8510-5045b9ca5785-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.872556 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" event={"ID":"0816f595-2f7f-425a-9a6b-1022e2a4ca04","Type":"ContainerDied","Data":"650730d84aa4384d18f0228647070b36709f320f05d4d8ef5c14d5e680d6b8ca"} Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.873074 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="650730d84aa4384d18f0228647070b36709f320f05d4d8ef5c14d5e680d6b8ca" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.874541 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" event={"ID":"a0982dbd-62a1-47c5-8510-5045b9ca5785","Type":"ContainerDied","Data":"e2910cae586a594472f36055ddd013b1c782ed1e61c6b7e0b1e88de17b0d81e9"} Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.874624 4632 scope.go:117] "RemoveContainer" containerID="a4aeba6501ad544549300fc2c4204c4a5cf7f1d1edc84405ce7d7b1974c966cc" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.874687 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.874837 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.960767 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb"] Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.972761 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c44777cb6-dkmdb"] Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.978481 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-568f8cc7b8-srcn5"] Mar 13 10:09:00 crc kubenswrapper[4632]: E0313 10:09:00.978850 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0982dbd-62a1-47c5-8510-5045b9ca5785" containerName="route-controller-manager" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.978981 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0982dbd-62a1-47c5-8510-5045b9ca5785" containerName="route-controller-manager" Mar 13 10:09:00 crc kubenswrapper[4632]: E0313 10:09:00.979056 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0816f595-2f7f-425a-9a6b-1022e2a4ca04" containerName="controller-manager" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.979199 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="0816f595-2f7f-425a-9a6b-1022e2a4ca04" containerName="controller-manager" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.979349 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0982dbd-62a1-47c5-8510-5045b9ca5785" containerName="route-controller-manager" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.979426 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="0816f595-2f7f-425a-9a6b-1022e2a4ca04" containerName="controller-manager" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.979824 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.993232 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc"] Mar 13 10:09:00 crc kubenswrapper[4632]: I0313 10:09:00.998833 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.004620 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-568f8cc7b8-srcn5"] Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.009362 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.009740 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.009509 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.009571 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.009665 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.010455 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.013677 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc"] Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.023744 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-client-ca\") pod \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.024070 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-proxy-ca-bundles\") pod \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.024811 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0816f595-2f7f-425a-9a6b-1022e2a4ca04" (UID: "0816f595-2f7f-425a-9a6b-1022e2a4ca04"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.024826 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-client-ca" (OuterVolumeSpecName: "client-ca") pod "0816f595-2f7f-425a-9a6b-1022e2a4ca04" (UID: "0816f595-2f7f-425a-9a6b-1022e2a4ca04"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.025086 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-config\") pod \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.025205 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0816f595-2f7f-425a-9a6b-1022e2a4ca04-serving-cert\") pod \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.025465 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2prq4\" (UniqueName: \"kubernetes.io/projected/0816f595-2f7f-425a-9a6b-1022e2a4ca04-kube-api-access-2prq4\") pod \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\" (UID: \"0816f595-2f7f-425a-9a6b-1022e2a4ca04\") " Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.025656 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-config" (OuterVolumeSpecName: "config") pod "0816f595-2f7f-425a-9a6b-1022e2a4ca04" (UID: "0816f595-2f7f-425a-9a6b-1022e2a4ca04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.025860 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzsfh\" (UniqueName: \"kubernetes.io/projected/6c1bb71f-b506-4779-997a-b45aa2d7f99d-kube-api-access-gzsfh\") pod \"controller-manager-568f8cc7b8-srcn5\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.025956 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-config\") pod \"controller-manager-568f8cc7b8-srcn5\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.026045 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c1bb71f-b506-4779-997a-b45aa2d7f99d-serving-cert\") pod \"controller-manager-568f8cc7b8-srcn5\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.026497 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmsjq\" (UniqueName: \"kubernetes.io/projected/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-kube-api-access-mmsjq\") pod \"route-controller-manager-db59c8bd6-cs8jc\" (UID: \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\") " pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.026618 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-proxy-ca-bundles\") pod \"controller-manager-568f8cc7b8-srcn5\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.026723 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-client-ca\") pod \"route-controller-manager-db59c8bd6-cs8jc\" (UID: \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\") " pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.027478 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-client-ca\") pod \"controller-manager-568f8cc7b8-srcn5\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.027643 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-config\") pod \"route-controller-manager-db59c8bd6-cs8jc\" (UID: \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\") " pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.027825 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-serving-cert\") pod \"route-controller-manager-db59c8bd6-cs8jc\" (UID: \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\") " pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.028052 4632 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.028144 4632 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.028239 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0816f595-2f7f-425a-9a6b-1022e2a4ca04-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.038230 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0816f595-2f7f-425a-9a6b-1022e2a4ca04-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0816f595-2f7f-425a-9a6b-1022e2a4ca04" (UID: "0816f595-2f7f-425a-9a6b-1022e2a4ca04"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.038246 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0816f595-2f7f-425a-9a6b-1022e2a4ca04-kube-api-access-2prq4" (OuterVolumeSpecName: "kube-api-access-2prq4") pod "0816f595-2f7f-425a-9a6b-1022e2a4ca04" (UID: "0816f595-2f7f-425a-9a6b-1022e2a4ca04"). InnerVolumeSpecName "kube-api-access-2prq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.130673 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c1bb71f-b506-4779-997a-b45aa2d7f99d-serving-cert\") pod \"controller-manager-568f8cc7b8-srcn5\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.130754 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmsjq\" (UniqueName: \"kubernetes.io/projected/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-kube-api-access-mmsjq\") pod \"route-controller-manager-db59c8bd6-cs8jc\" (UID: \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\") " pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.130786 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-proxy-ca-bundles\") pod \"controller-manager-568f8cc7b8-srcn5\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.130864 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-client-ca\") pod \"route-controller-manager-db59c8bd6-cs8jc\" (UID: \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\") " pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.130885 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-client-ca\") pod \"controller-manager-568f8cc7b8-srcn5\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.130911 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-config\") pod \"route-controller-manager-db59c8bd6-cs8jc\" (UID: \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\") " pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.130968 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-serving-cert\") pod \"route-controller-manager-db59c8bd6-cs8jc\" (UID: \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\") " pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 
10:09:01.131042 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzsfh\" (UniqueName: \"kubernetes.io/projected/6c1bb71f-b506-4779-997a-b45aa2d7f99d-kube-api-access-gzsfh\") pod \"controller-manager-568f8cc7b8-srcn5\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.131062 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-config\") pod \"controller-manager-568f8cc7b8-srcn5\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.131119 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0816f595-2f7f-425a-9a6b-1022e2a4ca04-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.131132 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2prq4\" (UniqueName: \"kubernetes.io/projected/0816f595-2f7f-425a-9a6b-1022e2a4ca04-kube-api-access-2prq4\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.133430 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-config\") pod \"controller-manager-568f8cc7b8-srcn5\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.133801 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-client-ca\") pod \"controller-manager-568f8cc7b8-srcn5\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.134825 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-client-ca\") pod \"route-controller-manager-db59c8bd6-cs8jc\" (UID: \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\") " pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.135543 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-proxy-ca-bundles\") pod \"controller-manager-568f8cc7b8-srcn5\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.145033 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-serving-cert\") pod \"route-controller-manager-db59c8bd6-cs8jc\" (UID: \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\") " pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.145580 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/6c1bb71f-b506-4779-997a-b45aa2d7f99d-serving-cert\") pod \"controller-manager-568f8cc7b8-srcn5\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.160673 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmsjq\" (UniqueName: \"kubernetes.io/projected/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-kube-api-access-mmsjq\") pod \"route-controller-manager-db59c8bd6-cs8jc\" (UID: \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\") " pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.177282 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzsfh\" (UniqueName: \"kubernetes.io/projected/6c1bb71f-b506-4779-997a-b45aa2d7f99d-kube-api-access-gzsfh\") pod \"controller-manager-568f8cc7b8-srcn5\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.179712 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-config\") pod \"route-controller-manager-db59c8bd6-cs8jc\" (UID: \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\") " pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.179846 4632 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-29 12:43:02.614047778 +0000 UTC Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.179866 4632 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6266h34m1.43418472s for next certificate rotation Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.241045 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556608-9kzfk" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.301302 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.326422 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.334575 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmxng\" (UniqueName: \"kubernetes.io/projected/37ab6711-478f-4cc7-b9a4-c9baa126b1a3-kube-api-access-dmxng\") pod \"37ab6711-478f-4cc7-b9a4-c9baa126b1a3\" (UID: \"37ab6711-478f-4cc7-b9a4-c9baa126b1a3\") " Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.344143 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.345133 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37ab6711-478f-4cc7-b9a4-c9baa126b1a3-kube-api-access-dmxng" (OuterVolumeSpecName: "kube-api-access-dmxng") pod "37ab6711-478f-4cc7-b9a4-c9baa126b1a3" (UID: "37ab6711-478f-4cc7-b9a4-c9baa126b1a3"). 
InnerVolumeSpecName "kube-api-access-dmxng". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.436572 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a93bf50-5608-4b34-aea5-2f027d469fe7-kubelet-dir\") pod \"7a93bf50-5608-4b34-aea5-2f027d469fe7\" (UID: \"7a93bf50-5608-4b34-aea5-2f027d469fe7\") " Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.436657 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a93bf50-5608-4b34-aea5-2f027d469fe7-kube-api-access\") pod \"7a93bf50-5608-4b34-aea5-2f027d469fe7\" (UID: \"7a93bf50-5608-4b34-aea5-2f027d469fe7\") " Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.437076 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmxng\" (UniqueName: \"kubernetes.io/projected/37ab6711-478f-4cc7-b9a4-c9baa126b1a3-kube-api-access-dmxng\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.440521 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a93bf50-5608-4b34-aea5-2f027d469fe7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7a93bf50-5608-4b34-aea5-2f027d469fe7" (UID: "7a93bf50-5608-4b34-aea5-2f027d469fe7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.445207 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a93bf50-5608-4b34-aea5-2f027d469fe7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7a93bf50-5608-4b34-aea5-2f027d469fe7" (UID: "7a93bf50-5608-4b34-aea5-2f027d469fe7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.538529 4632 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a93bf50-5608-4b34-aea5-2f027d469fe7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.538578 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a93bf50-5608-4b34-aea5-2f027d469fe7-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.641257 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc"] Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.674263 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-568f8cc7b8-srcn5"] Mar 13 10:09:01 crc kubenswrapper[4632]: W0313 10:09:01.685089 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c1bb71f_b506_4779_997a_b45aa2d7f99d.slice/crio-cafd0a70d6f6f5cc33c18e32a6f9118b758897e41e807521c630031166429878 WatchSource:0}: Error finding container cafd0a70d6f6f5cc33c18e32a6f9118b758897e41e807521c630031166429878: Status 404 returned error can't find the container with id cafd0a70d6f6f5cc33c18e32a6f9118b758897e41e807521c630031166429878 Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.881993 4632 generic.go:334] "Generic (PLEG): container finished" podID="c822257d-9d2f-4b6f-87de-131de5cd0efe" containerID="481e1788f663e81921b410cd12a9e3666afaa2b706dda68096288fee3498f2fa" exitCode=0 Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.882140 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556606-mkrp2" event={"ID":"c822257d-9d2f-4b6f-87de-131de5cd0efe","Type":"ContainerDied","Data":"481e1788f663e81921b410cd12a9e3666afaa2b706dda68096288fee3498f2fa"} Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.885074 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" event={"ID":"6c1bb71f-b506-4779-997a-b45aa2d7f99d","Type":"ContainerStarted","Data":"cafd0a70d6f6f5cc33c18e32a6f9118b758897e41e807521c630031166429878"} Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.885999 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" event={"ID":"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd","Type":"ContainerStarted","Data":"354150b09f5c97255b0f1b1d13a0e96b39ea150051699e29f1947c848d48cfc7"} Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.887963 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7a93bf50-5608-4b34-aea5-2f027d469fe7","Type":"ContainerDied","Data":"8ca4d20a3432f315af224e18c181a59d62f280063326feb2da6b42b782268a63"} Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.887993 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ca4d20a3432f315af224e18c181a59d62f280063326feb2da6b42b782268a63" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.888024 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.899962 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-747c9765b-lqcx6" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.899972 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556608-9kzfk" event={"ID":"37ab6711-478f-4cc7-b9a4-c9baa126b1a3","Type":"ContainerDied","Data":"ff362806bee1867b720f220a4cde4dbe8551207f73438d3af60407d151505f16"} Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.900065 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff362806bee1867b720f220a4cde4dbe8551207f73438d3af60407d151505f16" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.900009 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556608-9kzfk" Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.942186 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-747c9765b-lqcx6"] Mar 13 10:09:01 crc kubenswrapper[4632]: I0313 10:09:01.947495 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-747c9765b-lqcx6"] Mar 13 10:09:02 crc kubenswrapper[4632]: I0313 10:09:02.066470 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0816f595-2f7f-425a-9a6b-1022e2a4ca04" path="/var/lib/kubelet/pods/0816f595-2f7f-425a-9a6b-1022e2a4ca04/volumes" Mar 13 10:09:02 crc kubenswrapper[4632]: I0313 10:09:02.067743 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0982dbd-62a1-47c5-8510-5045b9ca5785" path="/var/lib/kubelet/pods/a0982dbd-62a1-47c5-8510-5045b9ca5785/volumes" Mar 13 10:09:02 crc kubenswrapper[4632]: I0313 10:09:02.910289 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" event={"ID":"6c1bb71f-b506-4779-997a-b45aa2d7f99d","Type":"ContainerStarted","Data":"e6647232ecd3206958018db1d05543d27c1d81899737af2d56a6d6a78463f69b"} Mar 13 10:09:02 crc kubenswrapper[4632]: I0313 10:09:02.911177 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:02 crc kubenswrapper[4632]: I0313 10:09:02.915547 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" event={"ID":"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd","Type":"ContainerStarted","Data":"c21dcff3106ebcb8e41bf57ec34ca478155cc655a5099ecf6ed4d6d8ef778c01"} Mar 13 10:09:02 crc kubenswrapper[4632]: I0313 10:09:02.916726 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:02 crc kubenswrapper[4632]: I0313 10:09:02.940168 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" podStartSLOduration=3.940146764 podStartE2EDuration="3.940146764s" podCreationTimestamp="2026-03-13 10:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:09:02.937666035 +0000 UTC m=+316.960196198" 
watchObservedRunningTime="2026-03-13 10:09:02.940146764 +0000 UTC m=+316.962676897" Mar 13 10:09:02 crc kubenswrapper[4632]: I0313 10:09:02.963047 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" podStartSLOduration=3.963018793 podStartE2EDuration="3.963018793s" podCreationTimestamp="2026-03-13 10:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:09:02.961427239 +0000 UTC m=+316.983957392" watchObservedRunningTime="2026-03-13 10:09:02.963018793 +0000 UTC m=+316.985548926" Mar 13 10:09:03 crc kubenswrapper[4632]: I0313 10:09:03.924144 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:03 crc kubenswrapper[4632]: I0313 10:09:03.930423 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:04 crc kubenswrapper[4632]: I0313 10:09:04.640361 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556606-mkrp2" Mar 13 10:09:04 crc kubenswrapper[4632]: I0313 10:09:04.710708 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd756\" (UniqueName: \"kubernetes.io/projected/c822257d-9d2f-4b6f-87de-131de5cd0efe-kube-api-access-sd756\") pod \"c822257d-9d2f-4b6f-87de-131de5cd0efe\" (UID: \"c822257d-9d2f-4b6f-87de-131de5cd0efe\") " Mar 13 10:09:04 crc kubenswrapper[4632]: I0313 10:09:04.726994 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c822257d-9d2f-4b6f-87de-131de5cd0efe-kube-api-access-sd756" (OuterVolumeSpecName: "kube-api-access-sd756") pod "c822257d-9d2f-4b6f-87de-131de5cd0efe" (UID: "c822257d-9d2f-4b6f-87de-131de5cd0efe"). InnerVolumeSpecName "kube-api-access-sd756". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:09:04 crc kubenswrapper[4632]: I0313 10:09:04.814172 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd756\" (UniqueName: \"kubernetes.io/projected/c822257d-9d2f-4b6f-87de-131de5cd0efe-kube-api-access-sd756\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:04 crc kubenswrapper[4632]: I0313 10:09:04.935181 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556606-mkrp2" event={"ID":"c822257d-9d2f-4b6f-87de-131de5cd0efe","Type":"ContainerDied","Data":"4b486b426e38ba0d310d07052394a9d5bdba25cfa8d2705294f114f94eaedc81"} Mar 13 10:09:04 crc kubenswrapper[4632]: I0313 10:09:04.935219 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b486b426e38ba0d310d07052394a9d5bdba25cfa8d2705294f114f94eaedc81" Mar 13 10:09:04 crc kubenswrapper[4632]: I0313 10:09:04.935468 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556606-mkrp2" Mar 13 10:09:05 crc kubenswrapper[4632]: I0313 10:09:05.632830 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 13 10:09:05 crc kubenswrapper[4632]: I0313 10:09:05.632894 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 13 10:09:05 crc kubenswrapper[4632]: I0313 10:09:05.633591 4632 patch_prober.go:28] interesting pod/downloads-7954f5f757-w2hhj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 13 10:09:05 crc kubenswrapper[4632]: I0313 10:09:05.633751 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-w2hhj" podUID="7d155f24-9bfc-4039-9981-10e7f724fa51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 13 10:09:07 crc kubenswrapper[4632]: I0313 10:09:07.971791 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvh86" event={"ID":"bd46ae04-0610-4aa5-9385-dd45de66c5dd","Type":"ContainerStarted","Data":"ba15fa8797c3390ead2f6a2f6b5a64ad766bc4a942dfc13cbdc76a3242dd09c0"} Mar 13 10:09:08 crc kubenswrapper[4632]: I0313 10:09:08.980105 4632 generic.go:334] "Generic (PLEG): container finished" podID="cd6e3c73-fbc1-4213-bbef-02dd2b0587f8" containerID="1c7d4d3dbdb9375cd1f14c42f62f344139bab8e0abb1403e1fe655b1b72e40c4" exitCode=0 Mar 13 10:09:08 crc kubenswrapper[4632]: I0313 10:09:08.980175 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xd455" event={"ID":"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8","Type":"ContainerDied","Data":"1c7d4d3dbdb9375cd1f14c42f62f344139bab8e0abb1403e1fe655b1b72e40c4"} Mar 13 10:09:08 crc kubenswrapper[4632]: I0313 10:09:08.981804 4632 generic.go:334] "Generic (PLEG): container finished" podID="bd46ae04-0610-4aa5-9385-dd45de66c5dd" containerID="ba15fa8797c3390ead2f6a2f6b5a64ad766bc4a942dfc13cbdc76a3242dd09c0" exitCode=0 Mar 13 10:09:08 crc kubenswrapper[4632]: I0313 10:09:08.981829 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvh86" event={"ID":"bd46ae04-0610-4aa5-9385-dd45de66c5dd","Type":"ContainerDied","Data":"ba15fa8797c3390ead2f6a2f6b5a64ad766bc4a942dfc13cbdc76a3242dd09c0"} Mar 13 10:09:13 crc kubenswrapper[4632]: I0313 10:09:13.027923 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xd455" event={"ID":"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8","Type":"ContainerStarted","Data":"f1dbecf7ff84705a27018ceaf7e07f776f8da213446108c63db8f788119a4f28"} Mar 13 10:09:13 crc kubenswrapper[4632]: I0313 10:09:13.048721 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2gc7" 
event={"ID":"a110c276-8516-4f9e-a6af-d6837cd0f387","Type":"ContainerStarted","Data":"ef106624caa843911d5171f0d70f22c07e7e2bd19b6992932276ca1226b858e3"} Mar 13 10:09:13 crc kubenswrapper[4632]: I0313 10:09:13.056481 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8wjg" event={"ID":"b11a7dff-bf08-44c3-b4f4-923119c13717","Type":"ContainerStarted","Data":"06491b70d16bc5a697f5518128f63de5fdeb769cc33d09d9262078f5aa75a5b8"} Mar 13 10:09:13 crc kubenswrapper[4632]: I0313 10:09:13.057671 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xd455" podStartSLOduration=4.379077818 podStartE2EDuration="1m18.05761769s" podCreationTimestamp="2026-03-13 10:07:55 +0000 UTC" firstStartedPulling="2026-03-13 10:07:58.786823407 +0000 UTC m=+252.809353540" lastFinishedPulling="2026-03-13 10:09:12.465363279 +0000 UTC m=+326.487893412" observedRunningTime="2026-03-13 10:09:13.05276817 +0000 UTC m=+327.075298313" watchObservedRunningTime="2026-03-13 10:09:13.05761769 +0000 UTC m=+327.080147833" Mar 13 10:09:13 crc kubenswrapper[4632]: I0313 10:09:13.060856 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvh86" event={"ID":"bd46ae04-0610-4aa5-9385-dd45de66c5dd","Type":"ContainerStarted","Data":"5a10aa8d51646d1f515364874b0426c82d85f03f52a4924f31299cb0395b0607"} Mar 13 10:09:13 crc kubenswrapper[4632]: I0313 10:09:13.133692 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jvh86" podStartSLOduration=5.474108285 podStartE2EDuration="1m19.13366979s" podCreationTimestamp="2026-03-13 10:07:54 +0000 UTC" firstStartedPulling="2026-03-13 10:07:58.814511709 +0000 UTC m=+252.837041842" lastFinishedPulling="2026-03-13 10:09:12.474073224 +0000 UTC m=+326.496603347" observedRunningTime="2026-03-13 10:09:13.130487784 +0000 UTC m=+327.153017917" watchObservedRunningTime="2026-03-13 10:09:13.13366979 +0000 UTC m=+327.156199923" Mar 13 10:09:14 crc kubenswrapper[4632]: I0313 10:09:14.071428 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr5l9" event={"ID":"87965e39-b879-4e26-9c8b-b78068c52aa0","Type":"ContainerStarted","Data":"e3753ece91188912f076d532ed434a938805848842e1fdb1b100800bcaa42763"} Mar 13 10:09:14 crc kubenswrapper[4632]: I0313 10:09:14.073731 4632 generic.go:334] "Generic (PLEG): container finished" podID="b11a7dff-bf08-44c3-b4f4-923119c13717" containerID="06491b70d16bc5a697f5518128f63de5fdeb769cc33d09d9262078f5aa75a5b8" exitCode=0 Mar 13 10:09:14 crc kubenswrapper[4632]: I0313 10:09:14.073803 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8wjg" event={"ID":"b11a7dff-bf08-44c3-b4f4-923119c13717","Type":"ContainerDied","Data":"06491b70d16bc5a697f5518128f63de5fdeb769cc33d09d9262078f5aa75a5b8"} Mar 13 10:09:14 crc kubenswrapper[4632]: I0313 10:09:14.077201 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-txp2w" event={"ID":"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87","Type":"ContainerStarted","Data":"643ad1b648678ed35dcc10aaf9a844460c880f38f688c0da6821345eaf872208"} Mar 13 10:09:15 crc kubenswrapper[4632]: I0313 10:09:15.085074 4632 generic.go:334] "Generic (PLEG): container finished" podID="a110c276-8516-4f9e-a6af-d6837cd0f387" containerID="ef106624caa843911d5171f0d70f22c07e7e2bd19b6992932276ca1226b858e3" exitCode=0 Mar 13 
10:09:15 crc kubenswrapper[4632]: I0313 10:09:15.085159 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2gc7" event={"ID":"a110c276-8516-4f9e-a6af-d6837cd0f387","Type":"ContainerDied","Data":"ef106624caa843911d5171f0d70f22c07e7e2bd19b6992932276ca1226b858e3"} Mar 13 10:09:15 crc kubenswrapper[4632]: I0313 10:09:15.087768 4632 generic.go:334] "Generic (PLEG): container finished" podID="f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" containerID="643ad1b648678ed35dcc10aaf9a844460c880f38f688c0da6821345eaf872208" exitCode=0 Mar 13 10:09:15 crc kubenswrapper[4632]: I0313 10:09:15.087801 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-txp2w" event={"ID":"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87","Type":"ContainerDied","Data":"643ad1b648678ed35dcc10aaf9a844460c880f38f688c0da6821345eaf872208"} Mar 13 10:09:15 crc kubenswrapper[4632]: I0313 10:09:15.334830 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jvh86" Mar 13 10:09:15 crc kubenswrapper[4632]: I0313 10:09:15.334892 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jvh86" Mar 13 10:09:15 crc kubenswrapper[4632]: I0313 10:09:15.639777 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-w2hhj" Mar 13 10:09:15 crc kubenswrapper[4632]: I0313 10:09:15.707657 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xd455" Mar 13 10:09:15 crc kubenswrapper[4632]: I0313 10:09:15.708146 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xd455" Mar 13 10:09:16 crc kubenswrapper[4632]: I0313 10:09:16.055541 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xd455" Mar 13 10:09:16 crc kubenswrapper[4632]: I0313 10:09:16.094567 4632 generic.go:334] "Generic (PLEG): container finished" podID="87965e39-b879-4e26-9c8b-b78068c52aa0" containerID="e3753ece91188912f076d532ed434a938805848842e1fdb1b100800bcaa42763" exitCode=0 Mar 13 10:09:16 crc kubenswrapper[4632]: I0313 10:09:16.094631 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr5l9" event={"ID":"87965e39-b879-4e26-9c8b-b78068c52aa0","Type":"ContainerDied","Data":"e3753ece91188912f076d532ed434a938805848842e1fdb1b100800bcaa42763"} Mar 13 10:09:16 crc kubenswrapper[4632]: I0313 10:09:16.955136 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jvh86" podUID="bd46ae04-0610-4aa5-9385-dd45de66c5dd" containerName="registry-server" probeResult="failure" output=< Mar 13 10:09:16 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:09:16 crc kubenswrapper[4632]: > Mar 13 10:09:19 crc kubenswrapper[4632]: I0313 10:09:19.059462 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-568f8cc7b8-srcn5"] Mar 13 10:09:19 crc kubenswrapper[4632]: I0313 10:09:19.060406 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" podUID="6c1bb71f-b506-4779-997a-b45aa2d7f99d" containerName="controller-manager" 
containerID="cri-o://e6647232ecd3206958018db1d05543d27c1d81899737af2d56a6d6a78463f69b" gracePeriod=30 Mar 13 10:09:19 crc kubenswrapper[4632]: I0313 10:09:19.111145 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z668" event={"ID":"9845f384-2720-4d6a-aa73-1e66e30f7c2c","Type":"ContainerStarted","Data":"577f89a2c63ffc7af6c8d9a11a12240e1b316c59a7b108fcd47eb2cd9dc3c8ec"} Mar 13 10:09:19 crc kubenswrapper[4632]: I0313 10:09:19.114972 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8wjg" event={"ID":"b11a7dff-bf08-44c3-b4f4-923119c13717","Type":"ContainerStarted","Data":"55bfc00a5732a457ecbee5c7be945027bdb42c0137a6b22125d44dafb5924f59"} Mar 13 10:09:19 crc kubenswrapper[4632]: I0313 10:09:19.130612 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc"] Mar 13 10:09:19 crc kubenswrapper[4632]: I0313 10:09:19.131051 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" podUID="1d9f7553-a7a4-47b3-8898-990eb6d2fdfd" containerName="route-controller-manager" containerID="cri-o://c21dcff3106ebcb8e41bf57ec34ca478155cc655a5099ecf6ed4d6d8ef778c01" gracePeriod=30 Mar 13 10:09:19 crc kubenswrapper[4632]: I0313 10:09:19.185537 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p8wjg" podStartSLOduration=7.37858684 podStartE2EDuration="1m24.185514459s" podCreationTimestamp="2026-03-13 10:07:55 +0000 UTC" firstStartedPulling="2026-03-13 10:08:01.052597086 +0000 UTC m=+255.075127219" lastFinishedPulling="2026-03-13 10:09:17.859524705 +0000 UTC m=+331.882054838" observedRunningTime="2026-03-13 10:09:19.180438962 +0000 UTC m=+333.202969105" watchObservedRunningTime="2026-03-13 10:09:19.185514459 +0000 UTC m=+333.208044602" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.121173 4632 generic.go:334] "Generic (PLEG): container finished" podID="1d9f7553-a7a4-47b3-8898-990eb6d2fdfd" containerID="c21dcff3106ebcb8e41bf57ec34ca478155cc655a5099ecf6ed4d6d8ef778c01" exitCode=0 Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.121211 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" event={"ID":"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd","Type":"ContainerDied","Data":"c21dcff3106ebcb8e41bf57ec34ca478155cc655a5099ecf6ed4d6d8ef778c01"} Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.123375 4632 generic.go:334] "Generic (PLEG): container finished" podID="9845f384-2720-4d6a-aa73-1e66e30f7c2c" containerID="577f89a2c63ffc7af6c8d9a11a12240e1b316c59a7b108fcd47eb2cd9dc3c8ec" exitCode=0 Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.123508 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z668" event={"ID":"9845f384-2720-4d6a-aa73-1e66e30f7c2c","Type":"ContainerDied","Data":"577f89a2c63ffc7af6c8d9a11a12240e1b316c59a7b108fcd47eb2cd9dc3c8ec"} Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.124918 4632 generic.go:334] "Generic (PLEG): container finished" podID="6c1bb71f-b506-4779-997a-b45aa2d7f99d" containerID="e6647232ecd3206958018db1d05543d27c1d81899737af2d56a6d6a78463f69b" exitCode=0 Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.125433 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" event={"ID":"6c1bb71f-b506-4779-997a-b45aa2d7f99d","Type":"ContainerDied","Data":"e6647232ecd3206958018db1d05543d27c1d81899737af2d56a6d6a78463f69b"} Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.171801 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.172760 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.176273 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.177417 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.191600 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.194835 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.275726 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.276337 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.276494 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs\") pod \"network-metrics-daemon-z2vlz\" (UID: \"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\") 
" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.279542 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.279849 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.288442 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.290716 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad-metrics-certs\") pod \"network-metrics-daemon-z2vlz\" (UID: \"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad\") " pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.302580 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.309633 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.359275 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.369118 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.378426 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.459385 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 13 10:09:20 crc kubenswrapper[4632]: I0313 10:09:20.467719 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z2vlz" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.274854 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-z2vlz"] Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.516232 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:21 crc kubenswrapper[4632]: W0313 10:09:21.520432 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-95cf426241c42da749498523009aa75a5678b42076db750b6c0577fe95ada46f WatchSource:0}: Error finding container 95cf426241c42da749498523009aa75a5678b42076db750b6c0577fe95ada46f: Status 404 returned error can't find the container with id 95cf426241c42da749498523009aa75a5678b42076db750b6c0577fe95ada46f Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.549078 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.556139 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2"] Mar 13 10:09:21 crc kubenswrapper[4632]: E0313 10:09:21.556354 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9f7553-a7a4-47b3-8898-990eb6d2fdfd" containerName="route-controller-manager" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.556367 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9f7553-a7a4-47b3-8898-990eb6d2fdfd" containerName="route-controller-manager" Mar 13 10:09:21 crc kubenswrapper[4632]: E0313 10:09:21.556377 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c822257d-9d2f-4b6f-87de-131de5cd0efe" containerName="oc" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.556383 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c822257d-9d2f-4b6f-87de-131de5cd0efe" containerName="oc" Mar 13 10:09:21 crc kubenswrapper[4632]: E0313 10:09:21.556404 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c1bb71f-b506-4779-997a-b45aa2d7f99d" containerName="controller-manager" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.556411 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c1bb71f-b506-4779-997a-b45aa2d7f99d" containerName="controller-manager" Mar 13 10:09:21 crc kubenswrapper[4632]: E0313 10:09:21.556419 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ab6711-478f-4cc7-b9a4-c9baa126b1a3" containerName="oc" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.556425 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ab6711-478f-4cc7-b9a4-c9baa126b1a3" containerName="oc" Mar 13 10:09:21 crc kubenswrapper[4632]: E0313 10:09:21.556443 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a93bf50-5608-4b34-aea5-2f027d469fe7" containerName="pruner" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.556448 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a93bf50-5608-4b34-aea5-2f027d469fe7" containerName="pruner" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.556534 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="c822257d-9d2f-4b6f-87de-131de5cd0efe" containerName="oc" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.556546 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="37ab6711-478f-4cc7-b9a4-c9baa126b1a3" containerName="oc" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.556556 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a93bf50-5608-4b34-aea5-2f027d469fe7" 
containerName="pruner" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.556565 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c1bb71f-b506-4779-997a-b45aa2d7f99d" containerName="controller-manager" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.556572 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d9f7553-a7a4-47b3-8898-990eb6d2fdfd" containerName="route-controller-manager" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.556962 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.562603 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2"] Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.603826 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-client-ca\") pod \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\" (UID: \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\") " Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.603881 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-serving-cert\") pod \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\" (UID: \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\") " Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.603964 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzsfh\" (UniqueName: \"kubernetes.io/projected/6c1bb71f-b506-4779-997a-b45aa2d7f99d-kube-api-access-gzsfh\") pod \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.604023 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmsjq\" (UniqueName: \"kubernetes.io/projected/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-kube-api-access-mmsjq\") pod \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\" (UID: \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\") " Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.604053 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c1bb71f-b506-4779-997a-b45aa2d7f99d-serving-cert\") pod \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.604084 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-client-ca\") pod \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.604131 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-config\") pod \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.604165 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-config\") pod \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\" (UID: \"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd\") " Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.604188 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-proxy-ca-bundles\") pod \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\" (UID: \"6c1bb71f-b506-4779-997a-b45aa2d7f99d\") " Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.604378 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mrsj\" (UniqueName: \"kubernetes.io/projected/2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3-kube-api-access-8mrsj\") pod \"route-controller-manager-db6b8fbf8-pllt2\" (UID: \"2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3\") " pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.604416 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3-client-ca\") pod \"route-controller-manager-db6b8fbf8-pllt2\" (UID: \"2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3\") " pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.604965 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-client-ca" (OuterVolumeSpecName: "client-ca") pod "1d9f7553-a7a4-47b3-8898-990eb6d2fdfd" (UID: "1d9f7553-a7a4-47b3-8898-990eb6d2fdfd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.605325 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-config" (OuterVolumeSpecName: "config") pod "1d9f7553-a7a4-47b3-8898-990eb6d2fdfd" (UID: "1d9f7553-a7a4-47b3-8898-990eb6d2fdfd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.605544 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6c1bb71f-b506-4779-997a-b45aa2d7f99d" (UID: "6c1bb71f-b506-4779-997a-b45aa2d7f99d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.605555 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-config" (OuterVolumeSpecName: "config") pod "6c1bb71f-b506-4779-997a-b45aa2d7f99d" (UID: "6c1bb71f-b506-4779-997a-b45aa2d7f99d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.604447 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3-serving-cert\") pod \"route-controller-manager-db6b8fbf8-pllt2\" (UID: \"2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3\") " pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.605664 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3-config\") pod \"route-controller-manager-db6b8fbf8-pllt2\" (UID: \"2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3\") " pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.605747 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.605771 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.605784 4632 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.605795 4632 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.605588 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-client-ca" (OuterVolumeSpecName: "client-ca") pod "6c1bb71f-b506-4779-997a-b45aa2d7f99d" (UID: "6c1bb71f-b506-4779-997a-b45aa2d7f99d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.611385 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c1bb71f-b506-4779-997a-b45aa2d7f99d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6c1bb71f-b506-4779-997a-b45aa2d7f99d" (UID: "6c1bb71f-b506-4779-997a-b45aa2d7f99d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.611827 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c1bb71f-b506-4779-997a-b45aa2d7f99d-kube-api-access-gzsfh" (OuterVolumeSpecName: "kube-api-access-gzsfh") pod "6c1bb71f-b506-4779-997a-b45aa2d7f99d" (UID: "6c1bb71f-b506-4779-997a-b45aa2d7f99d"). InnerVolumeSpecName "kube-api-access-gzsfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.617627 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-kube-api-access-mmsjq" (OuterVolumeSpecName: "kube-api-access-mmsjq") pod "1d9f7553-a7a4-47b3-8898-990eb6d2fdfd" (UID: "1d9f7553-a7a4-47b3-8898-990eb6d2fdfd"). InnerVolumeSpecName "kube-api-access-mmsjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.611380 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1d9f7553-a7a4-47b3-8898-990eb6d2fdfd" (UID: "1d9f7553-a7a4-47b3-8898-990eb6d2fdfd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.706540 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3-config\") pod \"route-controller-manager-db6b8fbf8-pllt2\" (UID: \"2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3\") " pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.706631 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mrsj\" (UniqueName: \"kubernetes.io/projected/2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3-kube-api-access-8mrsj\") pod \"route-controller-manager-db6b8fbf8-pllt2\" (UID: \"2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3\") " pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.706652 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3-client-ca\") pod \"route-controller-manager-db6b8fbf8-pllt2\" (UID: \"2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3\") " pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.706669 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3-serving-cert\") pod \"route-controller-manager-db6b8fbf8-pllt2\" (UID: \"2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3\") " pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.706712 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmsjq\" (UniqueName: \"kubernetes.io/projected/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-kube-api-access-mmsjq\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.706724 4632 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c1bb71f-b506-4779-997a-b45aa2d7f99d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.706732 4632 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c1bb71f-b506-4779-997a-b45aa2d7f99d-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.706741 4632 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.706751 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzsfh\" (UniqueName: \"kubernetes.io/projected/6c1bb71f-b506-4779-997a-b45aa2d7f99d-kube-api-access-gzsfh\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.710418 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3-config\") pod \"route-controller-manager-db6b8fbf8-pllt2\" (UID: \"2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3\") " pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.711501 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3-client-ca\") pod \"route-controller-manager-db6b8fbf8-pllt2\" (UID: \"2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3\") " pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.712472 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3-serving-cert\") pod \"route-controller-manager-db6b8fbf8-pllt2\" (UID: \"2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3\") " pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.726393 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mrsj\" (UniqueName: \"kubernetes.io/projected/2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3-kube-api-access-8mrsj\") pod \"route-controller-manager-db6b8fbf8-pllt2\" (UID: \"2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3\") " pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 10:09:21 crc kubenswrapper[4632]: I0313 10:09:21.883224 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 10:09:22 crc kubenswrapper[4632]: I0313 10:09:22.149441 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"95cf426241c42da749498523009aa75a5678b42076db750b6c0577fe95ada46f"} Mar 13 10:09:22 crc kubenswrapper[4632]: I0313 10:09:22.152564 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" event={"ID":"6c1bb71f-b506-4779-997a-b45aa2d7f99d","Type":"ContainerDied","Data":"cafd0a70d6f6f5cc33c18e32a6f9118b758897e41e807521c630031166429878"} Mar 13 10:09:22 crc kubenswrapper[4632]: I0313 10:09:22.152611 4632 scope.go:117] "RemoveContainer" containerID="e6647232ecd3206958018db1d05543d27c1d81899737af2d56a6d6a78463f69b" Mar 13 10:09:22 crc kubenswrapper[4632]: I0313 10:09:22.152723 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" Mar 13 10:09:22 crc kubenswrapper[4632]: I0313 10:09:22.157632 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" event={"ID":"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad","Type":"ContainerStarted","Data":"33797dc2ede59a88fcccf5c3bd0b68134eb179bcb79626eff62306cd5a0425bc"} Mar 13 10:09:22 crc kubenswrapper[4632]: I0313 10:09:22.160197 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" Mar 13 10:09:22 crc kubenswrapper[4632]: I0313 10:09:22.160138 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" event={"ID":"1d9f7553-a7a4-47b3-8898-990eb6d2fdfd","Type":"ContainerDied","Data":"354150b09f5c97255b0f1b1d13a0e96b39ea150051699e29f1947c848d48cfc7"} Mar 13 10:09:22 crc kubenswrapper[4632]: I0313 10:09:22.173931 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-568f8cc7b8-srcn5"] Mar 13 10:09:22 crc kubenswrapper[4632]: I0313 10:09:22.178890 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-568f8cc7b8-srcn5"] Mar 13 10:09:22 crc kubenswrapper[4632]: I0313 10:09:22.187088 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc"] Mar 13 10:09:22 crc kubenswrapper[4632]: I0313 10:09:22.190449 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc"] Mar 13 10:09:22 crc kubenswrapper[4632]: I0313 10:09:22.302099 4632 patch_prober.go:28] interesting pod/controller-manager-568f8cc7b8-srcn5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 10:09:22 crc kubenswrapper[4632]: I0313 10:09:22.302181 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-568f8cc7b8-srcn5" podUID="6c1bb71f-b506-4779-997a-b45aa2d7f99d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 10:09:22 crc kubenswrapper[4632]: I0313 10:09:22.327670 4632 patch_prober.go:28] interesting pod/route-controller-manager-db59c8bd6-cs8jc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 10:09:22 crc kubenswrapper[4632]: I0313 10:09:22.327751 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-db59c8bd6-cs8jc" podUID="1d9f7553-a7a4-47b3-8898-990eb6d2fdfd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 10:09:22 crc 
kubenswrapper[4632]: I0313 10:09:22.507577 4632 scope.go:117] "RemoveContainer" containerID="c21dcff3106ebcb8e41bf57ec34ca478155cc655a5099ecf6ed4d6d8ef778c01" Mar 13 10:09:22 crc kubenswrapper[4632]: W0313 10:09:22.792970 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-d4757da4c3828cd79b3324b61ff4d762684579f8dd7aad23fbdcd12f80e73a08 WatchSource:0}: Error finding container d4757da4c3828cd79b3324b61ff4d762684579f8dd7aad23fbdcd12f80e73a08: Status 404 returned error can't find the container with id d4757da4c3828cd79b3324b61ff4d762684579f8dd7aad23fbdcd12f80e73a08 Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.041596 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2"] Mar 13 10:09:23 crc kubenswrapper[4632]: W0313 10:09:23.078474 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-7e24d73dd23728b8d14b7681596c8391aadc6bb6f428216ce8897f1024604589 WatchSource:0}: Error finding container 7e24d73dd23728b8d14b7681596c8391aadc6bb6f428216ce8897f1024604589: Status 404 returned error can't find the container with id 7e24d73dd23728b8d14b7681596c8391aadc6bb6f428216ce8897f1024604589 Mar 13 10:09:23 crc kubenswrapper[4632]: W0313 10:09:23.094327 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f5d4f7c_4d7b_4347_bd38_d5fd29fed3f3.slice/crio-271f65d9f3e67553609f1c4c12988dd76116340addb8ca064d609462b229d7b2 WatchSource:0}: Error finding container 271f65d9f3e67553609f1c4c12988dd76116340addb8ca064d609462b229d7b2: Status 404 returned error can't find the container with id 271f65d9f3e67553609f1c4c12988dd76116340addb8ca064d609462b229d7b2 Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.210726 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z668" event={"ID":"9845f384-2720-4d6a-aa73-1e66e30f7c2c","Type":"ContainerStarted","Data":"001fe1eb309384fca387c523307ccbff0d5d514d2d7b29f074cc94a2210761cb"} Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.211932 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" event={"ID":"2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3","Type":"ContainerStarted","Data":"271f65d9f3e67553609f1c4c12988dd76116340addb8ca064d609462b229d7b2"} Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.249820 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8z668" podStartSLOduration=6.7169681 podStartE2EDuration="1m28.249784366s" podCreationTimestamp="2026-03-13 10:07:55 +0000 UTC" firstStartedPulling="2026-03-13 10:08:01.200664416 +0000 UTC m=+255.223194549" lastFinishedPulling="2026-03-13 10:09:22.733480672 +0000 UTC m=+336.756010815" observedRunningTime="2026-03-13 10:09:23.239782896 +0000 UTC m=+337.262313039" watchObservedRunningTime="2026-03-13 10:09:23.249784366 +0000 UTC m=+337.272314499" Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.251756 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr5l9" 
event={"ID":"87965e39-b879-4e26-9c8b-b78068c52aa0","Type":"ContainerStarted","Data":"f3ec0c7ec6e706f7309cf337b9ffdf829c043fe63c66f9db4fb7e26f23ae6edd"} Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.265461 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" event={"ID":"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad","Type":"ContainerStarted","Data":"a38ae589a44a7a92b909e7238ebce17db2b6570b0803136419752878342215ee"} Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.288806 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2gc7" event={"ID":"a110c276-8516-4f9e-a6af-d6837cd0f387","Type":"ContainerStarted","Data":"f7b31d5849d6707802fb373a1fe6f70b7a45ddade6fd6d9f2c7e5319e74f32d3"} Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.297558 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7e24d73dd23728b8d14b7681596c8391aadc6bb6f428216ce8897f1024604589"} Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.299832 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xr5l9" podStartSLOduration=4.877968654 podStartE2EDuration="1m25.299794323s" podCreationTimestamp="2026-03-13 10:07:58 +0000 UTC" firstStartedPulling="2026-03-13 10:08:02.373301144 +0000 UTC m=+256.395831277" lastFinishedPulling="2026-03-13 10:09:22.795126823 +0000 UTC m=+336.817656946" observedRunningTime="2026-03-13 10:09:23.298375365 +0000 UTC m=+337.320905508" watchObservedRunningTime="2026-03-13 10:09:23.299794323 +0000 UTC m=+337.322324466" Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.314838 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7d5e1531a06fe80e285365120ae28d66926c7bc01a7be4f92a00c0ec7d705f3c"} Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.325852 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z2gc7" podStartSLOduration=5.193410114 podStartE2EDuration="1m25.325832166s" podCreationTimestamp="2026-03-13 10:07:58 +0000 UTC" firstStartedPulling="2026-03-13 10:08:02.379147069 +0000 UTC m=+256.401677202" lastFinishedPulling="2026-03-13 10:09:22.511569101 +0000 UTC m=+336.534099254" observedRunningTime="2026-03-13 10:09:23.322210068 +0000 UTC m=+337.344740211" watchObservedRunningTime="2026-03-13 10:09:23.325832166 +0000 UTC m=+337.348362299" Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.326457 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6bkt" event={"ID":"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e","Type":"ContainerStarted","Data":"9fa8755799160674c9b9254d4cc4cd33b06b805f90944344a392116f94021c66"} Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.334617 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-txp2w" event={"ID":"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87","Type":"ContainerStarted","Data":"d3932d25c3aaf08a595c2af7ee315a6a0b2efd503369ee7398e6b39ad609dc3c"} Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.377444 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5956ba337b2cab47b490c63dbc5d8cd7763461bc0270d79a46c262ba948b7af6"} Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.377503 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d4757da4c3828cd79b3324b61ff4d762684579f8dd7aad23fbdcd12f80e73a08"} Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.378220 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.383550 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-txp2w" podStartSLOduration=7.762531822 podStartE2EDuration="1m26.383535891s" podCreationTimestamp="2026-03-13 10:07:57 +0000 UTC" firstStartedPulling="2026-03-13 10:08:02.395244955 +0000 UTC m=+256.417775088" lastFinishedPulling="2026-03-13 10:09:21.016249024 +0000 UTC m=+335.038779157" observedRunningTime="2026-03-13 10:09:23.383359096 +0000 UTC m=+337.405889229" watchObservedRunningTime="2026-03-13 10:09:23.383535891 +0000 UTC m=+337.406066024" Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.984114 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7469657588-kpf64"] Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.985175 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:23 crc kubenswrapper[4632]: W0313 10:09:23.988584 4632 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Mar 13 10:09:23 crc kubenswrapper[4632]: E0313 10:09:23.988657 4632 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.988674 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.988978 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.989034 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 10:09:23 crc kubenswrapper[4632]: W0313 10:09:23.989049 4632 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace 
"openshift-controller-manager": no relationship found between node 'crc' and this object Mar 13 10:09:23 crc kubenswrapper[4632]: E0313 10:09:23.989131 4632 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 13 10:09:23 crc kubenswrapper[4632]: I0313 10:09:23.992968 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.012541 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.051971 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d9f7553-a7a4-47b3-8898-990eb6d2fdfd" path="/var/lib/kubelet/pods/1d9f7553-a7a4-47b3-8898-990eb6d2fdfd/volumes" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.052818 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c1bb71f-b506-4779-997a-b45aa2d7f99d" path="/var/lib/kubelet/pods/6c1bb71f-b506-4779-997a-b45aa2d7f99d/volumes" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.058737 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7469657588-kpf64"] Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.147418 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a8ff14f9-e25c-4839-acab-a622f6f70f88-proxy-ca-bundles\") pod \"controller-manager-7469657588-kpf64\" (UID: \"a8ff14f9-e25c-4839-acab-a622f6f70f88\") " pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.147524 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ff14f9-e25c-4839-acab-a622f6f70f88-config\") pod \"controller-manager-7469657588-kpf64\" (UID: \"a8ff14f9-e25c-4839-acab-a622f6f70f88\") " pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.147562 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfhrk\" (UniqueName: \"kubernetes.io/projected/a8ff14f9-e25c-4839-acab-a622f6f70f88-kube-api-access-qfhrk\") pod \"controller-manager-7469657588-kpf64\" (UID: \"a8ff14f9-e25c-4839-acab-a622f6f70f88\") " pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.147834 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ff14f9-e25c-4839-acab-a622f6f70f88-serving-cert\") pod \"controller-manager-7469657588-kpf64\" (UID: \"a8ff14f9-e25c-4839-acab-a622f6f70f88\") " pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.147898 4632 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8ff14f9-e25c-4839-acab-a622f6f70f88-client-ca\") pod \"controller-manager-7469657588-kpf64\" (UID: \"a8ff14f9-e25c-4839-acab-a622f6f70f88\") " pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.249774 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ff14f9-e25c-4839-acab-a622f6f70f88-config\") pod \"controller-manager-7469657588-kpf64\" (UID: \"a8ff14f9-e25c-4839-acab-a622f6f70f88\") " pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.249850 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfhrk\" (UniqueName: \"kubernetes.io/projected/a8ff14f9-e25c-4839-acab-a622f6f70f88-kube-api-access-qfhrk\") pod \"controller-manager-7469657588-kpf64\" (UID: \"a8ff14f9-e25c-4839-acab-a622f6f70f88\") " pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.249905 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ff14f9-e25c-4839-acab-a622f6f70f88-serving-cert\") pod \"controller-manager-7469657588-kpf64\" (UID: \"a8ff14f9-e25c-4839-acab-a622f6f70f88\") " pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.250053 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8ff14f9-e25c-4839-acab-a622f6f70f88-client-ca\") pod \"controller-manager-7469657588-kpf64\" (UID: \"a8ff14f9-e25c-4839-acab-a622f6f70f88\") " pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.250089 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a8ff14f9-e25c-4839-acab-a622f6f70f88-proxy-ca-bundles\") pod \"controller-manager-7469657588-kpf64\" (UID: \"a8ff14f9-e25c-4839-acab-a622f6f70f88\") " pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.251542 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a8ff14f9-e25c-4839-acab-a622f6f70f88-proxy-ca-bundles\") pod \"controller-manager-7469657588-kpf64\" (UID: \"a8ff14f9-e25c-4839-acab-a622f6f70f88\") " pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.251587 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8ff14f9-e25c-4839-acab-a622f6f70f88-client-ca\") pod \"controller-manager-7469657588-kpf64\" (UID: \"a8ff14f9-e25c-4839-acab-a622f6f70f88\") " pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.263272 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ff14f9-e25c-4839-acab-a622f6f70f88-serving-cert\") pod \"controller-manager-7469657588-kpf64\" (UID: 
\"a8ff14f9-e25c-4839-acab-a622f6f70f88\") " pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.272042 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfhrk\" (UniqueName: \"kubernetes.io/projected/a8ff14f9-e25c-4839-acab-a622f6f70f88-kube-api-access-qfhrk\") pod \"controller-manager-7469657588-kpf64\" (UID: \"a8ff14f9-e25c-4839-acab-a622f6f70f88\") " pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.396825 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e030373dea9ffbc53b1f37a054b8fe51529b1e5690243c2f3a5c0c872134f808"} Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.398672 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" event={"ID":"2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3","Type":"ContainerStarted","Data":"71808a85287e54b9fb184ad4c73a074a1ff3d6b35824bd6122d42af589681e05"} Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.398837 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.400926 4632 generic.go:334] "Generic (PLEG): container finished" podID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" containerID="9fa8755799160674c9b9254d4cc4cd33b06b805f90944344a392116f94021c66" exitCode=0 Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.400964 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6bkt" event={"ID":"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e","Type":"ContainerDied","Data":"9fa8755799160674c9b9254d4cc4cd33b06b805f90944344a392116f94021c66"} Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.403456 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z2vlz" event={"ID":"ab3aaffc-bf11-41a1-9a91-3bf97d2be4ad","Type":"ContainerStarted","Data":"048d443b7ebb101ab26472c762a7d62976644eb2a8a52ba9e023d75be07775dd"} Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.440635 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-z2vlz" podStartSLOduration=284.440611377 podStartE2EDuration="4m44.440611377s" podCreationTimestamp="2026-03-13 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:09:24.43813261 +0000 UTC m=+338.460662763" watchObservedRunningTime="2026-03-13 10:09:24.440611377 +0000 UTC m=+338.463141510" Mar 13 10:09:24 crc kubenswrapper[4632]: I0313 10:09:24.468373 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" podStartSLOduration=5.468351565 podStartE2EDuration="5.468351565s" podCreationTimestamp="2026-03-13 10:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:09:24.465757075 +0000 UTC m=+338.488287208" watchObservedRunningTime="2026-03-13 10:09:24.468351565 +0000 UTC m=+338.490881698" Mar 13 10:09:24 crc 
kubenswrapper[4632]: I0313 10:09:24.566886 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 10:09:25 crc kubenswrapper[4632]: I0313 10:09:25.017078 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 13 10:09:25 crc kubenswrapper[4632]: I0313 10:09:25.125029 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 10:09:25 crc kubenswrapper[4632]: I0313 10:09:25.134202 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ff14f9-e25c-4839-acab-a622f6f70f88-config\") pod \"controller-manager-7469657588-kpf64\" (UID: \"a8ff14f9-e25c-4839-acab-a622f6f70f88\") " pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:25 crc kubenswrapper[4632]: I0313 10:09:25.201323 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:25 crc kubenswrapper[4632]: I0313 10:09:25.416146 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6bkt" event={"ID":"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e","Type":"ContainerStarted","Data":"18dc7d50bd20cdcc8bd750dbb90186346da79c165a2b1398fb9fce2a58fd1224"} Mar 13 10:09:25 crc kubenswrapper[4632]: I0313 10:09:25.446168 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jvh86" Mar 13 10:09:25 crc kubenswrapper[4632]: I0313 10:09:25.452105 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t6bkt" podStartSLOduration=4.839266753 podStartE2EDuration="1m27.452077464s" podCreationTimestamp="2026-03-13 10:07:58 +0000 UTC" firstStartedPulling="2026-03-13 10:08:02.326755102 +0000 UTC m=+256.349285235" lastFinishedPulling="2026-03-13 10:09:24.939565813 +0000 UTC m=+338.962095946" observedRunningTime="2026-03-13 10:09:25.44667907 +0000 UTC m=+339.469209213" watchObservedRunningTime="2026-03-13 10:09:25.452077464 +0000 UTC m=+339.474607607" Mar 13 10:09:25 crc kubenswrapper[4632]: I0313 10:09:25.571403 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jvh86" Mar 13 10:09:25 crc kubenswrapper[4632]: I0313 10:09:25.817729 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xd455" Mar 13 10:09:25 crc kubenswrapper[4632]: I0313 10:09:25.947722 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7469657588-kpf64"] Mar 13 10:09:26 crc kubenswrapper[4632]: I0313 10:09:26.421262 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" event={"ID":"a8ff14f9-e25c-4839-acab-a622f6f70f88","Type":"ContainerStarted","Data":"432a739d763fd09cb52fbc4a7bbe481e0fb4c89b88f7822f73b594d3596d0d39"} Mar 13 10:09:26 crc kubenswrapper[4632]: I0313 10:09:26.422561 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" 
event={"ID":"a8ff14f9-e25c-4839-acab-a622f6f70f88","Type":"ContainerStarted","Data":"0c9127c8a737402e76c86b1f73dd05b17a90037c7a9a2c7f8c0120f64d74a91d"} Mar 13 10:09:26 crc kubenswrapper[4632]: I0313 10:09:26.866717 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p8wjg" Mar 13 10:09:26 crc kubenswrapper[4632]: I0313 10:09:26.866804 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p8wjg" Mar 13 10:09:26 crc kubenswrapper[4632]: I0313 10:09:26.932704 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p8wjg" Mar 13 10:09:27 crc kubenswrapper[4632]: I0313 10:09:27.095431 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8z668" Mar 13 10:09:27 crc kubenswrapper[4632]: I0313 10:09:27.095492 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8z668" Mar 13 10:09:27 crc kubenswrapper[4632]: I0313 10:09:27.149394 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8z668" Mar 13 10:09:27 crc kubenswrapper[4632]: I0313 10:09:27.430464 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:27 crc kubenswrapper[4632]: I0313 10:09:27.438864 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 10:09:27 crc kubenswrapper[4632]: I0313 10:09:27.454202 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" podStartSLOduration=8.454183168 podStartE2EDuration="8.454183168s" podCreationTimestamp="2026-03-13 10:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:09:27.45385313 +0000 UTC m=+341.476383273" watchObservedRunningTime="2026-03-13 10:09:27.454183168 +0000 UTC m=+341.476713291" Mar 13 10:09:27 crc kubenswrapper[4632]: I0313 10:09:27.481384 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p8wjg" Mar 13 10:09:27 crc kubenswrapper[4632]: I0313 10:09:27.496085 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8z668" Mar 13 10:09:28 crc kubenswrapper[4632]: I0313 10:09:28.508437 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:09:28 crc kubenswrapper[4632]: I0313 10:09:28.508505 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:09:28 crc kubenswrapper[4632]: I0313 10:09:28.560348 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:09:28 crc kubenswrapper[4632]: I0313 10:09:28.979334 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xr5l9" Mar 13 10:09:28 crc kubenswrapper[4632]: I0313 10:09:28.979674 4632 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xr5l9" Mar 13 10:09:28 crc kubenswrapper[4632]: I0313 10:09:28.993982 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t6bkt" Mar 13 10:09:28 crc kubenswrapper[4632]: I0313 10:09:28.994028 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t6bkt" Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.038017 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t6bkt" Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.051930 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z2gc7" Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.052308 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z2gc7" Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.089962 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xd455"] Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.090254 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xd455" podUID="cd6e3c73-fbc1-4213-bbef-02dd2b0587f8" containerName="registry-server" containerID="cri-o://f1dbecf7ff84705a27018ceaf7e07f776f8da213446108c63db8f788119a4f28" gracePeriod=2 Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.445080 4632 generic.go:334] "Generic (PLEG): container finished" podID="cd6e3c73-fbc1-4213-bbef-02dd2b0587f8" containerID="f1dbecf7ff84705a27018ceaf7e07f776f8da213446108c63db8f788119a4f28" exitCode=0 Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.445196 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xd455" event={"ID":"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8","Type":"ContainerDied","Data":"f1dbecf7ff84705a27018ceaf7e07f776f8da213446108c63db8f788119a4f28"} Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.491726 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.584363 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xd455" Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.653029 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-catalog-content\") pod \"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8\" (UID: \"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8\") " Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.653148 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-utilities\") pod \"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8\" (UID: \"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8\") " Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.653191 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh6g9\" (UniqueName: \"kubernetes.io/projected/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-kube-api-access-kh6g9\") pod \"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8\" (UID: \"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8\") " Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.654165 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-utilities" (OuterVolumeSpecName: "utilities") pod "cd6e3c73-fbc1-4213-bbef-02dd2b0587f8" (UID: "cd6e3c73-fbc1-4213-bbef-02dd2b0587f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.659194 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-kube-api-access-kh6g9" (OuterVolumeSpecName: "kube-api-access-kh6g9") pod "cd6e3c73-fbc1-4213-bbef-02dd2b0587f8" (UID: "cd6e3c73-fbc1-4213-bbef-02dd2b0587f8"). InnerVolumeSpecName "kube-api-access-kh6g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.691498 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8z668"] Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.691748 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8z668" podUID="9845f384-2720-4d6a-aa73-1e66e30f7c2c" containerName="registry-server" containerID="cri-o://001fe1eb309384fca387c523307ccbff0d5d514d2d7b29f074cc94a2210761cb" gracePeriod=2 Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.716011 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd6e3c73-fbc1-4213-bbef-02dd2b0587f8" (UID: "cd6e3c73-fbc1-4213-bbef-02dd2b0587f8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.756202 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.756247 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:29 crc kubenswrapper[4632]: I0313 10:09:29.756260 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh6g9\" (UniqueName: \"kubernetes.io/projected/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8-kube-api-access-kh6g9\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.015311 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr5l9" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" containerName="registry-server" probeResult="failure" output=< Mar 13 10:09:30 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:09:30 crc kubenswrapper[4632]: > Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.089445 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8z668" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.103861 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z2gc7" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" containerName="registry-server" probeResult="failure" output=< Mar 13 10:09:30 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:09:30 crc kubenswrapper[4632]: > Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.162342 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9845f384-2720-4d6a-aa73-1e66e30f7c2c-utilities\") pod \"9845f384-2720-4d6a-aa73-1e66e30f7c2c\" (UID: \"9845f384-2720-4d6a-aa73-1e66e30f7c2c\") " Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.162424 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9845f384-2720-4d6a-aa73-1e66e30f7c2c-catalog-content\") pod \"9845f384-2720-4d6a-aa73-1e66e30f7c2c\" (UID: \"9845f384-2720-4d6a-aa73-1e66e30f7c2c\") " Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.162517 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdp56\" (UniqueName: \"kubernetes.io/projected/9845f384-2720-4d6a-aa73-1e66e30f7c2c-kube-api-access-sdp56\") pod \"9845f384-2720-4d6a-aa73-1e66e30f7c2c\" (UID: \"9845f384-2720-4d6a-aa73-1e66e30f7c2c\") " Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.163310 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9845f384-2720-4d6a-aa73-1e66e30f7c2c-utilities" (OuterVolumeSpecName: "utilities") pod "9845f384-2720-4d6a-aa73-1e66e30f7c2c" (UID: "9845f384-2720-4d6a-aa73-1e66e30f7c2c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.165736 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9845f384-2720-4d6a-aa73-1e66e30f7c2c-kube-api-access-sdp56" (OuterVolumeSpecName: "kube-api-access-sdp56") pod "9845f384-2720-4d6a-aa73-1e66e30f7c2c" (UID: "9845f384-2720-4d6a-aa73-1e66e30f7c2c"). InnerVolumeSpecName "kube-api-access-sdp56". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.223084 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9845f384-2720-4d6a-aa73-1e66e30f7c2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9845f384-2720-4d6a-aa73-1e66e30f7c2c" (UID: "9845f384-2720-4d6a-aa73-1e66e30f7c2c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.264117 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9845f384-2720-4d6a-aa73-1e66e30f7c2c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.264162 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdp56\" (UniqueName: \"kubernetes.io/projected/9845f384-2720-4d6a-aa73-1e66e30f7c2c-kube-api-access-sdp56\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.264174 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9845f384-2720-4d6a-aa73-1e66e30f7c2c-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.452500 4632 generic.go:334] "Generic (PLEG): container finished" podID="9845f384-2720-4d6a-aa73-1e66e30f7c2c" containerID="001fe1eb309384fca387c523307ccbff0d5d514d2d7b29f074cc94a2210761cb" exitCode=0 Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.452583 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z668" event={"ID":"9845f384-2720-4d6a-aa73-1e66e30f7c2c","Type":"ContainerDied","Data":"001fe1eb309384fca387c523307ccbff0d5d514d2d7b29f074cc94a2210761cb"} Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.452587 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8z668" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.452624 4632 scope.go:117] "RemoveContainer" containerID="001fe1eb309384fca387c523307ccbff0d5d514d2d7b29f074cc94a2210761cb" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.452614 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z668" event={"ID":"9845f384-2720-4d6a-aa73-1e66e30f7c2c","Type":"ContainerDied","Data":"eb6537c579cc3249bae831f8164a219c024fbc6e74b0df55017ce52d6b143567"} Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.455556 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xd455" event={"ID":"cd6e3c73-fbc1-4213-bbef-02dd2b0587f8","Type":"ContainerDied","Data":"33259f14f07cee3a1d7261a44a8f74cbd0957ccef81016b9631c3a0a7ccd4085"} Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.455607 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xd455" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.473085 4632 scope.go:117] "RemoveContainer" containerID="577f89a2c63ffc7af6c8d9a11a12240e1b316c59a7b108fcd47eb2cd9dc3c8ec" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.480155 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8z668"] Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.491150 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8z668"] Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.497463 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xd455"] Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.500735 4632 scope.go:117] "RemoveContainer" containerID="a609582fe9641518ec575a14e6a93f5bb1f502cb63d1e38602356c26bed99217" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.502342 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xd455"] Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.524666 4632 scope.go:117] "RemoveContainer" containerID="001fe1eb309384fca387c523307ccbff0d5d514d2d7b29f074cc94a2210761cb" Mar 13 10:09:30 crc kubenswrapper[4632]: E0313 10:09:30.525072 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"001fe1eb309384fca387c523307ccbff0d5d514d2d7b29f074cc94a2210761cb\": container with ID starting with 001fe1eb309384fca387c523307ccbff0d5d514d2d7b29f074cc94a2210761cb not found: ID does not exist" containerID="001fe1eb309384fca387c523307ccbff0d5d514d2d7b29f074cc94a2210761cb" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.525101 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"001fe1eb309384fca387c523307ccbff0d5d514d2d7b29f074cc94a2210761cb"} err="failed to get container status \"001fe1eb309384fca387c523307ccbff0d5d514d2d7b29f074cc94a2210761cb\": rpc error: code = NotFound desc = could not find container \"001fe1eb309384fca387c523307ccbff0d5d514d2d7b29f074cc94a2210761cb\": container with ID starting with 001fe1eb309384fca387c523307ccbff0d5d514d2d7b29f074cc94a2210761cb not found: ID does not exist" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.525120 4632 scope.go:117] "RemoveContainer" containerID="577f89a2c63ffc7af6c8d9a11a12240e1b316c59a7b108fcd47eb2cd9dc3c8ec" Mar 13 10:09:30 crc kubenswrapper[4632]: E0313 10:09:30.525310 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"577f89a2c63ffc7af6c8d9a11a12240e1b316c59a7b108fcd47eb2cd9dc3c8ec\": container with ID starting with 577f89a2c63ffc7af6c8d9a11a12240e1b316c59a7b108fcd47eb2cd9dc3c8ec not found: ID does not exist" containerID="577f89a2c63ffc7af6c8d9a11a12240e1b316c59a7b108fcd47eb2cd9dc3c8ec" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.525326 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"577f89a2c63ffc7af6c8d9a11a12240e1b316c59a7b108fcd47eb2cd9dc3c8ec"} err="failed to get container status \"577f89a2c63ffc7af6c8d9a11a12240e1b316c59a7b108fcd47eb2cd9dc3c8ec\": rpc error: code = NotFound desc = could not find container \"577f89a2c63ffc7af6c8d9a11a12240e1b316c59a7b108fcd47eb2cd9dc3c8ec\": container with ID starting with 
577f89a2c63ffc7af6c8d9a11a12240e1b316c59a7b108fcd47eb2cd9dc3c8ec not found: ID does not exist" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.525338 4632 scope.go:117] "RemoveContainer" containerID="a609582fe9641518ec575a14e6a93f5bb1f502cb63d1e38602356c26bed99217" Mar 13 10:09:30 crc kubenswrapper[4632]: E0313 10:09:30.525490 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a609582fe9641518ec575a14e6a93f5bb1f502cb63d1e38602356c26bed99217\": container with ID starting with a609582fe9641518ec575a14e6a93f5bb1f502cb63d1e38602356c26bed99217 not found: ID does not exist" containerID="a609582fe9641518ec575a14e6a93f5bb1f502cb63d1e38602356c26bed99217" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.525503 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a609582fe9641518ec575a14e6a93f5bb1f502cb63d1e38602356c26bed99217"} err="failed to get container status \"a609582fe9641518ec575a14e6a93f5bb1f502cb63d1e38602356c26bed99217\": rpc error: code = NotFound desc = could not find container \"a609582fe9641518ec575a14e6a93f5bb1f502cb63d1e38602356c26bed99217\": container with ID starting with a609582fe9641518ec575a14e6a93f5bb1f502cb63d1e38602356c26bed99217 not found: ID does not exist" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.525517 4632 scope.go:117] "RemoveContainer" containerID="f1dbecf7ff84705a27018ceaf7e07f776f8da213446108c63db8f788119a4f28" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.572495 4632 scope.go:117] "RemoveContainer" containerID="1c7d4d3dbdb9375cd1f14c42f62f344139bab8e0abb1403e1fe655b1b72e40c4" Mar 13 10:09:30 crc kubenswrapper[4632]: I0313 10:09:30.587565 4632 scope.go:117] "RemoveContainer" containerID="39c617653cdae12029a38a740d3aa9e4c08c056d9865caf4f87830fbf0817555" Mar 13 10:09:32 crc kubenswrapper[4632]: I0313 10:09:32.054060 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9845f384-2720-4d6a-aa73-1e66e30f7c2c" path="/var/lib/kubelet/pods/9845f384-2720-4d6a-aa73-1e66e30f7c2c/volumes" Mar 13 10:09:32 crc kubenswrapper[4632]: I0313 10:09:32.055752 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd6e3c73-fbc1-4213-bbef-02dd2b0587f8" path="/var/lib/kubelet/pods/cd6e3c73-fbc1-4213-bbef-02dd2b0587f8/volumes" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.668046 4632 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.668636 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54" gracePeriod=15 Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.668791 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94" gracePeriod=15 Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.668832 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" containerID="cri-o://706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc" gracePeriod=15 Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.668862 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf" gracePeriod=15 Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.668890 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe" gracePeriod=15 Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.669758 4632 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 13 10:09:35 crc kubenswrapper[4632]: E0313 10:09:35.670142 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd6e3c73-fbc1-4213-bbef-02dd2b0587f8" containerName="extract-utilities" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670171 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd6e3c73-fbc1-4213-bbef-02dd2b0587f8" containerName="extract-utilities" Mar 13 10:09:35 crc kubenswrapper[4632]: E0313 10:09:35.670192 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670224 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 13 10:09:35 crc kubenswrapper[4632]: E0313 10:09:35.670236 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd6e3c73-fbc1-4213-bbef-02dd2b0587f8" containerName="extract-content" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670244 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd6e3c73-fbc1-4213-bbef-02dd2b0587f8" containerName="extract-content" Mar 13 10:09:35 crc kubenswrapper[4632]: E0313 10:09:35.670255 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9845f384-2720-4d6a-aa73-1e66e30f7c2c" containerName="registry-server" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670262 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="9845f384-2720-4d6a-aa73-1e66e30f7c2c" containerName="registry-server" Mar 13 10:09:35 crc kubenswrapper[4632]: E0313 10:09:35.670274 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9845f384-2720-4d6a-aa73-1e66e30f7c2c" containerName="extract-content" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670311 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="9845f384-2720-4d6a-aa73-1e66e30f7c2c" containerName="extract-content" Mar 13 10:09:35 crc kubenswrapper[4632]: E0313 10:09:35.670323 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670344 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 10:09:35 crc kubenswrapper[4632]: E0313 
10:09:35.670352 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670385 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 10:09:35 crc kubenswrapper[4632]: E0313 10:09:35.670398 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670405 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 13 10:09:35 crc kubenswrapper[4632]: E0313 10:09:35.670416 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670423 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 13 10:09:35 crc kubenswrapper[4632]: E0313 10:09:35.670434 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670441 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 10:09:35 crc kubenswrapper[4632]: E0313 10:09:35.670479 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9845f384-2720-4d6a-aa73-1e66e30f7c2c" containerName="extract-utilities" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670486 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="9845f384-2720-4d6a-aa73-1e66e30f7c2c" containerName="extract-utilities" Mar 13 10:09:35 crc kubenswrapper[4632]: E0313 10:09:35.670500 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670507 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 10:09:35 crc kubenswrapper[4632]: E0313 10:09:35.670516 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670523 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 10:09:35 crc kubenswrapper[4632]: E0313 10:09:35.670558 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670565 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 13 10:09:35 crc kubenswrapper[4632]: E0313 10:09:35.670573 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd6e3c73-fbc1-4213-bbef-02dd2b0587f8" containerName="registry-server" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670579 4632 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cd6e3c73-fbc1-4213-bbef-02dd2b0587f8" containerName="registry-server" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670729 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd6e3c73-fbc1-4213-bbef-02dd2b0587f8" containerName="registry-server" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670754 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670764 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670795 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670802 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670812 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670821 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="9845f384-2720-4d6a-aa73-1e66e30f7c2c" containerName="registry-server" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670830 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.670839 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 10:09:35 crc kubenswrapper[4632]: E0313 10:09:35.671008 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.671039 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.671215 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.671231 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.672502 4632 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.672962 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.694621 4632 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.718841 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.742008 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.742280 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.742395 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.742493 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.742601 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.742737 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.742831 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.742976 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.844538 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.844632 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.844693 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.844713 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.844744 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.844764 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.844725 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.844888 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.844810 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.844924 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.844968 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.844996 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.845007 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.845019 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.845072 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:35 crc kubenswrapper[4632]: I0313 10:09:35.845149 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:36 crc kubenswrapper[4632]: I0313 10:09:36.014846 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:09:36 crc kubenswrapper[4632]: E0313 10:09:36.047777 4632 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189c5eca412cae33 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:09:36.046984755 +0000 UTC m=+350.069514888,LastTimestamp:2026-03-13 10:09:36.046984755 +0000 UTC m=+350.069514888,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 10:09:36 crc kubenswrapper[4632]: E0313 10:09:36.083384 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:09:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:09:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:09:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:09:36Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4855408bd0e4d0711383d0c14dcad53c98255ff9f83f6cbefb57e47eacc1f1f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:97bdbb5854e4ad7976209a44cff02c8a2b9542f58ad007c06a5c3a5e8266def1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1284762325},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketpla
ce-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:70c85e2aeb7db0a454101307851f490057ab53449c50ad9d86c54a698dd4913a\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:762bdf2da1fce19a4a24a6931f555b482c5c2314895b2f68aed74658266819a7\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221741278},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-cli@sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9\\\",\\\"registry.redhat.io/openshift4/ose-cli@sha256:ef83967297f619f45075e7fd1428a1eb981622a6c174c46fb53b158ed24bed85\\\",\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f7
99f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.
io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Mar 13 10:09:36 crc kubenswrapper[4632]: E0313 10:09:36.083836 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Mar 13 10:09:36 crc kubenswrapper[4632]: E0313 10:09:36.084108 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Mar 13 10:09:36 crc kubenswrapper[4632]: E0313 10:09:36.084376 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Mar 13 10:09:36 crc kubenswrapper[4632]: E0313 10:09:36.084594 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Mar 13 10:09:36 crc kubenswrapper[4632]: E0313 10:09:36.084616 4632 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 10:09:36 crc kubenswrapper[4632]: I0313 10:09:36.495214 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 13 10:09:36 crc kubenswrapper[4632]: I0313 10:09:36.496493 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 13 10:09:36 crc kubenswrapper[4632]: I0313 10:09:36.497173 4632 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94" exitCode=0 Mar 13 10:09:36 crc 
Mar 13 10:09:36 crc kubenswrapper[4632]: I0313 10:09:36.497204 4632 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf" exitCode=0
Mar 13 10:09:36 crc kubenswrapper[4632]: I0313 10:09:36.497212 4632 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe" exitCode=2
Mar 13 10:09:36 crc kubenswrapper[4632]: I0313 10:09:36.497289 4632 scope.go:117] "RemoveContainer" containerID="9898f7d8c644921fdc43a7906faeef577b236192f94a1ed911525b50ba8e68ae"
Mar 13 10:09:36 crc kubenswrapper[4632]: I0313 10:09:36.499696 4632 generic.go:334] "Generic (PLEG): container finished" podID="dc39d207-84a2-4a28-9296-bed684aa308d" containerID="c463b62ee1a6928ceb028fe480183c5ca7bb846ec47d4163fa232376e05db524" exitCode=0
Mar 13 10:09:36 crc kubenswrapper[4632]: I0313 10:09:36.499765 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dc39d207-84a2-4a28-9296-bed684aa308d","Type":"ContainerDied","Data":"c463b62ee1a6928ceb028fe480183c5ca7bb846ec47d4163fa232376e05db524"}
Mar 13 10:09:36 crc kubenswrapper[4632]: I0313 10:09:36.501010 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:36 crc kubenswrapper[4632]: I0313 10:09:36.501459 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:36 crc kubenswrapper[4632]: I0313 10:09:36.502226 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"80143265ebaee0f3a54053c9c203e48c0b8ae49b675b972452da2268c99bd9ad"}
Mar 13 10:09:36 crc kubenswrapper[4632]: I0313 10:09:36.502261 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7655fa97fbc58dc4d8162ba613e0a9424aef91ec5c3ab185bf572e8fb571eb1a"}
Mar 13 10:09:36 crc kubenswrapper[4632]: I0313 10:09:36.502676 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:36 crc kubenswrapper[4632]: I0313 10:09:36.503014 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:37 crc kubenswrapper[4632]: I0313 10:09:37.524096 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Mar 13 10:09:37 crc kubenswrapper[4632]: E0313 10:09:37.709386 4632 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189c5eca412cae33 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:09:36.046984755 +0000 UTC m=+350.069514888,LastTimestamp:2026-03-13 10:09:36.046984755 +0000 UTC m=+350.069514888,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.046349 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.047152 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.062817 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.063326 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.063729 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.117921 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.118871 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.119677 4632 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.120187 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.120692 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.177850 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.177958 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc39d207-84a2-4a28-9296-bed684aa308d-var-lock\") pod \"dc39d207-84a2-4a28-9296-bed684aa308d\" (UID: \"dc39d207-84a2-4a28-9296-bed684aa308d\") "
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.178005 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.178047 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.178064 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc39d207-84a2-4a28-9296-bed684aa308d-var-lock" (OuterVolumeSpecName: "var-lock") pod "dc39d207-84a2-4a28-9296-bed684aa308d" (UID: "dc39d207-84a2-4a28-9296-bed684aa308d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.178094 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.178150 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.178151 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.178188 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc39d207-84a2-4a28-9296-bed684aa308d-kube-api-access\") pod \"dc39d207-84a2-4a28-9296-bed684aa308d\" (UID: \"dc39d207-84a2-4a28-9296-bed684aa308d\") " Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.178235 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc39d207-84a2-4a28-9296-bed684aa308d-kubelet-dir\") pod \"dc39d207-84a2-4a28-9296-bed684aa308d\" (UID: \"dc39d207-84a2-4a28-9296-bed684aa308d\") " Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.178693 4632 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.178720 4632 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc39d207-84a2-4a28-9296-bed684aa308d-var-lock\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.178732 4632 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.178743 4632 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.178801 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc39d207-84a2-4a28-9296-bed684aa308d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dc39d207-84a2-4a28-9296-bed684aa308d" (UID: "dc39d207-84a2-4a28-9296-bed684aa308d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.183968 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc39d207-84a2-4a28-9296-bed684aa308d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dc39d207-84a2-4a28-9296-bed684aa308d" (UID: "dc39d207-84a2-4a28-9296-bed684aa308d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.280052 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc39d207-84a2-4a28-9296-bed684aa308d-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.280594 4632 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc39d207-84a2-4a28-9296-bed684aa308d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.554167 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.555008 4632 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54" exitCode=0 Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.555076 4632 scope.go:117] "RemoveContainer" containerID="21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94" Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.555252 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.564098 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dc39d207-84a2-4a28-9296-bed684aa308d","Type":"ContainerDied","Data":"7017997794d887d37a83222e44995b72d5d076c42028b8e1498fdb1f2cb4d188"} Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.564140 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7017997794d887d37a83222e44995b72d5d076c42028b8e1498fdb1f2cb4d188" Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.564220 4632 util.go:48] "No ready sandbox for pod can be found. 
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.573105 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.574062 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.574309 4632 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.576417 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.576637 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.576911 4632 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.581053 4632 scope.go:117] "RemoveContainer" containerID="706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.597771 4632 scope.go:117] "RemoveContainer" containerID="516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.611314 4632 scope.go:117] "RemoveContainer" containerID="cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.624105 4632 scope.go:117] "RemoveContainer" containerID="5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.640393 4632 scope.go:117] "RemoveContainer" containerID="7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.659119 4632 scope.go:117] "RemoveContainer" containerID="21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94"
Mar 13 10:09:38 crc kubenswrapper[4632]: E0313 10:09:38.659882 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\": container with ID starting with 21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94 not found: ID does not exist" containerID="21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.659919 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94"} err="failed to get container status \"21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\": rpc error: code = NotFound desc = could not find container \"21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94\": container with ID starting with 21d33939d9cfb29a666b40f15b0bd1e73ec4c62db28db999433541739f2a1c94 not found: ID does not exist"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.659963 4632 scope.go:117] "RemoveContainer" containerID="706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc"
Mar 13 10:09:38 crc kubenswrapper[4632]: E0313 10:09:38.661509 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\": container with ID starting with 706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc not found: ID does not exist" containerID="706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.661547 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc"} err="failed to get container status \"706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\": rpc error: code = NotFound desc = could not find container \"706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc\": container with ID starting with 706e2c8f5823a8e4b5a39a6a6869078c9e5fdf615672ecea030e48b8ab5d13fc not found: ID does not exist"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.661568 4632 scope.go:117] "RemoveContainer" containerID="516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf"
Mar 13 10:09:38 crc kubenswrapper[4632]: E0313 10:09:38.661918 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\": container with ID starting with 516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf not found: ID does not exist" containerID="516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.662214 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf"} err="failed to get container status \"516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\": rpc error: code = NotFound desc = could not find container \"516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf\": container with ID starting with 516cedf1e60450f8fef80ef00bf0c4e23dd2a43c5296c49bd4b969c053aa73bf not found: ID does not exist"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.662308 4632 scope.go:117] "RemoveContainer" containerID="cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe"
Mar 13 10:09:38 crc kubenswrapper[4632]: E0313 10:09:38.662930 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\": container with ID starting with cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe not found: ID does not exist" containerID="cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.663101 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe"} err="failed to get container status \"cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\": rpc error: code = NotFound desc = could not find container \"cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe\": container with ID starting with cc09fb371b9e509733ffbb2d3e0190320c5ab77049f1e7c01c74eeb4799944fe not found: ID does not exist"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.663373 4632 scope.go:117] "RemoveContainer" containerID="5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54"
Mar 13 10:09:38 crc kubenswrapper[4632]: E0313 10:09:38.663874 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\": container with ID starting with 5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54 not found: ID does not exist" containerID="5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.663902 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54"} err="failed to get container status \"5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\": rpc error: code = NotFound desc = could not find container \"5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54\": container with ID starting with 5feec47af6cb6bcb0c36f1d5d8a17f568b30ee5b49c27d96b97d82626d650d54 not found: ID does not exist"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.663918 4632 scope.go:117] "RemoveContainer" containerID="7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990"
Mar 13 10:09:38 crc kubenswrapper[4632]: E0313 10:09:38.664549 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\": container with ID starting with 7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990 not found: ID does not exist" containerID="7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990"
Mar 13 10:09:38 crc kubenswrapper[4632]: I0313 10:09:38.664581 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990"} err="failed to get container status \"7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\": rpc error: code = NotFound desc = could not find container \"7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990\": container with ID starting with 7deddadce58678bd2e5b5e8f190024255509ceb687563db8f9d8984bba296990 not found: ID does not exist"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.026326 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xr5l9"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.027730 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.028051 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.028318 4632 status_manager.go:851] "Failed to get status for pod" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" pod="openshift-marketplace/redhat-operators-xr5l9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xr5l9\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.028662 4632 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.053554 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t6bkt"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.054056 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.054331 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.054523 4632 status_manager.go:851] "Failed to get status for pod" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" pod="openshift-marketplace/redhat-operators-xr5l9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xr5l9\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.054747 4632 status_manager.go:851] "Failed to get status for pod" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" pod="openshift-marketplace/redhat-marketplace-t6bkt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t6bkt\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.055154 4632 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.071773 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xr5l9"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.072287 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.072732 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.073022 4632 status_manager.go:851] "Failed to get status for pod" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" pod="openshift-marketplace/redhat-operators-xr5l9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xr5l9\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.073354 4632 status_manager.go:851] "Failed to get status for pod" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" pod="openshift-marketplace/redhat-marketplace-t6bkt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t6bkt\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.073565 4632 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.094748 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z2gc7"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.095361 4632 status_manager.go:851] "Failed to get status for pod" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" pod="openshift-marketplace/redhat-operators-z2gc7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z2gc7\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.095706 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.095996 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.096302 4632 status_manager.go:851] "Failed to get status for pod" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" pod="openshift-marketplace/redhat-operators-xr5l9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xr5l9\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.096729 4632 status_manager.go:851] "Failed to get status for pod" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" pod="openshift-marketplace/redhat-marketplace-t6bkt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t6bkt\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.097059 4632 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.133198 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z2gc7"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.133738 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.134172 4632 status_manager.go:851] "Failed to get status for pod" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" pod="openshift-marketplace/redhat-operators-z2gc7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z2gc7\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.134548 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.134895 4632 status_manager.go:851] "Failed to get status for pod" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" pod="openshift-marketplace/redhat-operators-xr5l9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xr5l9\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.135179 4632 status_manager.go:851] "Failed to get status for pod" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" pod="openshift-marketplace/redhat-marketplace-t6bkt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t6bkt\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:39 crc kubenswrapper[4632]: I0313 10:09:39.135545 4632 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:40 crc kubenswrapper[4632]: I0313 10:09:40.050279 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Mar 13 10:09:45 crc kubenswrapper[4632]: E0313 10:09:45.331634 4632 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:45 crc kubenswrapper[4632]: E0313 10:09:45.332691 4632 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:45 crc kubenswrapper[4632]: E0313 10:09:45.333188 4632 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:45 crc kubenswrapper[4632]: E0313 10:09:45.333440 4632 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:45 crc kubenswrapper[4632]: E0313 10:09:45.333881 4632 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:45 crc kubenswrapper[4632]: I0313 10:09:45.333914 4632 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 13 10:09:45 crc kubenswrapper[4632]: E0313 10:09:45.334395 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="200ms"
Mar 13 10:09:45 crc kubenswrapper[4632]: E0313 10:09:45.535395 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="400ms"
Mar 13 10:09:45 crc kubenswrapper[4632]: E0313 10:09:45.936848 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="800ms"
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="800ms" Mar 13 10:09:46 crc kubenswrapper[4632]: E0313 10:09:46.301110 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:09:46Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:09:46Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:09:46Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:09:46Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4855408bd0e4d0711383d0c14dcad53c98255ff9f83f6cbefb57e47eacc1f1f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:97bdbb5854e4ad7976209a44cff02c8a2b9542f58ad007c06a5c3a5e8266def1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1284762325},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:70c85e2aeb7db0a454101307851f490057ab53449c50ad9d86c54a698dd4913a\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:762bdf2da1fce19a4a24a6931f555b482c5c2314895b2f68aed74658266819a7\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221741278},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-inde
x@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-cli@sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9\\\",\\\"registry.redhat.io/openshift4/ose-cli@sha256:ef83967297f619f45075e7fd1428a1eb981622a6c174c46fb53b158ed24bed85\\\",\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\
\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Mar 13 10:09:46 crc kubenswrapper[4632]: E0313 10:09:46.301772 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Mar 13 10:09:46 crc kubenswrapper[4632]: E0313 10:09:46.302334 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Mar 13 10:09:46 crc kubenswrapper[4632]: E0313 10:09:46.302609 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Mar 13 10:09:46 crc kubenswrapper[4632]: E0313 10:09:46.302979 4632 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Mar 13 10:09:46 crc kubenswrapper[4632]: E0313 10:09:46.303010 4632 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 10:09:46 crc kubenswrapper[4632]: E0313 10:09:46.738284 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="1.6s" Mar 13 10:09:47 crc kubenswrapper[4632]: E0313 10:09:47.711268 4632 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189c5eca412cae33 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 10:09:36.046984755 +0000 UTC m=+350.069514888,LastTimestamp:2026-03-13 10:09:36.046984755 +0000 UTC m=+350.069514888,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
Mar 13 10:09:48 crc kubenswrapper[4632]: I0313 10:09:48.046070 4632 status_manager.go:851] "Failed to get status for pod" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" pod="openshift-marketplace/redhat-operators-xr5l9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xr5l9\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:48 crc kubenswrapper[4632]: I0313 10:09:48.047682 4632 status_manager.go:851] "Failed to get status for pod" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" pod="openshift-marketplace/redhat-marketplace-t6bkt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t6bkt\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:48 crc kubenswrapper[4632]: I0313 10:09:48.048012 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:48 crc kubenswrapper[4632]: I0313 10:09:48.048485 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:48 crc kubenswrapper[4632]: I0313 10:09:48.048818 4632 status_manager.go:851] "Failed to get status for pod" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" pod="openshift-marketplace/redhat-operators-z2gc7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z2gc7\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:48 crc kubenswrapper[4632]: E0313 10:09:48.339446 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="3.2s"
Mar 13 10:09:48 crc kubenswrapper[4632]: I0313 10:09:48.638800 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 13 10:09:48 crc kubenswrapper[4632]: I0313 10:09:48.639470 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Mar 13 10:09:48 crc kubenswrapper[4632]: I0313 10:09:48.639522 4632 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8207ad7aa524df5af853ad8235c24a6addbc04e168248391629f00124901672d" exitCode=1
Mar 13 10:09:48 crc kubenswrapper[4632]: I0313 10:09:48.639558 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8207ad7aa524df5af853ad8235c24a6addbc04e168248391629f00124901672d"}
Mar 13 10:09:48 crc kubenswrapper[4632]: I0313 10:09:48.640091 4632 scope.go:117] "RemoveContainer" containerID="8207ad7aa524df5af853ad8235c24a6addbc04e168248391629f00124901672d"
Mar 13 10:09:48 crc kubenswrapper[4632]: I0313 10:09:48.644525 4632 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:48 crc kubenswrapper[4632]: I0313 10:09:48.645139 4632 status_manager.go:851] "Failed to get status for pod" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" pod="openshift-marketplace/redhat-operators-z2gc7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z2gc7\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:48 crc kubenswrapper[4632]: I0313 10:09:48.645724 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:48 crc kubenswrapper[4632]: I0313 10:09:48.645973 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:48 crc kubenswrapper[4632]: I0313 10:09:48.646217 4632 status_manager.go:851] "Failed to get status for pod" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" pod="openshift-marketplace/redhat-operators-xr5l9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xr5l9\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:48 crc kubenswrapper[4632]: I0313 10:09:48.646599 4632 status_manager.go:851] "Failed to get status for pod" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" pod="openshift-marketplace/redhat-marketplace-t6bkt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t6bkt\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:49 crc kubenswrapper[4632]: I0313 10:09:49.464191 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 13 10:09:49 crc kubenswrapper[4632]: I0313 10:09:49.647565 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 13 10:09:49 crc kubenswrapper[4632]: I0313 10:09:49.648181 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Mar 13 10:09:49 crc kubenswrapper[4632]: I0313 10:09:49.648224 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"baae4c96bdfec2410a2abf4602bb303365672a79eb0060c14f3d9416601f60d1"}
Mar 13 10:09:49 crc kubenswrapper[4632]: I0313 10:09:49.648932 4632 status_manager.go:851] "Failed to get status for pod" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" pod="openshift-marketplace/redhat-marketplace-t6bkt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t6bkt\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:49 crc kubenswrapper[4632]: I0313 10:09:49.649286 4632 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:49 crc kubenswrapper[4632]: I0313 10:09:49.649558 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:49 crc kubenswrapper[4632]: I0313 10:09:49.649820 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:49 crc kubenswrapper[4632]: I0313 10:09:49.650190 4632 status_manager.go:851] "Failed to get status for pod" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" pod="openshift-marketplace/redhat-operators-z2gc7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z2gc7\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:49 crc kubenswrapper[4632]: I0313 10:09:49.650431 4632 status_manager.go:851] "Failed to get status for pod" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" pod="openshift-marketplace/redhat-operators-xr5l9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xr5l9\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:50 crc kubenswrapper[4632]: I0313 10:09:50.659264 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.043605 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.045669 4632 status_manager.go:851] "Failed to get status for pod" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" pod="openshift-marketplace/redhat-operators-xr5l9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xr5l9\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.046336 4632 status_manager.go:851] "Failed to get status for pod" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" pod="openshift-marketplace/redhat-marketplace-t6bkt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t6bkt\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.046735 4632 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.048381 4632 status_manager.go:851] "Failed to get status for pod" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" pod="openshift-marketplace/redhat-operators-z2gc7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z2gc7\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.049144 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.049551 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.063767 4632 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="894cdc70-0747-4975-a22f-0dbd657e91a3"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.063841 4632 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="894cdc70-0747-4975-a22f-0dbd657e91a3"
Mar 13 10:09:51 crc kubenswrapper[4632]: E0313 10:09:51.064551 4632 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.065180 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 13 10:09:51 crc kubenswrapper[4632]: W0313 10:09:51.093925 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-feb3d08f78c5327c6b227ef5a9b9e96cb8c32626a27e5135a0eef53c7204e559 WatchSource:0}: Error finding container feb3d08f78c5327c6b227ef5a9b9e96cb8c32626a27e5135a0eef53c7204e559: Status 404 returned error can't find the container with id feb3d08f78c5327c6b227ef5a9b9e96cb8c32626a27e5135a0eef53c7204e559
Mar 13 10:09:51 crc kubenswrapper[4632]: E0313 10:09:51.371892 4632 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-conmon-5b221fb36af28b42296aad6aec56f5d67570fe622107623a4d8c3a607f65ef16.scope\": RecentStats: unable to find data in memory cache]"
Mar 13 10:09:51 crc kubenswrapper[4632]: E0313 10:09:51.540755 4632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="6.4s"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.664800 4632 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="5b221fb36af28b42296aad6aec56f5d67570fe622107623a4d8c3a607f65ef16" exitCode=0
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.664866 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"5b221fb36af28b42296aad6aec56f5d67570fe622107623a4d8c3a607f65ef16"}
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.664972 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"feb3d08f78c5327c6b227ef5a9b9e96cb8c32626a27e5135a0eef53c7204e559"}
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.665453 4632 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="894cdc70-0747-4975-a22f-0dbd657e91a3"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.665480 4632 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="894cdc70-0747-4975-a22f-0dbd657e91a3"
Mar 13 10:09:51 crc kubenswrapper[4632]: E0313 10:09:51.666351 4632 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.666376 4632 status_manager.go:851] "Failed to get status for pod" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" pod="openshift-marketplace/redhat-operators-xr5l9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xr5l9\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.667892 4632 status_manager.go:851] "Failed to get status for pod" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" pod="openshift-marketplace/redhat-marketplace-t6bkt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t6bkt\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.668493 4632 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.669343 4632 status_manager.go:851] "Failed to get status for pod" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" pod="openshift-marketplace/redhat-operators-z2gc7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z2gc7\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.669855 4632 status_manager.go:851] "Failed to get status for pod" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:51 crc kubenswrapper[4632]: I0313 10:09:51.670318 4632 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Mar 13 10:09:52 crc kubenswrapper[4632]: I0313 10:09:52.678564 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"016bae0f85c65923e3cb3cd2dcce29a8981231a5ebd3e4c3946a0114414ae9c4"}
Mar 13 10:09:52 crc kubenswrapper[4632]: I0313 10:09:52.679055 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fdd6e9435e3bd2de1b2abf061d353492f0fc229bccd25152f17a70a41c909a82"}
Mar 13 10:09:52 crc kubenswrapper[4632]: I0313 10:09:52.679071 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"da8aaa640f92e70999f9e593cd4ac8ad057243816be267692cf6551a0391d4ff"}
Mar 13 10:09:52 crc kubenswrapper[4632]: I0313 10:09:52.679087 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"985806ca915f8813ea8bb973aff61c5c2b66f7004966f949041798df4ba45a99"}
Mar 13 10:09:53 crc kubenswrapper[4632]: I0313 10:09:53.688004 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5181b69ae7fe22887bd111820025dd22fb49a6daf2e2154f18ad4508ff0af707"}
Mar 13 10:09:53 crc kubenswrapper[4632]: I0313 10:09:53.688309 4632 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="894cdc70-0747-4975-a22f-0dbd657e91a3"
"Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="894cdc70-0747-4975-a22f-0dbd657e91a3" Mar 13 10:09:53 crc kubenswrapper[4632]: I0313 10:09:53.688325 4632 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="894cdc70-0747-4975-a22f-0dbd657e91a3" Mar 13 10:09:53 crc kubenswrapper[4632]: I0313 10:09:53.688519 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:09:56 crc kubenswrapper[4632]: I0313 10:09:56.066139 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:09:56 crc kubenswrapper[4632]: I0313 10:09:56.066534 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:09:56 crc kubenswrapper[4632]: I0313 10:09:56.071906 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:09:58 crc kubenswrapper[4632]: I0313 10:09:58.712665 4632 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:09:58 crc kubenswrapper[4632]: I0313 10:09:58.965660 4632 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="40937936-798b-414f-b10d-bc9cd5536d78" Mar 13 10:09:59 crc kubenswrapper[4632]: I0313 10:09:59.464129 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 10:09:59 crc kubenswrapper[4632]: I0313 10:09:59.469175 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 10:09:59 crc kubenswrapper[4632]: I0313 10:09:59.720554 4632 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="894cdc70-0747-4975-a22f-0dbd657e91a3" Mar 13 10:09:59 crc kubenswrapper[4632]: I0313 10:09:59.720607 4632 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="894cdc70-0747-4975-a22f-0dbd657e91a3" Mar 13 10:09:59 crc kubenswrapper[4632]: I0313 10:09:59.725178 4632 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="40937936-798b-414f-b10d-bc9cd5536d78" Mar 13 10:09:59 crc kubenswrapper[4632]: I0313 10:09:59.727442 4632 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://985806ca915f8813ea8bb973aff61c5c2b66f7004966f949041798df4ba45a99" Mar 13 10:09:59 crc kubenswrapper[4632]: I0313 10:09:59.727475 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:09:59 crc kubenswrapper[4632]: I0313 10:09:59.727638 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 10:10:00 crc kubenswrapper[4632]: I0313 10:10:00.365712 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 10:10:00 crc 
kubenswrapper[4632]: I0313 10:10:00.729388 4632 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="894cdc70-0747-4975-a22f-0dbd657e91a3" Mar 13 10:10:00 crc kubenswrapper[4632]: I0313 10:10:00.729744 4632 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="894cdc70-0747-4975-a22f-0dbd657e91a3" Mar 13 10:10:00 crc kubenswrapper[4632]: I0313 10:10:00.733714 4632 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="40937936-798b-414f-b10d-bc9cd5536d78" Mar 13 10:10:08 crc kubenswrapper[4632]: I0313 10:10:08.200040 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 13 10:10:09 crc kubenswrapper[4632]: I0313 10:10:09.212185 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 13 10:10:09 crc kubenswrapper[4632]: I0313 10:10:09.339015 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 13 10:10:09 crc kubenswrapper[4632]: I0313 10:10:09.472930 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 13 10:10:09 crc kubenswrapper[4632]: I0313 10:10:09.503962 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 13 10:10:09 crc kubenswrapper[4632]: I0313 10:10:09.653500 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 13 10:10:09 crc kubenswrapper[4632]: I0313 10:10:09.820255 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 13 10:10:09 crc kubenswrapper[4632]: I0313 10:10:09.944366 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 10:10:09 crc kubenswrapper[4632]: I0313 10:10:09.980279 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 13 10:10:10 crc kubenswrapper[4632]: I0313 10:10:10.015309 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 13 10:10:10 crc kubenswrapper[4632]: I0313 10:10:10.301572 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 13 10:10:10 crc kubenswrapper[4632]: I0313 10:10:10.470062 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 13 10:10:10 crc kubenswrapper[4632]: I0313 10:10:10.487167 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 13 10:10:10 crc kubenswrapper[4632]: I0313 10:10:10.563044 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 13 10:10:10 crc kubenswrapper[4632]: I0313 10:10:10.608484 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 13 10:10:10 crc kubenswrapper[4632]: I0313 10:10:10.635830 4632 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 13 10:10:10 crc kubenswrapper[4632]: I0313 10:10:10.826590 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 13 10:10:10 crc kubenswrapper[4632]: I0313 10:10:10.878575 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 13 10:10:10 crc kubenswrapper[4632]: I0313 10:10:10.892154 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 10:10:10 crc kubenswrapper[4632]: I0313 10:10:10.970712 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 13 10:10:11 crc kubenswrapper[4632]: I0313 10:10:11.100787 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 13 10:10:11 crc kubenswrapper[4632]: I0313 10:10:11.239978 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 13 10:10:11 crc kubenswrapper[4632]: I0313 10:10:11.392994 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 13 10:10:11 crc kubenswrapper[4632]: I0313 10:10:11.591293 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 13 10:10:11 crc kubenswrapper[4632]: I0313 10:10:11.591673 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 13 10:10:11 crc kubenswrapper[4632]: I0313 10:10:11.604147 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 13 10:10:11 crc kubenswrapper[4632]: I0313 10:10:11.723371 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 13 10:10:11 crc kubenswrapper[4632]: I0313 10:10:11.750175 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 13 10:10:11 crc kubenswrapper[4632]: I0313 10:10:11.756140 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 13 10:10:11 crc kubenswrapper[4632]: I0313 10:10:11.845156 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 13 10:10:11 crc kubenswrapper[4632]: I0313 10:10:11.922550 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 13 10:10:11 crc kubenswrapper[4632]: I0313 10:10:11.975557 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 13 10:10:11 crc kubenswrapper[4632]: I0313 10:10:11.975713 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.037587 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.188821 4632 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.213862 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.241077 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.294920 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.317515 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.442500 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.623449 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.648497 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.679992 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.712567 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.715710 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.734773 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.750360 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.767491 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.843622 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.941301 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 13 10:10:12 crc kubenswrapper[4632]: I0313 10:10:12.973682 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.002688 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.052984 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.075100 4632 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.312283 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.357416 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.383806 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.468932 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.510032 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.730596 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.738575 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.745712 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.885578 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.896458 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.964067 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.972262 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.973130 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 13 10:10:13 crc kubenswrapper[4632]: I0313 10:10:13.980160 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 13 10:10:14 crc kubenswrapper[4632]: I0313 10:10:14.166801 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 13 10:10:14 crc kubenswrapper[4632]: I0313 10:10:14.231170 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 13 10:10:14 crc kubenswrapper[4632]: I0313 10:10:14.241061 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 13 10:10:14 crc kubenswrapper[4632]: I0313 10:10:14.319101 4632 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Mar 13 10:10:14 crc kubenswrapper[4632]: I0313 10:10:14.352865 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 10:10:14 crc kubenswrapper[4632]: I0313 10:10:14.478395 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 13 10:10:14 crc kubenswrapper[4632]: I0313 10:10:14.506671 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 13 10:10:14 crc kubenswrapper[4632]: I0313 10:10:14.560846 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 13 10:10:14 crc kubenswrapper[4632]: I0313 10:10:14.667909 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 13 10:10:14 crc kubenswrapper[4632]: I0313 10:10:14.704228 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 13 10:10:14 crc kubenswrapper[4632]: I0313 10:10:14.707630 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 13 10:10:14 crc kubenswrapper[4632]: I0313 10:10:14.796474 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 13 10:10:14 crc kubenswrapper[4632]: I0313 10:10:14.816826 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 13 10:10:14 crc kubenswrapper[4632]: I0313 10:10:14.883147 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 13 10:10:15 crc kubenswrapper[4632]: I0313 10:10:15.083676 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 13 10:10:15 crc kubenswrapper[4632]: I0313 10:10:15.083796 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 13 10:10:15 crc kubenswrapper[4632]: I0313 10:10:15.202270 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 13 10:10:15 crc kubenswrapper[4632]: I0313 10:10:15.234201 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 13 10:10:15 crc kubenswrapper[4632]: I0313 10:10:15.438044 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 13 10:10:15 crc kubenswrapper[4632]: I0313 10:10:15.441819 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 13 10:10:15 crc kubenswrapper[4632]: I0313 10:10:15.458597 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 10:10:15 crc kubenswrapper[4632]: I0313 10:10:15.481630 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 13 10:10:15 crc kubenswrapper[4632]: I0313 10:10:15.501257 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 13 10:10:15 crc kubenswrapper[4632]: I0313 
10:10:15.520572 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 13 10:10:15 crc kubenswrapper[4632]: I0313 10:10:15.631884 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 13 10:10:15 crc kubenswrapper[4632]: I0313 10:10:15.806578 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 13 10:10:15 crc kubenswrapper[4632]: I0313 10:10:15.874968 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 13 10:10:15 crc kubenswrapper[4632]: I0313 10:10:15.882891 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 13 10:10:15 crc kubenswrapper[4632]: I0313 10:10:15.921142 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 13 10:10:15 crc kubenswrapper[4632]: I0313 10:10:15.979962 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.046017 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.049034 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.122288 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.156302 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.178054 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.184667 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.186890 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.193958 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.211818 4632 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.223795 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.265845 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.321023 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.361882 4632 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.392332 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.440647 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.454335 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.463381 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.508922 4632 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.510869 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.510848116 podStartE2EDuration="41.510848116s" podCreationTimestamp="2026-03-13 10:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:09:58.71875137 +0000 UTC m=+372.741281513" watchObservedRunningTime="2026-03-13 10:10:16.510848116 +0000 UTC m=+390.533378249" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.518330 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.518428 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.527142 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.546479 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.546444216 podStartE2EDuration="18.546444216s" podCreationTimestamp="2026-03-13 10:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:10:16.542735966 +0000 UTC m=+390.565266119" watchObservedRunningTime="2026-03-13 10:10:16.546444216 +0000 UTC m=+390.568974349" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.666369 4632 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.674508 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.700048 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.718367 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.733528 4632 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.784585 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.797253 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.868227 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.881577 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.930005 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 13 10:10:16 crc kubenswrapper[4632]: I0313 10:10:16.947772 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.019012 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.024496 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.031870 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.060104 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556610-sg5bx"] Mar 13 10:10:17 crc kubenswrapper[4632]: E0313 10:10:17.060816 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" containerName="installer" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.060897 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" containerName="installer" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.061120 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc39d207-84a2-4a28-9296-bed684aa308d" containerName="installer" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.061726 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556610-sg5bx" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.064396 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.064652 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.071732 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.127325 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmrds\" (UniqueName: \"kubernetes.io/projected/795727b7-7a2e-4e97-8707-aecf893fd332-kube-api-access-tmrds\") pod \"auto-csr-approver-29556610-sg5bx\" (UID: \"795727b7-7a2e-4e97-8707-aecf893fd332\") " pod="openshift-infra/auto-csr-approver-29556610-sg5bx" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.168281 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.228295 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmrds\" (UniqueName: \"kubernetes.io/projected/795727b7-7a2e-4e97-8707-aecf893fd332-kube-api-access-tmrds\") pod \"auto-csr-approver-29556610-sg5bx\" (UID: \"795727b7-7a2e-4e97-8707-aecf893fd332\") " pod="openshift-infra/auto-csr-approver-29556610-sg5bx" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.257279 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmrds\" (UniqueName: \"kubernetes.io/projected/795727b7-7a2e-4e97-8707-aecf893fd332-kube-api-access-tmrds\") pod \"auto-csr-approver-29556610-sg5bx\" (UID: \"795727b7-7a2e-4e97-8707-aecf893fd332\") " pod="openshift-infra/auto-csr-approver-29556610-sg5bx" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.409755 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556610-sg5bx" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.459556 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.494829 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.521703 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.578384 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.612816 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.637127 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.756738 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.775605 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.798899 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.962222 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 13 10:10:17 crc kubenswrapper[4632]: I0313 10:10:17.978801 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.015216 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.074077 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.124248 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.153614 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.153679 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.222372 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.259905 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.295932 4632 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"etcd-serving-ca" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.311835 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.339888 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.384585 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.484925 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.492537 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.534564 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.672613 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.893518 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.920136 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.922106 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 13 10:10:18 crc kubenswrapper[4632]: I0313 10:10:18.937442 4632 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 13 10:10:19 crc kubenswrapper[4632]: I0313 10:10:19.051914 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 13 10:10:19 crc kubenswrapper[4632]: I0313 10:10:19.123188 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 10:10:19 crc kubenswrapper[4632]: I0313 10:10:19.125324 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 13 10:10:19 crc kubenswrapper[4632]: I0313 10:10:19.154853 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 13 10:10:19 crc kubenswrapper[4632]: I0313 10:10:19.483237 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 13 10:10:19 crc kubenswrapper[4632]: I0313 10:10:19.483630 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 13 10:10:19 crc kubenswrapper[4632]: I0313 10:10:19.558585 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 13 10:10:19 crc kubenswrapper[4632]: I0313 10:10:19.561204 4632 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 13 10:10:19 crc kubenswrapper[4632]: I0313 10:10:19.610295 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 13 10:10:19 crc kubenswrapper[4632]: I0313 10:10:19.636023 4632 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 13 10:10:19 crc kubenswrapper[4632]: I0313 10:10:19.638843 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 13 10:10:19 crc kubenswrapper[4632]: I0313 10:10:19.841070 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 13 10:10:19 crc kubenswrapper[4632]: I0313 10:10:19.860159 4632 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 13 10:10:19 crc kubenswrapper[4632]: I0313 10:10:19.892744 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 13 10:10:19 crc kubenswrapper[4632]: I0313 10:10:19.904540 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 13 10:10:19 crc kubenswrapper[4632]: I0313 10:10:19.917781 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.064226 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.109041 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.142527 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.159710 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.229691 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.302255 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.330760 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.415082 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.458089 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.506721 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.557438 4632 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.576687 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.635631 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.723127 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.725149 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556610-sg5bx"] Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.805033 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.883997 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.886485 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.887254 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 13 10:10:20 crc kubenswrapper[4632]: I0313 10:10:20.893815 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.050872 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.074786 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.081726 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.283737 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.337333 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.423264 4632 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.423633 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://80143265ebaee0f3a54053c9c203e48c0b8ae49b675b972452da2268c99bd9ad" gracePeriod=5 Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.439833 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 13 10:10:21 crc kubenswrapper[4632]: E0313 10:10:21.467291 4632 log.go:32] 
"RunPodSandbox from runtime service failed" err=< Mar 13 10:10:21 crc kubenswrapper[4632]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29556610-sg5bx_openshift-infra_795727b7-7a2e-4e97-8707-aecf893fd332_0(f44c58a66c3c09c74b677cf4ec9f76b205bf4b131d95b02c4a9987dbc9e9bd8a): error adding pod openshift-infra_auto-csr-approver-29556610-sg5bx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f44c58a66c3c09c74b677cf4ec9f76b205bf4b131d95b02c4a9987dbc9e9bd8a" Netns:"/var/run/netns/731b0fe9-200c-49e0-9ae2-97aa60457e8f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-infra;K8S_POD_NAME=auto-csr-approver-29556610-sg5bx;K8S_POD_INFRA_CONTAINER_ID=f44c58a66c3c09c74b677cf4ec9f76b205bf4b131d95b02c4a9987dbc9e9bd8a;K8S_POD_UID=795727b7-7a2e-4e97-8707-aecf893fd332" Path:"" ERRORED: error configuring pod [openshift-infra/auto-csr-approver-29556610-sg5bx] networking: Multus: [openshift-infra/auto-csr-approver-29556610-sg5bx/795727b7-7a2e-4e97-8707-aecf893fd332]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod auto-csr-approver-29556610-sg5bx in out of cluster comm: pod "auto-csr-approver-29556610-sg5bx" not found Mar 13 10:10:21 crc kubenswrapper[4632]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:10:21 crc kubenswrapper[4632]: > Mar 13 10:10:21 crc kubenswrapper[4632]: E0313 10:10:21.467366 4632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 10:10:21 crc kubenswrapper[4632]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29556610-sg5bx_openshift-infra_795727b7-7a2e-4e97-8707-aecf893fd332_0(f44c58a66c3c09c74b677cf4ec9f76b205bf4b131d95b02c4a9987dbc9e9bd8a): error adding pod openshift-infra_auto-csr-approver-29556610-sg5bx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f44c58a66c3c09c74b677cf4ec9f76b205bf4b131d95b02c4a9987dbc9e9bd8a" Netns:"/var/run/netns/731b0fe9-200c-49e0-9ae2-97aa60457e8f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-infra;K8S_POD_NAME=auto-csr-approver-29556610-sg5bx;K8S_POD_INFRA_CONTAINER_ID=f44c58a66c3c09c74b677cf4ec9f76b205bf4b131d95b02c4a9987dbc9e9bd8a;K8S_POD_UID=795727b7-7a2e-4e97-8707-aecf893fd332" Path:"" ERRORED: error configuring pod [openshift-infra/auto-csr-approver-29556610-sg5bx] networking: Multus: [openshift-infra/auto-csr-approver-29556610-sg5bx/795727b7-7a2e-4e97-8707-aecf893fd332]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod auto-csr-approver-29556610-sg5bx in out of cluster comm: pod "auto-csr-approver-29556610-sg5bx" not found Mar 13 10:10:21 crc kubenswrapper[4632]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:10:21 crc kubenswrapper[4632]: > pod="openshift-infra/auto-csr-approver-29556610-sg5bx" Mar 13 10:10:21 crc kubenswrapper[4632]: E0313 10:10:21.467456 4632 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 10:10:21 crc kubenswrapper[4632]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29556610-sg5bx_openshift-infra_795727b7-7a2e-4e97-8707-aecf893fd332_0(f44c58a66c3c09c74b677cf4ec9f76b205bf4b131d95b02c4a9987dbc9e9bd8a): error adding pod openshift-infra_auto-csr-approver-29556610-sg5bx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f44c58a66c3c09c74b677cf4ec9f76b205bf4b131d95b02c4a9987dbc9e9bd8a" Netns:"/var/run/netns/731b0fe9-200c-49e0-9ae2-97aa60457e8f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-infra;K8S_POD_NAME=auto-csr-approver-29556610-sg5bx;K8S_POD_INFRA_CONTAINER_ID=f44c58a66c3c09c74b677cf4ec9f76b205bf4b131d95b02c4a9987dbc9e9bd8a;K8S_POD_UID=795727b7-7a2e-4e97-8707-aecf893fd332" Path:"" ERRORED: error configuring pod [openshift-infra/auto-csr-approver-29556610-sg5bx] networking: Multus: [openshift-infra/auto-csr-approver-29556610-sg5bx/795727b7-7a2e-4e97-8707-aecf893fd332]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod auto-csr-approver-29556610-sg5bx in out of cluster comm: pod "auto-csr-approver-29556610-sg5bx" not found Mar 13 10:10:21 crc kubenswrapper[4632]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:10:21 crc kubenswrapper[4632]: > pod="openshift-infra/auto-csr-approver-29556610-sg5bx" Mar 13 10:10:21 crc kubenswrapper[4632]: E0313 10:10:21.467552 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29556610-sg5bx_openshift-infra(795727b7-7a2e-4e97-8707-aecf893fd332)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29556610-sg5bx_openshift-infra(795727b7-7a2e-4e97-8707-aecf893fd332)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29556610-sg5bx_openshift-infra_795727b7-7a2e-4e97-8707-aecf893fd332_0(f44c58a66c3c09c74b677cf4ec9f76b205bf4b131d95b02c4a9987dbc9e9bd8a): error adding pod openshift-infra_auto-csr-approver-29556610-sg5bx to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"f44c58a66c3c09c74b677cf4ec9f76b205bf4b131d95b02c4a9987dbc9e9bd8a\\\" Netns:\\\"/var/run/netns/731b0fe9-200c-49e0-9ae2-97aa60457e8f\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-infra;K8S_POD_NAME=auto-csr-approver-29556610-sg5bx;K8S_POD_INFRA_CONTAINER_ID=f44c58a66c3c09c74b677cf4ec9f76b205bf4b131d95b02c4a9987dbc9e9bd8a;K8S_POD_UID=795727b7-7a2e-4e97-8707-aecf893fd332\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-infra/auto-csr-approver-29556610-sg5bx] networking: Multus: [openshift-infra/auto-csr-approver-29556610-sg5bx/795727b7-7a2e-4e97-8707-aecf893fd332]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod auto-csr-approver-29556610-sg5bx in out of cluster comm: pod \\\"auto-csr-approver-29556610-sg5bx\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-infra/auto-csr-approver-29556610-sg5bx" podUID="795727b7-7a2e-4e97-8707-aecf893fd332" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.536794 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.558525 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.567434 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.624440 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.644349 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.690442 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.709858 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.737621 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.791064 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.855916 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.882166 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.893332 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556610-sg5bx" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.894620 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556610-sg5bx" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.932394 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.944588 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 13 10:10:21 crc kubenswrapper[4632]: I0313 10:10:21.949076 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 10:10:22 crc kubenswrapper[4632]: I0313 10:10:22.005231 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 13 10:10:22 crc kubenswrapper[4632]: I0313 10:10:22.058082 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 13 10:10:22 crc kubenswrapper[4632]: I0313 10:10:22.112706 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 13 10:10:22 crc kubenswrapper[4632]: I0313 10:10:22.295215 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 13 10:10:22 crc kubenswrapper[4632]: I0313 10:10:22.432358 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 13 10:10:22 crc kubenswrapper[4632]: I0313 10:10:22.628659 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 13 10:10:22 crc kubenswrapper[4632]: I0313 10:10:22.676903 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 13 10:10:22 crc kubenswrapper[4632]: I0313 10:10:22.687809 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 13 10:10:22 crc kubenswrapper[4632]: I0313 10:10:22.704980 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 13 10:10:22 crc kubenswrapper[4632]: I0313 10:10:22.748458 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 13 10:10:22 crc kubenswrapper[4632]: I0313 10:10:22.898560 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 13 10:10:23 crc kubenswrapper[4632]: I0313 10:10:23.015996 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 13 10:10:23 crc kubenswrapper[4632]: I0313 10:10:23.165505 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 13 10:10:23 crc kubenswrapper[4632]: I0313 10:10:23.268654 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 13 10:10:23 crc kubenswrapper[4632]: I0313 10:10:23.300148 4632 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 13 10:10:23 crc kubenswrapper[4632]: I0313 10:10:23.369241 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 13 10:10:23 crc kubenswrapper[4632]: I0313 10:10:23.669832 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 13 10:10:23 crc kubenswrapper[4632]: I0313 10:10:23.764464 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 10:10:23 crc kubenswrapper[4632]: I0313 10:10:23.852190 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 13 10:10:23 crc kubenswrapper[4632]: I0313 10:10:23.876113 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 13 10:10:24 crc kubenswrapper[4632]: I0313 10:10:24.051034 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 13 10:10:24 crc kubenswrapper[4632]: I0313 10:10:24.060739 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 10:10:24 crc kubenswrapper[4632]: I0313 10:10:24.121973 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 13 10:10:24 crc kubenswrapper[4632]: I0313 10:10:24.140001 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 13 10:10:24 crc kubenswrapper[4632]: I0313 10:10:24.144604 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 13 10:10:24 crc kubenswrapper[4632]: I0313 10:10:24.264838 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 13 10:10:24 crc kubenswrapper[4632]: I0313 10:10:24.321206 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556610-sg5bx"] Mar 13 10:10:24 crc kubenswrapper[4632]: I0313 10:10:24.331265 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 13 10:10:24 crc kubenswrapper[4632]: I0313 10:10:24.446900 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 13 10:10:24 crc kubenswrapper[4632]: I0313 10:10:24.539882 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 13 10:10:24 crc kubenswrapper[4632]: I0313 10:10:24.911572 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556610-sg5bx" event={"ID":"795727b7-7a2e-4e97-8707-aecf893fd332","Type":"ContainerStarted","Data":"10a2d90beda673bcf86de10163ece976f67cb0baa2a32060af9801525b02c6e6"} Mar 13 10:10:25 crc kubenswrapper[4632]: I0313 10:10:25.296863 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 13 10:10:25 crc kubenswrapper[4632]: I0313 10:10:25.918183 4632 generic.go:334] "Generic (PLEG): container finished" 
podID="795727b7-7a2e-4e97-8707-aecf893fd332" containerID="2858228a654d1c5c1b9a9a04d00ea882bfe929e6c810389040bc3c0ba67d7a46" exitCode=0 Mar 13 10:10:25 crc kubenswrapper[4632]: I0313 10:10:25.918379 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556610-sg5bx" event={"ID":"795727b7-7a2e-4e97-8707-aecf893fd332","Type":"ContainerDied","Data":"2858228a654d1c5c1b9a9a04d00ea882bfe929e6c810389040bc3c0ba67d7a46"} Mar 13 10:10:26 crc kubenswrapper[4632]: I0313 10:10:26.927269 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 13 10:10:26 crc kubenswrapper[4632]: I0313 10:10:26.927612 4632 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="80143265ebaee0f3a54053c9c203e48c0b8ae49b675b972452da2268c99bd9ad" exitCode=137 Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.008032 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.008154 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.105504 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.105589 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.105622 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.105753 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.105786 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.105844 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.105897 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.105978 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.105844 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.106240 4632 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.106268 4632 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.106337 4632 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.106349 4632 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.116808 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.167688 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556610-sg5bx" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.206959 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmrds\" (UniqueName: \"kubernetes.io/projected/795727b7-7a2e-4e97-8707-aecf893fd332-kube-api-access-tmrds\") pod \"795727b7-7a2e-4e97-8707-aecf893fd332\" (UID: \"795727b7-7a2e-4e97-8707-aecf893fd332\") " Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.207169 4632 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.211321 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/795727b7-7a2e-4e97-8707-aecf893fd332-kube-api-access-tmrds" (OuterVolumeSpecName: "kube-api-access-tmrds") pod "795727b7-7a2e-4e97-8707-aecf893fd332" (UID: "795727b7-7a2e-4e97-8707-aecf893fd332"). InnerVolumeSpecName "kube-api-access-tmrds". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.309313 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmrds\" (UniqueName: \"kubernetes.io/projected/795727b7-7a2e-4e97-8707-aecf893fd332-kube-api-access-tmrds\") on node \"crc\" DevicePath \"\"" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.938427 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.938588 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.938677 4632 scope.go:117] "RemoveContainer" containerID="80143265ebaee0f3a54053c9c203e48c0b8ae49b675b972452da2268c99bd9ad" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.944640 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556610-sg5bx" event={"ID":"795727b7-7a2e-4e97-8707-aecf893fd332","Type":"ContainerDied","Data":"10a2d90beda673bcf86de10163ece976f67cb0baa2a32060af9801525b02c6e6"} Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.944690 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10a2d90beda673bcf86de10163ece976f67cb0baa2a32060af9801525b02c6e6" Mar 13 10:10:27 crc kubenswrapper[4632]: I0313 10:10:27.944750 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556610-sg5bx" Mar 13 10:10:28 crc kubenswrapper[4632]: I0313 10:10:28.053817 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 13 10:10:28 crc kubenswrapper[4632]: I0313 10:10:28.054248 4632 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Mar 13 10:10:28 crc kubenswrapper[4632]: I0313 10:10:28.066070 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 13 10:10:28 crc kubenswrapper[4632]: I0313 10:10:28.066116 4632 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="79d3accd-e208-422a-b26e-bb6023b74edf" Mar 13 10:10:28 crc kubenswrapper[4632]: I0313 10:10:28.068873 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 13 10:10:28 crc kubenswrapper[4632]: I0313 10:10:28.068901 4632 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="79d3accd-e208-422a-b26e-bb6023b74edf" Mar 13 10:10:36 crc kubenswrapper[4632]: I0313 10:10:36.595132 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 13 10:10:38 crc kubenswrapper[4632]: I0313 10:10:38.064972 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 13 10:10:41 crc kubenswrapper[4632]: I0313 10:10:41.177826 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 13 10:10:43 crc kubenswrapper[4632]: I0313 10:10:43.222682 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 13 10:10:46 crc kubenswrapper[4632]: I0313 10:10:46.840237 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.080239 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6bkt"] Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.081912 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t6bkt" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" containerName="registry-server" containerID="cri-o://18dc7d50bd20cdcc8bd750dbb90186346da79c165a2b1398fb9fce2a58fd1224" gracePeriod=2 Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.280054 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xr5l9"] Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.280374 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xr5l9" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" containerName="registry-server" containerID="cri-o://f3ec0c7ec6e706f7309cf337b9ffdf829c043fe63c66f9db4fb7e26f23ae6edd" gracePeriod=2 Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.535550 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6bkt" Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.621419 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-catalog-content\") pod \"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e\" (UID: \"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e\") " Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.621486 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-utilities\") pod \"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e\" (UID: \"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e\") " Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.621540 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc6ff\" (UniqueName: \"kubernetes.io/projected/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-kube-api-access-bc6ff\") pod \"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e\" (UID: \"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e\") " Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.624007 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-utilities" (OuterVolumeSpecName: "utilities") pod "668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" (UID: "668c4640-0e5f-4c98-8b6e-dbdffdbfe14e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.630390 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-kube-api-access-bc6ff" (OuterVolumeSpecName: "kube-api-access-bc6ff") pod "668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" (UID: "668c4640-0e5f-4c98-8b6e-dbdffdbfe14e"). InnerVolumeSpecName "kube-api-access-bc6ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.647838 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" (UID: "668c4640-0e5f-4c98-8b6e-dbdffdbfe14e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.664837 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xr5l9" Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.722758 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87965e39-b879-4e26-9c8b-b78068c52aa0-utilities\") pod \"87965e39-b879-4e26-9c8b-b78068c52aa0\" (UID: \"87965e39-b879-4e26-9c8b-b78068c52aa0\") " Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.722824 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hfz2\" (UniqueName: \"kubernetes.io/projected/87965e39-b879-4e26-9c8b-b78068c52aa0-kube-api-access-6hfz2\") pod \"87965e39-b879-4e26-9c8b-b78068c52aa0\" (UID: \"87965e39-b879-4e26-9c8b-b78068c52aa0\") " Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.722897 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87965e39-b879-4e26-9c8b-b78068c52aa0-catalog-content\") pod \"87965e39-b879-4e26-9c8b-b78068c52aa0\" (UID: \"87965e39-b879-4e26-9c8b-b78068c52aa0\") " Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.723212 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc6ff\" (UniqueName: \"kubernetes.io/projected/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-kube-api-access-bc6ff\") on node \"crc\" DevicePath \"\"" Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.723238 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.723249 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.723881 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87965e39-b879-4e26-9c8b-b78068c52aa0-utilities" (OuterVolumeSpecName: "utilities") pod "87965e39-b879-4e26-9c8b-b78068c52aa0" (UID: "87965e39-b879-4e26-9c8b-b78068c52aa0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.728154 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87965e39-b879-4e26-9c8b-b78068c52aa0-kube-api-access-6hfz2" (OuterVolumeSpecName: "kube-api-access-6hfz2") pod "87965e39-b879-4e26-9c8b-b78068c52aa0" (UID: "87965e39-b879-4e26-9c8b-b78068c52aa0"). InnerVolumeSpecName "kube-api-access-6hfz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.824697 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87965e39-b879-4e26-9c8b-b78068c52aa0-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.824743 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hfz2\" (UniqueName: \"kubernetes.io/projected/87965e39-b879-4e26-9c8b-b78068c52aa0-kube-api-access-6hfz2\") on node \"crc\" DevicePath \"\"" Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.875893 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87965e39-b879-4e26-9c8b-b78068c52aa0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87965e39-b879-4e26-9c8b-b78068c52aa0" (UID: "87965e39-b879-4e26-9c8b-b78068c52aa0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:10:51 crc kubenswrapper[4632]: I0313 10:10:51.926719 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87965e39-b879-4e26-9c8b-b78068c52aa0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.126054 4632 generic.go:334] "Generic (PLEG): container finished" podID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" containerID="18dc7d50bd20cdcc8bd750dbb90186346da79c165a2b1398fb9fce2a58fd1224" exitCode=0 Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.126168 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6bkt" event={"ID":"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e","Type":"ContainerDied","Data":"18dc7d50bd20cdcc8bd750dbb90186346da79c165a2b1398fb9fce2a58fd1224"} Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.126217 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6bkt" event={"ID":"668c4640-0e5f-4c98-8b6e-dbdffdbfe14e","Type":"ContainerDied","Data":"3e3a79d99a0e6a35edab86938ccf523a35c4606e460775b549d1924f20dc4204"} Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.126237 4632 scope.go:117] "RemoveContainer" containerID="18dc7d50bd20cdcc8bd750dbb90186346da79c165a2b1398fb9fce2a58fd1224" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.126205 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6bkt" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.128469 4632 generic.go:334] "Generic (PLEG): container finished" podID="87965e39-b879-4e26-9c8b-b78068c52aa0" containerID="f3ec0c7ec6e706f7309cf337b9ffdf829c043fe63c66f9db4fb7e26f23ae6edd" exitCode=0 Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.128495 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr5l9" event={"ID":"87965e39-b879-4e26-9c8b-b78068c52aa0","Type":"ContainerDied","Data":"f3ec0c7ec6e706f7309cf337b9ffdf829c043fe63c66f9db4fb7e26f23ae6edd"} Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.128513 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr5l9" event={"ID":"87965e39-b879-4e26-9c8b-b78068c52aa0","Type":"ContainerDied","Data":"9d37b680fdc1d8687e48df9dab9cd8ad8fcee9b7cdb15c920f34a9cbf7bad5ef"} Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.128574 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xr5l9" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.175906 4632 scope.go:117] "RemoveContainer" containerID="9fa8755799160674c9b9254d4cc4cd33b06b805f90944344a392116f94021c66" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.208647 4632 scope.go:117] "RemoveContainer" containerID="01ae56b596391f3b7877c67539058596dbfd086754ed8db1f4f40f76d82a4c4c" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.215199 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xr5l9"] Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.218875 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xr5l9"] Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.233451 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6bkt"] Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.238138 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6bkt"] Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.253812 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.274675 4632 scope.go:117] "RemoveContainer" containerID="18dc7d50bd20cdcc8bd750dbb90186346da79c165a2b1398fb9fce2a58fd1224" Mar 13 10:10:52 crc kubenswrapper[4632]: E0313 10:10:52.275445 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18dc7d50bd20cdcc8bd750dbb90186346da79c165a2b1398fb9fce2a58fd1224\": container with ID starting with 18dc7d50bd20cdcc8bd750dbb90186346da79c165a2b1398fb9fce2a58fd1224 not found: ID does not exist" containerID="18dc7d50bd20cdcc8bd750dbb90186346da79c165a2b1398fb9fce2a58fd1224" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.275504 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18dc7d50bd20cdcc8bd750dbb90186346da79c165a2b1398fb9fce2a58fd1224"} err="failed to get container status \"18dc7d50bd20cdcc8bd750dbb90186346da79c165a2b1398fb9fce2a58fd1224\": rpc error: code = NotFound desc = could not find container \"18dc7d50bd20cdcc8bd750dbb90186346da79c165a2b1398fb9fce2a58fd1224\": container 
with ID starting with 18dc7d50bd20cdcc8bd750dbb90186346da79c165a2b1398fb9fce2a58fd1224 not found: ID does not exist" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.275539 4632 scope.go:117] "RemoveContainer" containerID="9fa8755799160674c9b9254d4cc4cd33b06b805f90944344a392116f94021c66" Mar 13 10:10:52 crc kubenswrapper[4632]: E0313 10:10:52.275912 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fa8755799160674c9b9254d4cc4cd33b06b805f90944344a392116f94021c66\": container with ID starting with 9fa8755799160674c9b9254d4cc4cd33b06b805f90944344a392116f94021c66 not found: ID does not exist" containerID="9fa8755799160674c9b9254d4cc4cd33b06b805f90944344a392116f94021c66" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.276027 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fa8755799160674c9b9254d4cc4cd33b06b805f90944344a392116f94021c66"} err="failed to get container status \"9fa8755799160674c9b9254d4cc4cd33b06b805f90944344a392116f94021c66\": rpc error: code = NotFound desc = could not find container \"9fa8755799160674c9b9254d4cc4cd33b06b805f90944344a392116f94021c66\": container with ID starting with 9fa8755799160674c9b9254d4cc4cd33b06b805f90944344a392116f94021c66 not found: ID does not exist" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.276059 4632 scope.go:117] "RemoveContainer" containerID="01ae56b596391f3b7877c67539058596dbfd086754ed8db1f4f40f76d82a4c4c" Mar 13 10:10:52 crc kubenswrapper[4632]: E0313 10:10:52.276351 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01ae56b596391f3b7877c67539058596dbfd086754ed8db1f4f40f76d82a4c4c\": container with ID starting with 01ae56b596391f3b7877c67539058596dbfd086754ed8db1f4f40f76d82a4c4c not found: ID does not exist" containerID="01ae56b596391f3b7877c67539058596dbfd086754ed8db1f4f40f76d82a4c4c" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.276378 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ae56b596391f3b7877c67539058596dbfd086754ed8db1f4f40f76d82a4c4c"} err="failed to get container status \"01ae56b596391f3b7877c67539058596dbfd086754ed8db1f4f40f76d82a4c4c\": rpc error: code = NotFound desc = could not find container \"01ae56b596391f3b7877c67539058596dbfd086754ed8db1f4f40f76d82a4c4c\": container with ID starting with 01ae56b596391f3b7877c67539058596dbfd086754ed8db1f4f40f76d82a4c4c not found: ID does not exist" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.276398 4632 scope.go:117] "RemoveContainer" containerID="f3ec0c7ec6e706f7309cf337b9ffdf829c043fe63c66f9db4fb7e26f23ae6edd" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.293427 4632 scope.go:117] "RemoveContainer" containerID="e3753ece91188912f076d532ed434a938805848842e1fdb1b100800bcaa42763" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.310741 4632 scope.go:117] "RemoveContainer" containerID="bafd48184b1b00528329793cfe1af87f0aa9502582cddad85d784407e60c249d" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.330906 4632 scope.go:117] "RemoveContainer" containerID="f3ec0c7ec6e706f7309cf337b9ffdf829c043fe63c66f9db4fb7e26f23ae6edd" Mar 13 10:10:52 crc kubenswrapper[4632]: E0313 10:10:52.331411 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f3ec0c7ec6e706f7309cf337b9ffdf829c043fe63c66f9db4fb7e26f23ae6edd\": container with ID starting with f3ec0c7ec6e706f7309cf337b9ffdf829c043fe63c66f9db4fb7e26f23ae6edd not found: ID does not exist" containerID="f3ec0c7ec6e706f7309cf337b9ffdf829c043fe63c66f9db4fb7e26f23ae6edd" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.331439 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3ec0c7ec6e706f7309cf337b9ffdf829c043fe63c66f9db4fb7e26f23ae6edd"} err="failed to get container status \"f3ec0c7ec6e706f7309cf337b9ffdf829c043fe63c66f9db4fb7e26f23ae6edd\": rpc error: code = NotFound desc = could not find container \"f3ec0c7ec6e706f7309cf337b9ffdf829c043fe63c66f9db4fb7e26f23ae6edd\": container with ID starting with f3ec0c7ec6e706f7309cf337b9ffdf829c043fe63c66f9db4fb7e26f23ae6edd not found: ID does not exist" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.331459 4632 scope.go:117] "RemoveContainer" containerID="e3753ece91188912f076d532ed434a938805848842e1fdb1b100800bcaa42763" Mar 13 10:10:52 crc kubenswrapper[4632]: E0313 10:10:52.331721 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3753ece91188912f076d532ed434a938805848842e1fdb1b100800bcaa42763\": container with ID starting with e3753ece91188912f076d532ed434a938805848842e1fdb1b100800bcaa42763 not found: ID does not exist" containerID="e3753ece91188912f076d532ed434a938805848842e1fdb1b100800bcaa42763" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.331762 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3753ece91188912f076d532ed434a938805848842e1fdb1b100800bcaa42763"} err="failed to get container status \"e3753ece91188912f076d532ed434a938805848842e1fdb1b100800bcaa42763\": rpc error: code = NotFound desc = could not find container \"e3753ece91188912f076d532ed434a938805848842e1fdb1b100800bcaa42763\": container with ID starting with e3753ece91188912f076d532ed434a938805848842e1fdb1b100800bcaa42763 not found: ID does not exist" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.331788 4632 scope.go:117] "RemoveContainer" containerID="bafd48184b1b00528329793cfe1af87f0aa9502582cddad85d784407e60c249d" Mar 13 10:10:52 crc kubenswrapper[4632]: E0313 10:10:52.332097 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bafd48184b1b00528329793cfe1af87f0aa9502582cddad85d784407e60c249d\": container with ID starting with bafd48184b1b00528329793cfe1af87f0aa9502582cddad85d784407e60c249d not found: ID does not exist" containerID="bafd48184b1b00528329793cfe1af87f0aa9502582cddad85d784407e60c249d" Mar 13 10:10:52 crc kubenswrapper[4632]: I0313 10:10:52.332121 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bafd48184b1b00528329793cfe1af87f0aa9502582cddad85d784407e60c249d"} err="failed to get container status \"bafd48184b1b00528329793cfe1af87f0aa9502582cddad85d784407e60c249d\": rpc error: code = NotFound desc = could not find container \"bafd48184b1b00528329793cfe1af87f0aa9502582cddad85d784407e60c249d\": container with ID starting with bafd48184b1b00528329793cfe1af87f0aa9502582cddad85d784407e60c249d not found: ID does not exist" Mar 13 10:10:54 crc kubenswrapper[4632]: I0313 10:10:54.056130 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" 
path="/var/lib/kubelet/pods/668c4640-0e5f-4c98-8b6e-dbdffdbfe14e/volumes" Mar 13 10:10:54 crc kubenswrapper[4632]: I0313 10:10:54.057820 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" path="/var/lib/kubelet/pods/87965e39-b879-4e26-9c8b-b78068c52aa0/volumes" Mar 13 10:10:54 crc kubenswrapper[4632]: I0313 10:10:54.573634 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8sl88"] Mar 13 10:11:10 crc kubenswrapper[4632]: I0313 10:11:10.461236 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:11:10 crc kubenswrapper[4632]: I0313 10:11:10.463163 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:11:19 crc kubenswrapper[4632]: I0313 10:11:19.599728 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" podUID="f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" containerName="oauth-openshift" containerID="cri-o://7595f698758f3f9ece4af82a45628aeb01bfa58cde4c80fefdeaf746f39aba12" gracePeriod=15 Mar 13 10:11:19 crc kubenswrapper[4632]: I0313 10:11:19.997931 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.034168 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x"] Mar 13 10:11:20 crc kubenswrapper[4632]: E0313 10:11:20.034447 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" containerName="extract-content" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.034460 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" containerName="extract-content" Mar 13 10:11:20 crc kubenswrapper[4632]: E0313 10:11:20.034473 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" containerName="registry-server" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.034480 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" containerName="registry-server" Mar 13 10:11:20 crc kubenswrapper[4632]: E0313 10:11:20.034490 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" containerName="extract-utilities" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.034497 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" containerName="extract-utilities" Mar 13 10:11:20 crc kubenswrapper[4632]: E0313 10:11:20.034507 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" containerName="extract-content" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.034514 4632 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" containerName="extract-content" Mar 13 10:11:20 crc kubenswrapper[4632]: E0313 10:11:20.034524 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" containerName="oauth-openshift" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.034530 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" containerName="oauth-openshift" Mar 13 10:11:20 crc kubenswrapper[4632]: E0313 10:11:20.034541 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795727b7-7a2e-4e97-8707-aecf893fd332" containerName="oc" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.034547 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="795727b7-7a2e-4e97-8707-aecf893fd332" containerName="oc" Mar 13 10:11:20 crc kubenswrapper[4632]: E0313 10:11:20.034558 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" containerName="extract-utilities" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.034564 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" containerName="extract-utilities" Mar 13 10:11:20 crc kubenswrapper[4632]: E0313 10:11:20.034571 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" containerName="registry-server" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.034593 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" containerName="registry-server" Mar 13 10:11:20 crc kubenswrapper[4632]: E0313 10:11:20.034607 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.034614 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.034724 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" containerName="oauth-openshift" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.034749 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="87965e39-b879-4e26-9c8b-b78068c52aa0" containerName="registry-server" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.034759 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="795727b7-7a2e-4e97-8707-aecf893fd332" containerName="oc" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.034769 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="668c4640-0e5f-4c98-8b6e-dbdffdbfe14e" containerName="registry-server" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.034782 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.035214 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.064884 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x"] Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.125825 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-ocp-branding-template\") pod \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.125882 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-session\") pod \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.125913 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-provider-selection\") pod \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.125957 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-idp-0-file-data\") pod \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.125988 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-audit-dir\") pod \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.126011 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-error\") pod \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.126053 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-cliconfig\") pod \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.126092 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-router-certs\") pod \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.126115 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nsdx\" (UniqueName: 
\"kubernetes.io/projected/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-kube-api-access-9nsdx\") pod \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.126136 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-login\") pod \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.126158 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-trusted-ca-bundle\") pod \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.126196 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-service-ca\") pod \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.126216 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-audit-policies\") pod \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.126242 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-serving-cert\") pod \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\" (UID: \"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a\") " Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.127835 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" (UID: "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.127901 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" (UID: "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.127964 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" (UID: "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.128789 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" (UID: "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.132553 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-kube-api-access-9nsdx" (OuterVolumeSpecName: "kube-api-access-9nsdx") pod "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" (UID: "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a"). InnerVolumeSpecName "kube-api-access-9nsdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.132620 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" (UID: "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.133317 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" (UID: "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.136909 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" (UID: "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.137594 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" (UID: "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.140295 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" (UID: "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.140908 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" (UID: "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.143091 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" (UID: "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.145101 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" (UID: "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.145839 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" (UID: "f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227140 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227194 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227213 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9sch\" (UniqueName: \"kubernetes.io/projected/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-kube-api-access-q9sch\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227244 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-session\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227260 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-user-template-login\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227278 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-audit-policies\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227310 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227331 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227349 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-service-ca\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227369 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-router-certs\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227384 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-user-template-error\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227398 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227414 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-audit-dir\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227430 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227464 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227474 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:20 crc 
kubenswrapper[4632]: I0313 10:11:20.227484 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227494 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227504 4632 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-audit-dir\") on node \"crc\" DevicePath \"\""
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227513 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227521 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227529 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227538 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nsdx\" (UniqueName: \"kubernetes.io/projected/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-kube-api-access-9nsdx\") on node \"crc\" DevicePath \"\""
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227546 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227557 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227567 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227577 4632 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-audit-policies\") on node \"crc\" DevicePath \"\""
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.227585 4632 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
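
The records above complete the teardown of the deleted oauth-openshift-558db77b4-8sl88 pod (UID f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a): for each volume the volume manager first runs UnmountVolume.TearDown, then marks the volume detached. ConfigMap, secret, projected, and host-path volumes are node-local, so DevicePath is empty and "detached" here is volume-manager bookkeeping rather than a storage-controller detach. A minimal sketch of that two-phase pattern, with invented types and names (the real flow lives in operation_generator.go and reconciler_common.go):

    // Sketch only: two-phase volume teardown loosely mirroring the kubelet's
    // operationExecutor. Types and names are hypothetical, for illustration.
    package main

    import "fmt"

    type vol struct{ name, plugin string }

    func main() {
        vols := []vol{
            {"v4-0-config-system-session", "kubernetes.io/secret"},
            {"v4-0-config-system-cliconfig", "kubernetes.io/configmap"},
        }
        for _, v := range vols {
            // Phase 1: UnmountVolume.TearDown removes the per-pod mount.
            fmt.Printf("UnmountVolume.TearDown succeeded for volume %q (plugin %s)\n", v.name, v.plugin)
            // Phase 2: the reconciler records the volume as detached; for
            // node-local plugins there is no block device, hence DevicePath "".
            fmt.Printf("Volume detached for volume %q on node \"crc\" DevicePath \"\"\n", v.name)
        }
    }
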
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.271240 4632 generic.go:334] "Generic (PLEG): container finished" podID="f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" containerID="7595f698758f3f9ece4af82a45628aeb01bfa58cde4c80fefdeaf746f39aba12" exitCode=0
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.271284 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" event={"ID":"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a","Type":"ContainerDied","Data":"7595f698758f3f9ece4af82a45628aeb01bfa58cde4c80fefdeaf746f39aba12"}
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.271310 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8sl88" event={"ID":"f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a","Type":"ContainerDied","Data":"46d5cd8b5a8d1e4d5e145a625b40cd39a2bdcba910908f1195bf38b9cf2ad7c8"}
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.271327 4632 scope.go:117] "RemoveContainer" containerID="7595f698758f3f9ece4af82a45628aeb01bfa58cde4c80fefdeaf746f39aba12"
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.271420 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8sl88"
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.294047 4632 scope.go:117] "RemoveContainer" containerID="7595f698758f3f9ece4af82a45628aeb01bfa58cde4c80fefdeaf746f39aba12"
Mar 13 10:11:20 crc kubenswrapper[4632]: E0313 10:11:20.294628 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7595f698758f3f9ece4af82a45628aeb01bfa58cde4c80fefdeaf746f39aba12\": container with ID starting with 7595f698758f3f9ece4af82a45628aeb01bfa58cde4c80fefdeaf746f39aba12 not found: ID does not exist" containerID="7595f698758f3f9ece4af82a45628aeb01bfa58cde4c80fefdeaf746f39aba12"
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.294663 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7595f698758f3f9ece4af82a45628aeb01bfa58cde4c80fefdeaf746f39aba12"} err="failed to get container status \"7595f698758f3f9ece4af82a45628aeb01bfa58cde4c80fefdeaf746f39aba12\": rpc error: code = NotFound desc = could not find container \"7595f698758f3f9ece4af82a45628aeb01bfa58cde4c80fefdeaf746f39aba12\": container with ID starting with 7595f698758f3f9ece4af82a45628aeb01bfa58cde4c80fefdeaf746f39aba12 not found: ID does not exist"
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.307475 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8sl88"]
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.311449 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8sl88"]
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.329141 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x"
Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.329212 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.329237 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9sch\" (UniqueName: \"kubernetes.io/projected/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-kube-api-access-q9sch\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.329318 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-session\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.329406 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-user-template-login\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.329450 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-audit-policies\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.329482 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.329526 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.329554 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-service-ca\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.329603 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-router-certs\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.329621 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-user-template-error\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.329638 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.329674 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-audit-dir\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.329694 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.330482 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-audit-dir\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.332183 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-service-ca\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.332197 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-audit-policies\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.332975 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.335057 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.336853 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.339237 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-user-template-error\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.339561 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.339562 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-router-certs\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.340582 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.342098 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-user-template-login\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.342433 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.343207 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-v4-0-config-system-session\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.347048 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9sch\" (UniqueName: \"kubernetes.io/projected/48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd-kube-api-access-q9sch\") pod \"oauth-openshift-75bb75cfd7-8sh2x\" (UID: \"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd\") " pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.364013 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:20 crc kubenswrapper[4632]: I0313 10:11:20.759205 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x"] Mar 13 10:11:20 crc kubenswrapper[4632]: W0313 10:11:20.767378 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48d2bc7e_c929_42c9_b3f2_9e78c7eac8cd.slice/crio-9668ce6a02f8a0451908f7868d18802f5139657e99d614dabe929ff79634f8e7 WatchSource:0}: Error finding container 9668ce6a02f8a0451908f7868d18802f5139657e99d614dabe929ff79634f8e7: Status 404 returned error can't find the container with id 9668ce6a02f8a0451908f7868d18802f5139657e99d614dabe929ff79634f8e7 Mar 13 10:11:21 crc kubenswrapper[4632]: I0313 10:11:21.276587 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" event={"ID":"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd","Type":"ContainerStarted","Data":"224837f104bcdbc6545d62209161e349a9d07cdcaf5c66e47c1de75b3af4b369"} Mar 13 10:11:21 crc kubenswrapper[4632]: I0313 10:11:21.277436 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:21 crc kubenswrapper[4632]: I0313 10:11:21.277456 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" event={"ID":"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd","Type":"ContainerStarted","Data":"9668ce6a02f8a0451908f7868d18802f5139657e99d614dabe929ff79634f8e7"} Mar 13 10:11:21 crc kubenswrapper[4632]: I0313 10:11:21.299139 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" podStartSLOduration=27.299120215 podStartE2EDuration="27.299120215s" podCreationTimestamp="2026-03-13 10:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:11:21.296162827 +0000 UTC m=+455.318692960" watchObservedRunningTime="2026-03-13 10:11:21.299120215 +0000 UTC m=+455.321650388" Mar 13 10:11:21 crc kubenswrapper[4632]: I0313 10:11:21.331693 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 10:11:22 crc kubenswrapper[4632]: I0313 10:11:22.052304 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a" path="/var/lib/kubelet/pods/f6f7bdd0-1aaa-48f1-a3b7-55bd145aec0a/volumes" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.420822 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sbzcm"] Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.422348 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.442052 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sbzcm"] Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.534228 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/121318f2-259b-4187-a348-1282d0e63995-trusted-ca\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.534483 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/121318f2-259b-4187-a348-1282d0e63995-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.534523 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/121318f2-259b-4187-a348-1282d0e63995-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.534550 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/121318f2-259b-4187-a348-1282d0e63995-registry-tls\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.534586 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.534684 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/121318f2-259b-4187-a348-1282d0e63995-bound-sa-token\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.534749 
4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/121318f2-259b-4187-a348-1282d0e63995-registry-certificates\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.534784 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zznsv\" (UniqueName: \"kubernetes.io/projected/121318f2-259b-4187-a348-1282d0e63995-kube-api-access-zznsv\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.586345 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.636116 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/121318f2-259b-4187-a348-1282d0e63995-registry-certificates\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.636164 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/121318f2-259b-4187-a348-1282d0e63995-bound-sa-token\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.636517 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zznsv\" (UniqueName: \"kubernetes.io/projected/121318f2-259b-4187-a348-1282d0e63995-kube-api-access-zznsv\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.636734 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/121318f2-259b-4187-a348-1282d0e63995-trusted-ca\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.637781 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/121318f2-259b-4187-a348-1282d0e63995-registry-certificates\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.638196 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/121318f2-259b-4187-a348-1282d0e63995-trusted-ca\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.636767 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/121318f2-259b-4187-a348-1282d0e63995-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.638302 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/121318f2-259b-4187-a348-1282d0e63995-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.638335 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/121318f2-259b-4187-a348-1282d0e63995-registry-tls\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.638334 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/121318f2-259b-4187-a348-1282d0e63995-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.644591 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/121318f2-259b-4187-a348-1282d0e63995-registry-tls\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.646691 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/121318f2-259b-4187-a348-1282d0e63995-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.654670 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zznsv\" (UniqueName: \"kubernetes.io/projected/121318f2-259b-4187-a348-1282d0e63995-kube-api-access-zznsv\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc kubenswrapper[4632]: I0313 10:11:36.658645 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/121318f2-259b-4187-a348-1282d0e63995-bound-sa-token\") pod \"image-registry-66df7c8f76-sbzcm\" (UID: \"121318f2-259b-4187-a348-1282d0e63995\") " pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:36 crc 
kubenswrapper[4632]: I0313 10:11:36.740197 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm"
Mar 13 10:11:37 crc kubenswrapper[4632]: I0313 10:11:37.237820 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sbzcm"]
Mar 13 10:11:37 crc kubenswrapper[4632]: W0313 10:11:37.249070 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod121318f2_259b_4187_a348_1282d0e63995.slice/crio-194c14a843a5324b5d9fa98672e41df35bc8eea13219d905d3a50858030e3537 WatchSource:0}: Error finding container 194c14a843a5324b5d9fa98672e41df35bc8eea13219d905d3a50858030e3537: Status 404 returned error can't find the container with id 194c14a843a5324b5d9fa98672e41df35bc8eea13219d905d3a50858030e3537
Mar 13 10:11:37 crc kubenswrapper[4632]: I0313 10:11:37.383577 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" event={"ID":"121318f2-259b-4187-a348-1282d0e63995","Type":"ContainerStarted","Data":"194c14a843a5324b5d9fa98672e41df35bc8eea13219d905d3a50858030e3537"}
Mar 13 10:11:38 crc kubenswrapper[4632]: I0313 10:11:38.391449 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" event={"ID":"121318f2-259b-4187-a348-1282d0e63995","Type":"ContainerStarted","Data":"d8762342ae6795e17a6078aa65bfe76494086e447cc848fc87a90a6d426b1bdf"}
Mar 13 10:11:38 crc kubenswrapper[4632]: I0313 10:11:38.392883 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm"
Mar 13 10:11:38 crc kubenswrapper[4632]: I0313 10:11:38.417629 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" podStartSLOduration=2.417611691 podStartE2EDuration="2.417611691s" podCreationTimestamp="2026-03-13 10:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:11:38.41494862 +0000 UTC m=+472.437478753" watchObservedRunningTime="2026-03-13 10:11:38.417611691 +0000 UTC m=+472.440141824"
Mar 13 10:11:40 crc kubenswrapper[4632]: I0313 10:11:40.461092 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 10:11:40 crc kubenswrapper[4632]: I0313 10:11:40.463057 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.028074 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p8wjg"]
Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.028587 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p8wjg" podUID="b11a7dff-bf08-44c3-b4f4-923119c13717" containerName="registry-server" containerID="cri-o://55bfc00a5732a457ecbee5c7be945027bdb42c0137a6b22125d44dafb5924f59" gracePeriod=30
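
Two distinct things are interleaved above. First, the kubelet's liveness prober is polling machine-config-daemon at http://127.0.0.1:8798/health and getting connection refused; each failure is logged and counted against the probe's failureThreshold before a restart would be triggered. Second, the "SyncLoop DELETE" / "Killing container with a grace period" pairs show graceful termination: the runtime is asked to stop each container and given gracePeriod=30 seconds before a forced kill. A minimal sketch of the HTTP-probe semantics, with the endpoint taken from the log and the timeout chosen arbitrarily for illustration:

    // Sketch of an HTTP liveness check: GET the endpoint and treat a refused
    // connection or a status outside 200-399 as failure, as the kubelet does.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func probe(url string) error {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unexpected status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://127.0.0.1:8798/health"); err != nil {
            fmt.Println("Probe failed:", err)
        }
    }
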
containerID="cri-o://55bfc00a5732a457ecbee5c7be945027bdb42c0137a6b22125d44dafb5924f59" gracePeriod=30 Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.041847 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jvh86"] Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.042232 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jvh86" podUID="bd46ae04-0610-4aa5-9385-dd45de66c5dd" containerName="registry-server" containerID="cri-o://5a10aa8d51646d1f515364874b0426c82d85f03f52a4924f31299cb0395b0607" gracePeriod=30 Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.077289 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2n99d"] Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.077697 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" podUID="797176c6-dd56-48d6-8004-ff1dd5353a50" containerName="marketplace-operator" containerID="cri-o://0fd5f07ae3c28f8c24cc66a585de93acc08f170fb621bbeb190cd66596980871" gracePeriod=30 Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.095225 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-txp2w"] Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.095468 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-txp2w" podUID="f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" containerName="registry-server" containerID="cri-o://d3932d25c3aaf08a595c2af7ee315a6a0b2efd503369ee7398e6b39ad609dc3c" gracePeriod=30 Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.117421 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z2gc7"] Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.117780 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z2gc7" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" containerName="registry-server" containerID="cri-o://f7b31d5849d6707802fb373a1fe6f70b7a45ddade6fd6d9f2c7e5319e74f32d3" gracePeriod=30 Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.130728 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d9n25"] Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.131447 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.174308 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d9n25"] Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.295564 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf5vg\" (UniqueName: \"kubernetes.io/projected/023be687-a773-401c-981b-e3d7136f53b6-kube-api-access-bf5vg\") pod \"marketplace-operator-79b997595-d9n25\" (UID: \"023be687-a773-401c-981b-e3d7136f53b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.295654 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/023be687-a773-401c-981b-e3d7136f53b6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d9n25\" (UID: \"023be687-a773-401c-981b-e3d7136f53b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.295702 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/023be687-a773-401c-981b-e3d7136f53b6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d9n25\" (UID: \"023be687-a773-401c-981b-e3d7136f53b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.397088 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf5vg\" (UniqueName: \"kubernetes.io/projected/023be687-a773-401c-981b-e3d7136f53b6-kube-api-access-bf5vg\") pod \"marketplace-operator-79b997595-d9n25\" (UID: \"023be687-a773-401c-981b-e3d7136f53b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.397181 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/023be687-a773-401c-981b-e3d7136f53b6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d9n25\" (UID: \"023be687-a773-401c-981b-e3d7136f53b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.397207 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/023be687-a773-401c-981b-e3d7136f53b6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d9n25\" (UID: \"023be687-a773-401c-981b-e3d7136f53b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.400120 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/023be687-a773-401c-981b-e3d7136f53b6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d9n25\" (UID: \"023be687-a773-401c-981b-e3d7136f53b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.423149 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/023be687-a773-401c-981b-e3d7136f53b6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d9n25\" (UID: \"023be687-a773-401c-981b-e3d7136f53b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.443815 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf5vg\" (UniqueName: \"kubernetes.io/projected/023be687-a773-401c-981b-e3d7136f53b6-kube-api-access-bf5vg\") pod \"marketplace-operator-79b997595-d9n25\" (UID: \"023be687-a773-401c-981b-e3d7136f53b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.455655 4632 generic.go:334] "Generic (PLEG): container finished" podID="797176c6-dd56-48d6-8004-ff1dd5353a50" containerID="0fd5f07ae3c28f8c24cc66a585de93acc08f170fb621bbeb190cd66596980871" exitCode=0 Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.455851 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" event={"ID":"797176c6-dd56-48d6-8004-ff1dd5353a50","Type":"ContainerDied","Data":"0fd5f07ae3c28f8c24cc66a585de93acc08f170fb621bbeb190cd66596980871"} Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.464335 4632 generic.go:334] "Generic (PLEG): container finished" podID="a110c276-8516-4f9e-a6af-d6837cd0f387" containerID="f7b31d5849d6707802fb373a1fe6f70b7a45ddade6fd6d9f2c7e5319e74f32d3" exitCode=0 Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.464421 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2gc7" event={"ID":"a110c276-8516-4f9e-a6af-d6837cd0f387","Type":"ContainerDied","Data":"f7b31d5849d6707802fb373a1fe6f70b7a45ddade6fd6d9f2c7e5319e74f32d3"} Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.469050 4632 generic.go:334] "Generic (PLEG): container finished" podID="b11a7dff-bf08-44c3-b4f4-923119c13717" containerID="55bfc00a5732a457ecbee5c7be945027bdb42c0137a6b22125d44dafb5924f59" exitCode=0 Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.469124 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8wjg" event={"ID":"b11a7dff-bf08-44c3-b4f4-923119c13717","Type":"ContainerDied","Data":"55bfc00a5732a457ecbee5c7be945027bdb42c0137a6b22125d44dafb5924f59"} Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.472154 4632 generic.go:334] "Generic (PLEG): container finished" podID="f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" containerID="d3932d25c3aaf08a595c2af7ee315a6a0b2efd503369ee7398e6b39ad609dc3c" exitCode=0 Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.472218 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-txp2w" event={"ID":"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87","Type":"ContainerDied","Data":"d3932d25c3aaf08a595c2af7ee315a6a0b2efd503369ee7398e6b39ad609dc3c"} Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.493042 4632 generic.go:334] "Generic (PLEG): container finished" podID="bd46ae04-0610-4aa5-9385-dd45de66c5dd" containerID="5a10aa8d51646d1f515364874b0426c82d85f03f52a4924f31299cb0395b0607" exitCode=0 Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.493166 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvh86" 
event={"ID":"bd46ae04-0610-4aa5-9385-dd45de66c5dd","Type":"ContainerDied","Data":"5a10aa8d51646d1f515364874b0426c82d85f03f52a4924f31299cb0395b0607"} Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.504641 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p8wjg" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.592378 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.599753 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11a7dff-bf08-44c3-b4f4-923119c13717-catalog-content\") pod \"b11a7dff-bf08-44c3-b4f4-923119c13717\" (UID: \"b11a7dff-bf08-44c3-b4f4-923119c13717\") " Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.599839 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11a7dff-bf08-44c3-b4f4-923119c13717-utilities\") pod \"b11a7dff-bf08-44c3-b4f4-923119c13717\" (UID: \"b11a7dff-bf08-44c3-b4f4-923119c13717\") " Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.599878 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlsv8\" (UniqueName: \"kubernetes.io/projected/b11a7dff-bf08-44c3-b4f4-923119c13717-kube-api-access-wlsv8\") pod \"b11a7dff-bf08-44c3-b4f4-923119c13717\" (UID: \"b11a7dff-bf08-44c3-b4f4-923119c13717\") " Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.604426 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11a7dff-bf08-44c3-b4f4-923119c13717-utilities" (OuterVolumeSpecName: "utilities") pod "b11a7dff-bf08-44c3-b4f4-923119c13717" (UID: "b11a7dff-bf08-44c3-b4f4-923119c13717"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.606090 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11a7dff-bf08-44c3-b4f4-923119c13717-kube-api-access-wlsv8" (OuterVolumeSpecName: "kube-api-access-wlsv8") pod "b11a7dff-bf08-44c3-b4f4-923119c13717" (UID: "b11a7dff-bf08-44c3-b4f4-923119c13717"). InnerVolumeSpecName "kube-api-access-wlsv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.689633 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.695282 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11a7dff-bf08-44c3-b4f4-923119c13717-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11a7dff-bf08-44c3-b4f4-923119c13717" (UID: "b11a7dff-bf08-44c3-b4f4-923119c13717"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.701318 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-utilities\") pod \"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87\" (UID: \"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87\") " Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.701394 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5jx8\" (UniqueName: \"kubernetes.io/projected/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-kube-api-access-z5jx8\") pod \"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87\" (UID: \"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87\") " Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.701494 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-catalog-content\") pod \"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87\" (UID: \"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87\") " Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.701751 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11a7dff-bf08-44c3-b4f4-923119c13717-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.701772 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11a7dff-bf08-44c3-b4f4-923119c13717-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.701784 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlsv8\" (UniqueName: \"kubernetes.io/projected/b11a7dff-bf08-44c3-b4f4-923119c13717-kube-api-access-wlsv8\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.703784 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-utilities" (OuterVolumeSpecName: "utilities") pod "f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" (UID: "f0cd0b7e-eded-4a51-8b1e-e67b9381bc87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.706842 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-kube-api-access-z5jx8" (OuterVolumeSpecName: "kube-api-access-z5jx8") pod "f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" (UID: "f0cd0b7e-eded-4a51-8b1e-e67b9381bc87"). InnerVolumeSpecName "kube-api-access-z5jx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.746001 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" (UID: "f0cd0b7e-eded-4a51-8b1e-e67b9381bc87"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.757133 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jvh86" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.774657 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z2gc7" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.803402 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5jx8\" (UniqueName: \"kubernetes.io/projected/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-kube-api-access-z5jx8\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.803443 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.803454 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.905866 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd46ae04-0610-4aa5-9385-dd45de66c5dd-catalog-content\") pod \"bd46ae04-0610-4aa5-9385-dd45de66c5dd\" (UID: \"bd46ae04-0610-4aa5-9385-dd45de66c5dd\") " Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.905951 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd46ae04-0610-4aa5-9385-dd45de66c5dd-utilities\") pod \"bd46ae04-0610-4aa5-9385-dd45de66c5dd\" (UID: \"bd46ae04-0610-4aa5-9385-dd45de66c5dd\") " Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.905994 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a110c276-8516-4f9e-a6af-d6837cd0f387-utilities\") pod \"a110c276-8516-4f9e-a6af-d6837cd0f387\" (UID: \"a110c276-8516-4f9e-a6af-d6837cd0f387\") " Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.906015 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kzjw\" (UniqueName: \"kubernetes.io/projected/bd46ae04-0610-4aa5-9385-dd45de66c5dd-kube-api-access-5kzjw\") pod \"bd46ae04-0610-4aa5-9385-dd45de66c5dd\" (UID: \"bd46ae04-0610-4aa5-9385-dd45de66c5dd\") " Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.906063 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfdk8\" (UniqueName: \"kubernetes.io/projected/a110c276-8516-4f9e-a6af-d6837cd0f387-kube-api-access-tfdk8\") pod \"a110c276-8516-4f9e-a6af-d6837cd0f387\" (UID: \"a110c276-8516-4f9e-a6af-d6837cd0f387\") " Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.906099 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a110c276-8516-4f9e-a6af-d6837cd0f387-catalog-content\") pod \"a110c276-8516-4f9e-a6af-d6837cd0f387\" (UID: \"a110c276-8516-4f9e-a6af-d6837cd0f387\") " Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.907860 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd46ae04-0610-4aa5-9385-dd45de66c5dd-utilities" (OuterVolumeSpecName: "utilities") pod "bd46ae04-0610-4aa5-9385-dd45de66c5dd" (UID: 
"bd46ae04-0610-4aa5-9385-dd45de66c5dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.908466 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a110c276-8516-4f9e-a6af-d6837cd0f387-utilities" (OuterVolumeSpecName: "utilities") pod "a110c276-8516-4f9e-a6af-d6837cd0f387" (UID: "a110c276-8516-4f9e-a6af-d6837cd0f387"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.914817 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a110c276-8516-4f9e-a6af-d6837cd0f387-kube-api-access-tfdk8" (OuterVolumeSpecName: "kube-api-access-tfdk8") pod "a110c276-8516-4f9e-a6af-d6837cd0f387" (UID: "a110c276-8516-4f9e-a6af-d6837cd0f387"). InnerVolumeSpecName "kube-api-access-tfdk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:11:41 crc kubenswrapper[4632]: I0313 10:11:41.947681 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd46ae04-0610-4aa5-9385-dd45de66c5dd-kube-api-access-5kzjw" (OuterVolumeSpecName: "kube-api-access-5kzjw") pod "bd46ae04-0610-4aa5-9385-dd45de66c5dd" (UID: "bd46ae04-0610-4aa5-9385-dd45de66c5dd"). InnerVolumeSpecName "kube-api-access-5kzjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.007727 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a110c276-8516-4f9e-a6af-d6837cd0f387-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.007755 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kzjw\" (UniqueName: \"kubernetes.io/projected/bd46ae04-0610-4aa5-9385-dd45de66c5dd-kube-api-access-5kzjw\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.007766 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfdk8\" (UniqueName: \"kubernetes.io/projected/a110c276-8516-4f9e-a6af-d6837cd0f387-kube-api-access-tfdk8\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.007775 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd46ae04-0610-4aa5-9385-dd45de66c5dd-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.009272 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.009773 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd46ae04-0610-4aa5-9385-dd45de66c5dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd46ae04-0610-4aa5-9385-dd45de66c5dd" (UID: "bd46ae04-0610-4aa5-9385-dd45de66c5dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.078111 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d9n25"] Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.108716 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/797176c6-dd56-48d6-8004-ff1dd5353a50-marketplace-operator-metrics\") pod \"797176c6-dd56-48d6-8004-ff1dd5353a50\" (UID: \"797176c6-dd56-48d6-8004-ff1dd5353a50\") " Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.108903 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qtrb\" (UniqueName: \"kubernetes.io/projected/797176c6-dd56-48d6-8004-ff1dd5353a50-kube-api-access-8qtrb\") pod \"797176c6-dd56-48d6-8004-ff1dd5353a50\" (UID: \"797176c6-dd56-48d6-8004-ff1dd5353a50\") " Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.108984 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/797176c6-dd56-48d6-8004-ff1dd5353a50-marketplace-trusted-ca\") pod \"797176c6-dd56-48d6-8004-ff1dd5353a50\" (UID: \"797176c6-dd56-48d6-8004-ff1dd5353a50\") " Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.109396 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd46ae04-0610-4aa5-9385-dd45de66c5dd-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.111108 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/797176c6-dd56-48d6-8004-ff1dd5353a50-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "797176c6-dd56-48d6-8004-ff1dd5353a50" (UID: "797176c6-dd56-48d6-8004-ff1dd5353a50"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.115109 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797176c6-dd56-48d6-8004-ff1dd5353a50-kube-api-access-8qtrb" (OuterVolumeSpecName: "kube-api-access-8qtrb") pod "797176c6-dd56-48d6-8004-ff1dd5353a50" (UID: "797176c6-dd56-48d6-8004-ff1dd5353a50"). InnerVolumeSpecName "kube-api-access-8qtrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.119473 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/797176c6-dd56-48d6-8004-ff1dd5353a50-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "797176c6-dd56-48d6-8004-ff1dd5353a50" (UID: "797176c6-dd56-48d6-8004-ff1dd5353a50"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.148757 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a110c276-8516-4f9e-a6af-d6837cd0f387-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a110c276-8516-4f9e-a6af-d6837cd0f387" (UID: "a110c276-8516-4f9e-a6af-d6837cd0f387"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.211125 4632 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/797176c6-dd56-48d6-8004-ff1dd5353a50-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.211173 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a110c276-8516-4f9e-a6af-d6837cd0f387-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.211195 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qtrb\" (UniqueName: \"kubernetes.io/projected/797176c6-dd56-48d6-8004-ff1dd5353a50-kube-api-access-8qtrb\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.211208 4632 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/797176c6-dd56-48d6-8004-ff1dd5353a50-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.500269 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" event={"ID":"023be687-a773-401c-981b-e3d7136f53b6","Type":"ContainerStarted","Data":"077a793783663fc0a0919deecac0d9a526695fee5caabb2c2420a407d92820f6"} Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.500361 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.500424 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" event={"ID":"023be687-a773-401c-981b-e3d7136f53b6","Type":"ContainerStarted","Data":"4a96453b15b7d7e2a7c4a1760c93d848d945f3f4fd15bd89c331bde321bc785c"} Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.503818 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2gc7" event={"ID":"a110c276-8516-4f9e-a6af-d6837cd0f387","Type":"ContainerDied","Data":"97491a7f994f5c8dffa29a28fb1914c53f3fb5687971c6cdb3d3b5b636967634"} Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.503884 4632 scope.go:117] "RemoveContainer" containerID="f7b31d5849d6707802fb373a1fe6f70b7a45ddade6fd6d9f2c7e5319e74f32d3" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.504065 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z2gc7" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.513746 4632 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-d9n25 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused" start-of-body= Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.513787 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" podUID="023be687-a773-401c-981b-e3d7136f53b6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.519598 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8wjg" event={"ID":"b11a7dff-bf08-44c3-b4f4-923119c13717","Type":"ContainerDied","Data":"67d236def43f1634b091443716f5df0abcd64ee4e8ef6768dd906ab3397df097"} Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.519711 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p8wjg" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.526659 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-txp2w" event={"ID":"f0cd0b7e-eded-4a51-8b1e-e67b9381bc87","Type":"ContainerDied","Data":"3442fa414f5a2c2798e2a9a29c903f3acac1f4e2b61c872fefc305318ea1c556"} Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.526746 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-txp2w" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.532568 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" event={"ID":"797176c6-dd56-48d6-8004-ff1dd5353a50","Type":"ContainerDied","Data":"f11fbb0ec92177c2b8cb772cacb63ff7d8a26b02bee6907aaa00dedbedf68d98"} Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.532838 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2n99d" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.538997 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvh86" event={"ID":"bd46ae04-0610-4aa5-9385-dd45de66c5dd","Type":"ContainerDied","Data":"c1e7366f3326cfd08308453ff8a94a3f8d3ce8ebc6a33b2bfafadd960643927e"} Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.539290 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jvh86" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.539882 4632 scope.go:117] "RemoveContainer" containerID="ef106624caa843911d5171f0d70f22c07e7e2bd19b6992932276ca1226b858e3" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.544416 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" podStartSLOduration=1.5444010430000001 podStartE2EDuration="1.544401043s" podCreationTimestamp="2026-03-13 10:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:11:42.533229578 +0000 UTC m=+476.555759731" watchObservedRunningTime="2026-03-13 10:11:42.544401043 +0000 UTC m=+476.566931176" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.569248 4632 scope.go:117] "RemoveContainer" containerID="0d073c1adaa82aa87cab8618a50587cfed8b79fe657e3f2878a87c7599c612fb" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.572200 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p8wjg"] Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.581622 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p8wjg"] Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.594023 4632 scope.go:117] "RemoveContainer" containerID="55bfc00a5732a457ecbee5c7be945027bdb42c0137a6b22125d44dafb5924f59" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.598376 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z2gc7"] Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.616873 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z2gc7"] Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.618917 4632 scope.go:117] "RemoveContainer" containerID="06491b70d16bc5a697f5518128f63de5fdeb769cc33d09d9262078f5aa75a5b8" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.621364 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2n99d"] Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.641267 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2n99d"] Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.656038 4632 scope.go:117] "RemoveContainer" containerID="31ed0687958629bbe6ae3de064bae07567e401a6f6f2576bf2e48b7390937742" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.656888 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-txp2w"] Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.661330 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-txp2w"] Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.669501 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jvh86"] Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.675001 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jvh86"] Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.679798 4632 scope.go:117] "RemoveContainer" containerID="d3932d25c3aaf08a595c2af7ee315a6a0b2efd503369ee7398e6b39ad609dc3c" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.702931 
4632 scope.go:117] "RemoveContainer" containerID="643ad1b648678ed35dcc10aaf9a844460c880f38f688c0da6821345eaf872208" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.729613 4632 scope.go:117] "RemoveContainer" containerID="d1da7a7847a6ff5346add9e3ed943cdc6232146978e6161d764011992ac73c84" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.747951 4632 scope.go:117] "RemoveContainer" containerID="0fd5f07ae3c28f8c24cc66a585de93acc08f170fb621bbeb190cd66596980871" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.770558 4632 scope.go:117] "RemoveContainer" containerID="5a10aa8d51646d1f515364874b0426c82d85f03f52a4924f31299cb0395b0607" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.792341 4632 scope.go:117] "RemoveContainer" containerID="ba15fa8797c3390ead2f6a2f6b5a64ad766bc4a942dfc13cbdc76a3242dd09c0" Mar 13 10:11:42 crc kubenswrapper[4632]: I0313 10:11:42.823457 4632 scope.go:117] "RemoveContainer" containerID="eabb475f877c5898896f887fa631fab417c1e3579d0424b2b6c06f4278f091af" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245279 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7ksc5"] Mar 13 10:11:43 crc kubenswrapper[4632]: E0313 10:11:43.245515 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd46ae04-0610-4aa5-9385-dd45de66c5dd" containerName="extract-content" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245542 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd46ae04-0610-4aa5-9385-dd45de66c5dd" containerName="extract-content" Mar 13 10:11:43 crc kubenswrapper[4632]: E0313 10:11:43.245556 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" containerName="extract-content" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245564 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" containerName="extract-content" Mar 13 10:11:43 crc kubenswrapper[4632]: E0313 10:11:43.245572 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" containerName="extract-utilities" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245581 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" containerName="extract-utilities" Mar 13 10:11:43 crc kubenswrapper[4632]: E0313 10:11:43.245592 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11a7dff-bf08-44c3-b4f4-923119c13717" containerName="registry-server" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245599 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11a7dff-bf08-44c3-b4f4-923119c13717" containerName="registry-server" Mar 13 10:11:43 crc kubenswrapper[4632]: E0313 10:11:43.245608 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11a7dff-bf08-44c3-b4f4-923119c13717" containerName="extract-utilities" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245615 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11a7dff-bf08-44c3-b4f4-923119c13717" containerName="extract-utilities" Mar 13 10:11:43 crc kubenswrapper[4632]: E0313 10:11:43.245625 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd46ae04-0610-4aa5-9385-dd45de66c5dd" containerName="extract-utilities" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245632 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd46ae04-0610-4aa5-9385-dd45de66c5dd" 
containerName="extract-utilities" Mar 13 10:11:43 crc kubenswrapper[4632]: E0313 10:11:43.245642 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" containerName="registry-server" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245649 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" containerName="registry-server" Mar 13 10:11:43 crc kubenswrapper[4632]: E0313 10:11:43.245662 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd46ae04-0610-4aa5-9385-dd45de66c5dd" containerName="registry-server" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245669 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd46ae04-0610-4aa5-9385-dd45de66c5dd" containerName="registry-server" Mar 13 10:11:43 crc kubenswrapper[4632]: E0313 10:11:43.245678 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797176c6-dd56-48d6-8004-ff1dd5353a50" containerName="marketplace-operator" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245685 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="797176c6-dd56-48d6-8004-ff1dd5353a50" containerName="marketplace-operator" Mar 13 10:11:43 crc kubenswrapper[4632]: E0313 10:11:43.245695 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" containerName="extract-content" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245702 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" containerName="extract-content" Mar 13 10:11:43 crc kubenswrapper[4632]: E0313 10:11:43.245713 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" containerName="registry-server" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245721 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" containerName="registry-server" Mar 13 10:11:43 crc kubenswrapper[4632]: E0313 10:11:43.245731 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" containerName="extract-utilities" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245739 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" containerName="extract-utilities" Mar 13 10:11:43 crc kubenswrapper[4632]: E0313 10:11:43.245750 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11a7dff-bf08-44c3-b4f4-923119c13717" containerName="extract-content" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245758 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11a7dff-bf08-44c3-b4f4-923119c13717" containerName="extract-content" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245863 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="797176c6-dd56-48d6-8004-ff1dd5353a50" containerName="marketplace-operator" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245891 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" containerName="registry-server" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245903 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="b11a7dff-bf08-44c3-b4f4-923119c13717" containerName="registry-server" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245918 4632 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="bd46ae04-0610-4aa5-9385-dd45de66c5dd" containerName="registry-server" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.245931 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" containerName="registry-server" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.246846 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7ksc5" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.249392 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.255149 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7ksc5"] Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.325915 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa3faab-9e82-4fde-afff-3de6939a17d1-utilities\") pod \"certified-operators-7ksc5\" (UID: \"0fa3faab-9e82-4fde-afff-3de6939a17d1\") " pod="openshift-marketplace/certified-operators-7ksc5" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.326032 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa3faab-9e82-4fde-afff-3de6939a17d1-catalog-content\") pod \"certified-operators-7ksc5\" (UID: \"0fa3faab-9e82-4fde-afff-3de6939a17d1\") " pod="openshift-marketplace/certified-operators-7ksc5" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.326120 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72v2p\" (UniqueName: \"kubernetes.io/projected/0fa3faab-9e82-4fde-afff-3de6939a17d1-kube-api-access-72v2p\") pod \"certified-operators-7ksc5\" (UID: \"0fa3faab-9e82-4fde-afff-3de6939a17d1\") " pod="openshift-marketplace/certified-operators-7ksc5" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.427979 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa3faab-9e82-4fde-afff-3de6939a17d1-utilities\") pod \"certified-operators-7ksc5\" (UID: \"0fa3faab-9e82-4fde-afff-3de6939a17d1\") " pod="openshift-marketplace/certified-operators-7ksc5" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.428064 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa3faab-9e82-4fde-afff-3de6939a17d1-catalog-content\") pod \"certified-operators-7ksc5\" (UID: \"0fa3faab-9e82-4fde-afff-3de6939a17d1\") " pod="openshift-marketplace/certified-operators-7ksc5" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.428147 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72v2p\" (UniqueName: \"kubernetes.io/projected/0fa3faab-9e82-4fde-afff-3de6939a17d1-kube-api-access-72v2p\") pod \"certified-operators-7ksc5\" (UID: \"0fa3faab-9e82-4fde-afff-3de6939a17d1\") " pod="openshift-marketplace/certified-operators-7ksc5" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.429376 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa3faab-9e82-4fde-afff-3de6939a17d1-catalog-content\") pod 
\"certified-operators-7ksc5\" (UID: \"0fa3faab-9e82-4fde-afff-3de6939a17d1\") " pod="openshift-marketplace/certified-operators-7ksc5" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.429471 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa3faab-9e82-4fde-afff-3de6939a17d1-utilities\") pod \"certified-operators-7ksc5\" (UID: \"0fa3faab-9e82-4fde-afff-3de6939a17d1\") " pod="openshift-marketplace/certified-operators-7ksc5" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.448054 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72v2p\" (UniqueName: \"kubernetes.io/projected/0fa3faab-9e82-4fde-afff-3de6939a17d1-kube-api-access-72v2p\") pod \"certified-operators-7ksc5\" (UID: \"0fa3faab-9e82-4fde-afff-3de6939a17d1\") " pod="openshift-marketplace/certified-operators-7ksc5" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.449038 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gdt8x"] Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.450400 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gdt8x" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.452001 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.472631 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gdt8x"] Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.529647 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6s75\" (UniqueName: \"kubernetes.io/projected/f7f61b75-16bf-4c5a-be30-c88d155c203f-kube-api-access-k6s75\") pod \"redhat-marketplace-gdt8x\" (UID: \"f7f61b75-16bf-4c5a-be30-c88d155c203f\") " pod="openshift-marketplace/redhat-marketplace-gdt8x" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.529715 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f61b75-16bf-4c5a-be30-c88d155c203f-utilities\") pod \"redhat-marketplace-gdt8x\" (UID: \"f7f61b75-16bf-4c5a-be30-c88d155c203f\") " pod="openshift-marketplace/redhat-marketplace-gdt8x" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.529739 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f61b75-16bf-4c5a-be30-c88d155c203f-catalog-content\") pod \"redhat-marketplace-gdt8x\" (UID: \"f7f61b75-16bf-4c5a-be30-c88d155c203f\") " pod="openshift-marketplace/redhat-marketplace-gdt8x" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.552250 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.575556 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7ksc5" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.631837 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6s75\" (UniqueName: \"kubernetes.io/projected/f7f61b75-16bf-4c5a-be30-c88d155c203f-kube-api-access-k6s75\") pod \"redhat-marketplace-gdt8x\" (UID: \"f7f61b75-16bf-4c5a-be30-c88d155c203f\") " pod="openshift-marketplace/redhat-marketplace-gdt8x" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.631924 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f61b75-16bf-4c5a-be30-c88d155c203f-utilities\") pod \"redhat-marketplace-gdt8x\" (UID: \"f7f61b75-16bf-4c5a-be30-c88d155c203f\") " pod="openshift-marketplace/redhat-marketplace-gdt8x" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.632029 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f61b75-16bf-4c5a-be30-c88d155c203f-catalog-content\") pod \"redhat-marketplace-gdt8x\" (UID: \"f7f61b75-16bf-4c5a-be30-c88d155c203f\") " pod="openshift-marketplace/redhat-marketplace-gdt8x" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.632577 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f61b75-16bf-4c5a-be30-c88d155c203f-catalog-content\") pod \"redhat-marketplace-gdt8x\" (UID: \"f7f61b75-16bf-4c5a-be30-c88d155c203f\") " pod="openshift-marketplace/redhat-marketplace-gdt8x" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.633291 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f61b75-16bf-4c5a-be30-c88d155c203f-utilities\") pod \"redhat-marketplace-gdt8x\" (UID: \"f7f61b75-16bf-4c5a-be30-c88d155c203f\") " pod="openshift-marketplace/redhat-marketplace-gdt8x" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.657726 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6s75\" (UniqueName: \"kubernetes.io/projected/f7f61b75-16bf-4c5a-be30-c88d155c203f-kube-api-access-k6s75\") pod \"redhat-marketplace-gdt8x\" (UID: \"f7f61b75-16bf-4c5a-be30-c88d155c203f\") " pod="openshift-marketplace/redhat-marketplace-gdt8x" Mar 13 10:11:43 crc kubenswrapper[4632]: I0313 10:11:43.787227 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gdt8x" Mar 13 10:11:44 crc kubenswrapper[4632]: I0313 10:11:44.029081 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7ksc5"] Mar 13 10:11:44 crc kubenswrapper[4632]: I0313 10:11:44.038313 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gdt8x"] Mar 13 10:11:44 crc kubenswrapper[4632]: W0313 10:11:44.041208 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fa3faab_9e82_4fde_afff_3de6939a17d1.slice/crio-05d058946411b3aa47f2245f16be7366b96dd2370570fc574b5f1accea9ab081 WatchSource:0}: Error finding container 05d058946411b3aa47f2245f16be7366b96dd2370570fc574b5f1accea9ab081: Status 404 returned error can't find the container with id 05d058946411b3aa47f2245f16be7366b96dd2370570fc574b5f1accea9ab081 Mar 13 10:11:44 crc kubenswrapper[4632]: I0313 10:11:44.054147 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="797176c6-dd56-48d6-8004-ff1dd5353a50" path="/var/lib/kubelet/pods/797176c6-dd56-48d6-8004-ff1dd5353a50/volumes" Mar 13 10:11:44 crc kubenswrapper[4632]: I0313 10:11:44.055710 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a110c276-8516-4f9e-a6af-d6837cd0f387" path="/var/lib/kubelet/pods/a110c276-8516-4f9e-a6af-d6837cd0f387/volumes" Mar 13 10:11:44 crc kubenswrapper[4632]: I0313 10:11:44.058292 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11a7dff-bf08-44c3-b4f4-923119c13717" path="/var/lib/kubelet/pods/b11a7dff-bf08-44c3-b4f4-923119c13717/volumes" Mar 13 10:11:44 crc kubenswrapper[4632]: W0313 10:11:44.058313 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7f61b75_16bf_4c5a_be30_c88d155c203f.slice/crio-0a9cc2740ac0ee6593eb911e246dbb00bc5e6cd7aaf49b2f0c50b4d82a976c9f WatchSource:0}: Error finding container 0a9cc2740ac0ee6593eb911e246dbb00bc5e6cd7aaf49b2f0c50b4d82a976c9f: Status 404 returned error can't find the container with id 0a9cc2740ac0ee6593eb911e246dbb00bc5e6cd7aaf49b2f0c50b4d82a976c9f Mar 13 10:11:44 crc kubenswrapper[4632]: I0313 10:11:44.062024 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd46ae04-0610-4aa5-9385-dd45de66c5dd" path="/var/lib/kubelet/pods/bd46ae04-0610-4aa5-9385-dd45de66c5dd/volumes" Mar 13 10:11:44 crc kubenswrapper[4632]: I0313 10:11:44.063552 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0cd0b7e-eded-4a51-8b1e-e67b9381bc87" path="/var/lib/kubelet/pods/f0cd0b7e-eded-4a51-8b1e-e67b9381bc87/volumes" Mar 13 10:11:44 crc kubenswrapper[4632]: I0313 10:11:44.559401 4632 generic.go:334] "Generic (PLEG): container finished" podID="f7f61b75-16bf-4c5a-be30-c88d155c203f" containerID="ea39ae0061ed4982483c38e5508ad6a45dafe3af038369c1ee929d7fd6c1f92c" exitCode=0 Mar 13 10:11:44 crc kubenswrapper[4632]: I0313 10:11:44.559777 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gdt8x" event={"ID":"f7f61b75-16bf-4c5a-be30-c88d155c203f","Type":"ContainerDied","Data":"ea39ae0061ed4982483c38e5508ad6a45dafe3af038369c1ee929d7fd6c1f92c"} Mar 13 10:11:44 crc kubenswrapper[4632]: I0313 10:11:44.559814 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gdt8x" 
event={"ID":"f7f61b75-16bf-4c5a-be30-c88d155c203f","Type":"ContainerStarted","Data":"0a9cc2740ac0ee6593eb911e246dbb00bc5e6cd7aaf49b2f0c50b4d82a976c9f"} Mar 13 10:11:44 crc kubenswrapper[4632]: I0313 10:11:44.562769 4632 generic.go:334] "Generic (PLEG): container finished" podID="0fa3faab-9e82-4fde-afff-3de6939a17d1" containerID="a8c4b44ced612e786b4a2c37400abd9d40040bd937d3c7c8a679c1986d317cb9" exitCode=0 Mar 13 10:11:44 crc kubenswrapper[4632]: I0313 10:11:44.562825 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ksc5" event={"ID":"0fa3faab-9e82-4fde-afff-3de6939a17d1","Type":"ContainerDied","Data":"a8c4b44ced612e786b4a2c37400abd9d40040bd937d3c7c8a679c1986d317cb9"} Mar 13 10:11:44 crc kubenswrapper[4632]: I0313 10:11:44.562900 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ksc5" event={"ID":"0fa3faab-9e82-4fde-afff-3de6939a17d1","Type":"ContainerStarted","Data":"05d058946411b3aa47f2245f16be7366b96dd2370570fc574b5f1accea9ab081"} Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.569832 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gdt8x" event={"ID":"f7f61b75-16bf-4c5a-be30-c88d155c203f","Type":"ContainerStarted","Data":"9d93b23ee83cf056c79c74de74877ea34c2e598d1e294be73d5438be718035cc"} Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.661048 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vwgfr"] Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.662597 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vwgfr" Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.665296 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.668111 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vwgfr"] Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.762719 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5cc71d2-1901-4778-8e20-93646cfc1a85-utilities\") pod \"redhat-operators-vwgfr\" (UID: \"f5cc71d2-1901-4778-8e20-93646cfc1a85\") " pod="openshift-marketplace/redhat-operators-vwgfr" Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.762770 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5cc71d2-1901-4778-8e20-93646cfc1a85-catalog-content\") pod \"redhat-operators-vwgfr\" (UID: \"f5cc71d2-1901-4778-8e20-93646cfc1a85\") " pod="openshift-marketplace/redhat-operators-vwgfr" Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.762826 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkq4h\" (UniqueName: \"kubernetes.io/projected/f5cc71d2-1901-4778-8e20-93646cfc1a85-kube-api-access-qkq4h\") pod \"redhat-operators-vwgfr\" (UID: \"f5cc71d2-1901-4778-8e20-93646cfc1a85\") " pod="openshift-marketplace/redhat-operators-vwgfr" Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.849575 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lgwff"] Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.852171 
4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lgwff" Mar 13 10:11:45 crc kubenswrapper[4632]: W0313 10:11:45.857125 4632 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: secrets "community-operators-dockercfg-dmngl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Mar 13 10:11:45 crc kubenswrapper[4632]: E0313 10:11:45.857412 4632 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"community-operators-dockercfg-dmngl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.863576 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkq4h\" (UniqueName: \"kubernetes.io/projected/f5cc71d2-1901-4778-8e20-93646cfc1a85-kube-api-access-qkq4h\") pod \"redhat-operators-vwgfr\" (UID: \"f5cc71d2-1901-4778-8e20-93646cfc1a85\") " pod="openshift-marketplace/redhat-operators-vwgfr" Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.863652 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5cc71d2-1901-4778-8e20-93646cfc1a85-utilities\") pod \"redhat-operators-vwgfr\" (UID: \"f5cc71d2-1901-4778-8e20-93646cfc1a85\") " pod="openshift-marketplace/redhat-operators-vwgfr" Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.863693 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5cc71d2-1901-4778-8e20-93646cfc1a85-catalog-content\") pod \"redhat-operators-vwgfr\" (UID: \"f5cc71d2-1901-4778-8e20-93646cfc1a85\") " pod="openshift-marketplace/redhat-operators-vwgfr" Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.864300 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5cc71d2-1901-4778-8e20-93646cfc1a85-catalog-content\") pod \"redhat-operators-vwgfr\" (UID: \"f5cc71d2-1901-4778-8e20-93646cfc1a85\") " pod="openshift-marketplace/redhat-operators-vwgfr" Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.864367 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5cc71d2-1901-4778-8e20-93646cfc1a85-utilities\") pod \"redhat-operators-vwgfr\" (UID: \"f5cc71d2-1901-4778-8e20-93646cfc1a85\") " pod="openshift-marketplace/redhat-operators-vwgfr" Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.877404 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lgwff"] Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.912072 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkq4h\" (UniqueName: \"kubernetes.io/projected/f5cc71d2-1901-4778-8e20-93646cfc1a85-kube-api-access-qkq4h\") pod \"redhat-operators-vwgfr\" (UID: \"f5cc71d2-1901-4778-8e20-93646cfc1a85\") " pod="openshift-marketplace/redhat-operators-vwgfr" Mar 13 10:11:45 
crc kubenswrapper[4632]: I0313 10:11:45.965281 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-utilities\") pod \"community-operators-lgwff\" (UID: \"0d0fc567-0682-4bbc-981b-b4d1df62aa4e\") " pod="openshift-marketplace/community-operators-lgwff" Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.965415 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw5gz\" (UniqueName: \"kubernetes.io/projected/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-kube-api-access-sw5gz\") pod \"community-operators-lgwff\" (UID: \"0d0fc567-0682-4bbc-981b-b4d1df62aa4e\") " pod="openshift-marketplace/community-operators-lgwff" Mar 13 10:11:45 crc kubenswrapper[4632]: I0313 10:11:45.965463 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-catalog-content\") pod \"community-operators-lgwff\" (UID: \"0d0fc567-0682-4bbc-981b-b4d1df62aa4e\") " pod="openshift-marketplace/community-operators-lgwff" Mar 13 10:11:46 crc kubenswrapper[4632]: I0313 10:11:46.028672 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vwgfr" Mar 13 10:11:46 crc kubenswrapper[4632]: I0313 10:11:46.066791 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw5gz\" (UniqueName: \"kubernetes.io/projected/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-kube-api-access-sw5gz\") pod \"community-operators-lgwff\" (UID: \"0d0fc567-0682-4bbc-981b-b4d1df62aa4e\") " pod="openshift-marketplace/community-operators-lgwff" Mar 13 10:11:46 crc kubenswrapper[4632]: I0313 10:11:46.066861 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-catalog-content\") pod \"community-operators-lgwff\" (UID: \"0d0fc567-0682-4bbc-981b-b4d1df62aa4e\") " pod="openshift-marketplace/community-operators-lgwff" Mar 13 10:11:46 crc kubenswrapper[4632]: I0313 10:11:46.067326 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-catalog-content\") pod \"community-operators-lgwff\" (UID: \"0d0fc567-0682-4bbc-981b-b4d1df62aa4e\") " pod="openshift-marketplace/community-operators-lgwff" Mar 13 10:11:46 crc kubenswrapper[4632]: I0313 10:11:46.069056 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-utilities\") pod \"community-operators-lgwff\" (UID: \"0d0fc567-0682-4bbc-981b-b4d1df62aa4e\") " pod="openshift-marketplace/community-operators-lgwff" Mar 13 10:11:46 crc kubenswrapper[4632]: I0313 10:11:46.068616 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-utilities\") pod \"community-operators-lgwff\" (UID: \"0d0fc567-0682-4bbc-981b-b4d1df62aa4e\") " pod="openshift-marketplace/community-operators-lgwff" Mar 13 10:11:46 crc kubenswrapper[4632]: I0313 10:11:46.087000 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw5gz\" (UniqueName: 
\"kubernetes.io/projected/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-kube-api-access-sw5gz\") pod \"community-operators-lgwff\" (UID: \"0d0fc567-0682-4bbc-981b-b4d1df62aa4e\") " pod="openshift-marketplace/community-operators-lgwff" Mar 13 10:11:46 crc kubenswrapper[4632]: I0313 10:11:46.236553 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vwgfr"] Mar 13 10:11:46 crc kubenswrapper[4632]: W0313 10:11:46.243669 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cc71d2_1901_4778_8e20_93646cfc1a85.slice/crio-02f7137cb66e9d8ffc3cef2380010ceb3716ead7cd44ba15fbfbc983daf62896 WatchSource:0}: Error finding container 02f7137cb66e9d8ffc3cef2380010ceb3716ead7cd44ba15fbfbc983daf62896: Status 404 returned error can't find the container with id 02f7137cb66e9d8ffc3cef2380010ceb3716ead7cd44ba15fbfbc983daf62896 Mar 13 10:11:46 crc kubenswrapper[4632]: I0313 10:11:46.576622 4632 generic.go:334] "Generic (PLEG): container finished" podID="f5cc71d2-1901-4778-8e20-93646cfc1a85" containerID="5cd341447acd60a5969c3c5d69de4256985df7ef4a0d5e8e53aa3477a222e75f" exitCode=0 Mar 13 10:11:46 crc kubenswrapper[4632]: I0313 10:11:46.576672 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwgfr" event={"ID":"f5cc71d2-1901-4778-8e20-93646cfc1a85","Type":"ContainerDied","Data":"5cd341447acd60a5969c3c5d69de4256985df7ef4a0d5e8e53aa3477a222e75f"} Mar 13 10:11:46 crc kubenswrapper[4632]: I0313 10:11:46.576747 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwgfr" event={"ID":"f5cc71d2-1901-4778-8e20-93646cfc1a85","Type":"ContainerStarted","Data":"02f7137cb66e9d8ffc3cef2380010ceb3716ead7cd44ba15fbfbc983daf62896"} Mar 13 10:11:46 crc kubenswrapper[4632]: I0313 10:11:46.579261 4632 generic.go:334] "Generic (PLEG): container finished" podID="0fa3faab-9e82-4fde-afff-3de6939a17d1" containerID="aad77b665209df43df4f12654ddb920ecda5a086061cc28b2dbcc76554ecf7e4" exitCode=0 Mar 13 10:11:46 crc kubenswrapper[4632]: I0313 10:11:46.579885 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ksc5" event={"ID":"0fa3faab-9e82-4fde-afff-3de6939a17d1","Type":"ContainerDied","Data":"aad77b665209df43df4f12654ddb920ecda5a086061cc28b2dbcc76554ecf7e4"} Mar 13 10:11:46 crc kubenswrapper[4632]: I0313 10:11:46.583558 4632 generic.go:334] "Generic (PLEG): container finished" podID="f7f61b75-16bf-4c5a-be30-c88d155c203f" containerID="9d93b23ee83cf056c79c74de74877ea34c2e598d1e294be73d5438be718035cc" exitCode=0 Mar 13 10:11:46 crc kubenswrapper[4632]: I0313 10:11:46.583595 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gdt8x" event={"ID":"f7f61b75-16bf-4c5a-be30-c88d155c203f","Type":"ContainerDied","Data":"9d93b23ee83cf056c79c74de74877ea34c2e598d1e294be73d5438be718035cc"} Mar 13 10:11:47 crc kubenswrapper[4632]: I0313 10:11:47.117455 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 13 10:11:47 crc kubenswrapper[4632]: I0313 10:11:47.125571 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lgwff" Mar 13 10:11:47 crc kubenswrapper[4632]: I0313 10:11:47.590707 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gdt8x" event={"ID":"f7f61b75-16bf-4c5a-be30-c88d155c203f","Type":"ContainerStarted","Data":"7d35b444d1bd4f368370f7c4bf5efbd36d548f0061deecb9c5a6f7055dac2899"} Mar 13 10:11:47 crc kubenswrapper[4632]: I0313 10:11:47.593005 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwgfr" event={"ID":"f5cc71d2-1901-4778-8e20-93646cfc1a85","Type":"ContainerStarted","Data":"d7f8d7b027c206a1d65a3b139e82c14e6d7189a3bd2c2246d3677e3545220555"} Mar 13 10:11:47 crc kubenswrapper[4632]: W0313 10:11:47.618495 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d0fc567_0682_4bbc_981b_b4d1df62aa4e.slice/crio-4266d075e4d04ea92ccfdc02ec4b3551e54779fe4f2f2c386ac2a209fda18404 WatchSource:0}: Error finding container 4266d075e4d04ea92ccfdc02ec4b3551e54779fe4f2f2c386ac2a209fda18404: Status 404 returned error can't find the container with id 4266d075e4d04ea92ccfdc02ec4b3551e54779fe4f2f2c386ac2a209fda18404 Mar 13 10:11:47 crc kubenswrapper[4632]: I0313 10:11:47.620564 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lgwff"] Mar 13 10:11:47 crc kubenswrapper[4632]: I0313 10:11:47.641513 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gdt8x" podStartSLOduration=2.187043059 podStartE2EDuration="4.641496544s" podCreationTimestamp="2026-03-13 10:11:43 +0000 UTC" firstStartedPulling="2026-03-13 10:11:44.562136069 +0000 UTC m=+478.584666222" lastFinishedPulling="2026-03-13 10:11:47.016589564 +0000 UTC m=+481.039119707" observedRunningTime="2026-03-13 10:11:47.637925372 +0000 UTC m=+481.660455505" watchObservedRunningTime="2026-03-13 10:11:47.641496544 +0000 UTC m=+481.664026667" Mar 13 10:11:48 crc kubenswrapper[4632]: I0313 10:11:48.602914 4632 generic.go:334] "Generic (PLEG): container finished" podID="f5cc71d2-1901-4778-8e20-93646cfc1a85" containerID="d7f8d7b027c206a1d65a3b139e82c14e6d7189a3bd2c2246d3677e3545220555" exitCode=0 Mar 13 10:11:48 crc kubenswrapper[4632]: I0313 10:11:48.602993 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwgfr" event={"ID":"f5cc71d2-1901-4778-8e20-93646cfc1a85","Type":"ContainerDied","Data":"d7f8d7b027c206a1d65a3b139e82c14e6d7189a3bd2c2246d3677e3545220555"} Mar 13 10:11:48 crc kubenswrapper[4632]: I0313 10:11:48.604901 4632 generic.go:334] "Generic (PLEG): container finished" podID="0d0fc567-0682-4bbc-981b-b4d1df62aa4e" containerID="647f33468b0e454866917d9beec3f31ad6bc8dca469daccdfcf7e8df5de24312" exitCode=0 Mar 13 10:11:48 crc kubenswrapper[4632]: I0313 10:11:48.604965 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgwff" event={"ID":"0d0fc567-0682-4bbc-981b-b4d1df62aa4e","Type":"ContainerDied","Data":"647f33468b0e454866917d9beec3f31ad6bc8dca469daccdfcf7e8df5de24312"} Mar 13 10:11:48 crc kubenswrapper[4632]: I0313 10:11:48.605006 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgwff" event={"ID":"0d0fc567-0682-4bbc-981b-b4d1df62aa4e","Type":"ContainerStarted","Data":"4266d075e4d04ea92ccfdc02ec4b3551e54779fe4f2f2c386ac2a209fda18404"} Mar 13 
10:11:49 crc kubenswrapper[4632]: I0313 10:11:49.612496 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwgfr" event={"ID":"f5cc71d2-1901-4778-8e20-93646cfc1a85","Type":"ContainerStarted","Data":"bb3ba6ce6125af4b8eb6420d2a43fb97d9fe1916c470da9c74f26fdd0873591b"} Mar 13 10:11:49 crc kubenswrapper[4632]: I0313 10:11:49.654756 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vwgfr" podStartSLOduration=2.221006935 podStartE2EDuration="4.654729226s" podCreationTimestamp="2026-03-13 10:11:45 +0000 UTC" firstStartedPulling="2026-03-13 10:11:46.578262367 +0000 UTC m=+480.600792500" lastFinishedPulling="2026-03-13 10:11:49.011984658 +0000 UTC m=+483.034514791" observedRunningTime="2026-03-13 10:11:49.64705096 +0000 UTC m=+483.669581093" watchObservedRunningTime="2026-03-13 10:11:49.654729226 +0000 UTC m=+483.677259359" Mar 13 10:11:50 crc kubenswrapper[4632]: I0313 10:11:50.620303 4632 generic.go:334] "Generic (PLEG): container finished" podID="0d0fc567-0682-4bbc-981b-b4d1df62aa4e" containerID="6ed6c1b1b2793ab4b788dc1723932bf9c4121a7bf0945a697809d4c945eec749" exitCode=0 Mar 13 10:11:50 crc kubenswrapper[4632]: I0313 10:11:50.622737 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgwff" event={"ID":"0d0fc567-0682-4bbc-981b-b4d1df62aa4e","Type":"ContainerDied","Data":"6ed6c1b1b2793ab4b788dc1723932bf9c4121a7bf0945a697809d4c945eec749"} Mar 13 10:11:51 crc kubenswrapper[4632]: I0313 10:11:51.638210 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ksc5" event={"ID":"0fa3faab-9e82-4fde-afff-3de6939a17d1","Type":"ContainerStarted","Data":"43406af3f8edd079e1b933465df01dbcc93292bb25548398ea4066e38533524f"} Mar 13 10:11:51 crc kubenswrapper[4632]: I0313 10:11:51.640212 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgwff" event={"ID":"0d0fc567-0682-4bbc-981b-b4d1df62aa4e","Type":"ContainerStarted","Data":"6fa56a0ef2065ba4287ddc46b227aad0c8d55e685aeeb7889682c05acb775492"} Mar 13 10:11:51 crc kubenswrapper[4632]: I0313 10:11:51.659066 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7ksc5" podStartSLOduration=2.160780737 podStartE2EDuration="8.659050714s" podCreationTimestamp="2026-03-13 10:11:43 +0000 UTC" firstStartedPulling="2026-03-13 10:11:44.566208232 +0000 UTC m=+478.588738365" lastFinishedPulling="2026-03-13 10:11:51.064478209 +0000 UTC m=+485.087008342" observedRunningTime="2026-03-13 10:11:51.656879025 +0000 UTC m=+485.679409168" watchObservedRunningTime="2026-03-13 10:11:51.659050714 +0000 UTC m=+485.681580847" Mar 13 10:11:53 crc kubenswrapper[4632]: I0313 10:11:53.576264 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7ksc5" Mar 13 10:11:53 crc kubenswrapper[4632]: I0313 10:11:53.576332 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7ksc5" Mar 13 10:11:53 crc kubenswrapper[4632]: I0313 10:11:53.618319 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7ksc5" Mar 13 10:11:53 crc kubenswrapper[4632]: I0313 10:11:53.637335 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-lgwff" podStartSLOduration=6.078483538 podStartE2EDuration="8.637318265s" podCreationTimestamp="2026-03-13 10:11:45 +0000 UTC" firstStartedPulling="2026-03-13 10:11:48.606607154 +0000 UTC m=+482.629137287" lastFinishedPulling="2026-03-13 10:11:51.165441881 +0000 UTC m=+485.187972014" observedRunningTime="2026-03-13 10:11:51.676715899 +0000 UTC m=+485.699246032" watchObservedRunningTime="2026-03-13 10:11:53.637318265 +0000 UTC m=+487.659848388" Mar 13 10:11:53 crc kubenswrapper[4632]: I0313 10:11:53.787988 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gdt8x" Mar 13 10:11:53 crc kubenswrapper[4632]: I0313 10:11:53.788169 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gdt8x" Mar 13 10:11:53 crc kubenswrapper[4632]: I0313 10:11:53.830169 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gdt8x" Mar 13 10:11:54 crc kubenswrapper[4632]: I0313 10:11:54.711761 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gdt8x" Mar 13 10:11:56 crc kubenswrapper[4632]: I0313 10:11:56.030467 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vwgfr" Mar 13 10:11:56 crc kubenswrapper[4632]: I0313 10:11:56.030533 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vwgfr" Mar 13 10:11:56 crc kubenswrapper[4632]: I0313 10:11:56.073766 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vwgfr" Mar 13 10:11:56 crc kubenswrapper[4632]: I0313 10:11:56.720582 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vwgfr" Mar 13 10:11:56 crc kubenswrapper[4632]: I0313 10:11:56.751532 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-sbzcm" Mar 13 10:11:56 crc kubenswrapper[4632]: I0313 10:11:56.803711 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fxs5z"] Mar 13 10:11:57 crc kubenswrapper[4632]: I0313 10:11:57.126315 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lgwff" Mar 13 10:11:57 crc kubenswrapper[4632]: I0313 10:11:57.126396 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lgwff" Mar 13 10:11:57 crc kubenswrapper[4632]: I0313 10:11:57.166084 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lgwff" Mar 13 10:11:57 crc kubenswrapper[4632]: I0313 10:11:57.722544 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lgwff" Mar 13 10:12:00 crc kubenswrapper[4632]: I0313 10:12:00.134685 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556612-5t5ct"] Mar 13 10:12:00 crc kubenswrapper[4632]: I0313 10:12:00.135538 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556612-5t5ct" Mar 13 10:12:00 crc kubenswrapper[4632]: I0313 10:12:00.139370 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:12:00 crc kubenswrapper[4632]: I0313 10:12:00.140034 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:12:00 crc kubenswrapper[4632]: I0313 10:12:00.140197 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:12:00 crc kubenswrapper[4632]: I0313 10:12:00.145341 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556612-5t5ct"] Mar 13 10:12:00 crc kubenswrapper[4632]: I0313 10:12:00.277366 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6pkz\" (UniqueName: \"kubernetes.io/projected/050ee655-a62f-4991-b493-d98493762823-kube-api-access-v6pkz\") pod \"auto-csr-approver-29556612-5t5ct\" (UID: \"050ee655-a62f-4991-b493-d98493762823\") " pod="openshift-infra/auto-csr-approver-29556612-5t5ct" Mar 13 10:12:00 crc kubenswrapper[4632]: I0313 10:12:00.378852 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6pkz\" (UniqueName: \"kubernetes.io/projected/050ee655-a62f-4991-b493-d98493762823-kube-api-access-v6pkz\") pod \"auto-csr-approver-29556612-5t5ct\" (UID: \"050ee655-a62f-4991-b493-d98493762823\") " pod="openshift-infra/auto-csr-approver-29556612-5t5ct" Mar 13 10:12:00 crc kubenswrapper[4632]: I0313 10:12:00.400390 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6pkz\" (UniqueName: \"kubernetes.io/projected/050ee655-a62f-4991-b493-d98493762823-kube-api-access-v6pkz\") pod \"auto-csr-approver-29556612-5t5ct\" (UID: \"050ee655-a62f-4991-b493-d98493762823\") " pod="openshift-infra/auto-csr-approver-29556612-5t5ct" Mar 13 10:12:00 crc kubenswrapper[4632]: I0313 10:12:00.459535 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556612-5t5ct" Mar 13 10:12:00 crc kubenswrapper[4632]: I0313 10:12:00.685043 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556612-5t5ct"] Mar 13 10:12:01 crc kubenswrapper[4632]: I0313 10:12:01.714183 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556612-5t5ct" event={"ID":"050ee655-a62f-4991-b493-d98493762823","Type":"ContainerStarted","Data":"72656c6d5a8d1a8b63e7d9f1bae015750738acdded0ca8534eb3a2e0a68316b7"} Mar 13 10:12:02 crc kubenswrapper[4632]: I0313 10:12:02.723204 4632 generic.go:334] "Generic (PLEG): container finished" podID="050ee655-a62f-4991-b493-d98493762823" containerID="3025e6a57984dbcc7f1272476cb4a6a1339dea799f52af43239e5a72f7479138" exitCode=0 Mar 13 10:12:02 crc kubenswrapper[4632]: I0313 10:12:02.723324 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556612-5t5ct" event={"ID":"050ee655-a62f-4991-b493-d98493762823","Type":"ContainerDied","Data":"3025e6a57984dbcc7f1272476cb4a6a1339dea799f52af43239e5a72f7479138"} Mar 13 10:12:03 crc kubenswrapper[4632]: I0313 10:12:03.697144 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7ksc5" Mar 13 10:12:04 crc kubenswrapper[4632]: I0313 10:12:04.030348 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556612-5t5ct" Mar 13 10:12:04 crc kubenswrapper[4632]: I0313 10:12:04.153389 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6pkz\" (UniqueName: \"kubernetes.io/projected/050ee655-a62f-4991-b493-d98493762823-kube-api-access-v6pkz\") pod \"050ee655-a62f-4991-b493-d98493762823\" (UID: \"050ee655-a62f-4991-b493-d98493762823\") " Mar 13 10:12:04 crc kubenswrapper[4632]: I0313 10:12:04.160662 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050ee655-a62f-4991-b493-d98493762823-kube-api-access-v6pkz" (OuterVolumeSpecName: "kube-api-access-v6pkz") pod "050ee655-a62f-4991-b493-d98493762823" (UID: "050ee655-a62f-4991-b493-d98493762823"). InnerVolumeSpecName "kube-api-access-v6pkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:12:04 crc kubenswrapper[4632]: I0313 10:12:04.255682 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6pkz\" (UniqueName: \"kubernetes.io/projected/050ee655-a62f-4991-b493-d98493762823-kube-api-access-v6pkz\") on node \"crc\" DevicePath \"\"" Mar 13 10:12:04 crc kubenswrapper[4632]: I0313 10:12:04.740458 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556612-5t5ct" event={"ID":"050ee655-a62f-4991-b493-d98493762823","Type":"ContainerDied","Data":"72656c6d5a8d1a8b63e7d9f1bae015750738acdded0ca8534eb3a2e0a68316b7"} Mar 13 10:12:04 crc kubenswrapper[4632]: I0313 10:12:04.741166 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72656c6d5a8d1a8b63e7d9f1bae015750738acdded0ca8534eb3a2e0a68316b7" Mar 13 10:12:04 crc kubenswrapper[4632]: I0313 10:12:04.740567 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556612-5t5ct" Mar 13 10:12:05 crc kubenswrapper[4632]: I0313 10:12:05.082234 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556606-mkrp2"] Mar 13 10:12:05 crc kubenswrapper[4632]: I0313 10:12:05.087474 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556606-mkrp2"] Mar 13 10:12:06 crc kubenswrapper[4632]: I0313 10:12:06.052502 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c822257d-9d2f-4b6f-87de-131de5cd0efe" path="/var/lib/kubelet/pods/c822257d-9d2f-4b6f-87de-131de5cd0efe/volumes" Mar 13 10:12:10 crc kubenswrapper[4632]: I0313 10:12:10.461195 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:12:10 crc kubenswrapper[4632]: I0313 10:12:10.461720 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:12:10 crc kubenswrapper[4632]: I0313 10:12:10.461786 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:12:10 crc kubenswrapper[4632]: I0313 10:12:10.463279 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4989d70178427347867288c3fc7b62a339fa6ecdddde954f719a53f3db7fe17"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 10:12:10 crc kubenswrapper[4632]: I0313 10:12:10.463523 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://e4989d70178427347867288c3fc7b62a339fa6ecdddde954f719a53f3db7fe17" gracePeriod=600 Mar 13 10:12:10 crc kubenswrapper[4632]: I0313 10:12:10.786701 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="e4989d70178427347867288c3fc7b62a339fa6ecdddde954f719a53f3db7fe17" exitCode=0 Mar 13 10:12:10 crc kubenswrapper[4632]: I0313 10:12:10.786998 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"e4989d70178427347867288c3fc7b62a339fa6ecdddde954f719a53f3db7fe17"} Mar 13 10:12:10 crc kubenswrapper[4632]: I0313 10:12:10.787681 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"313e3b067f9ea051953ab56cbddeb09cc8cceb68240f33ca492d13584077681c"} Mar 13 10:12:10 crc kubenswrapper[4632]: I0313 10:12:10.787850 4632 scope.go:117] "RemoveContainer" 
containerID="8d890c96abcbe37c4f2e487a63e4f0d5f48c462a6fee6b8b1930384bdbfebee7" Mar 13 10:12:21 crc kubenswrapper[4632]: I0313 10:12:21.852207 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" podUID="f56fc09a-e2b7-46db-b938-f276df3f033e" containerName="registry" containerID="cri-o://1081e88c7001e64d6f95133ef3938fcdaf6163c9ecf6555e86dc52149387161f" gracePeriod=30 Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.197567 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.305730 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mprmj\" (UniqueName: \"kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-kube-api-access-mprmj\") pod \"f56fc09a-e2b7-46db-b938-f276df3f033e\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.305785 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-bound-sa-token\") pod \"f56fc09a-e2b7-46db-b938-f276df3f033e\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.306013 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"f56fc09a-e2b7-46db-b938-f276df3f033e\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.306057 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-registry-tls\") pod \"f56fc09a-e2b7-46db-b938-f276df3f033e\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.306091 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f56fc09a-e2b7-46db-b938-f276df3f033e-trusted-ca\") pod \"f56fc09a-e2b7-46db-b938-f276df3f033e\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.306113 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f56fc09a-e2b7-46db-b938-f276df3f033e-registry-certificates\") pod \"f56fc09a-e2b7-46db-b938-f276df3f033e\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.306152 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f56fc09a-e2b7-46db-b938-f276df3f033e-installation-pull-secrets\") pod \"f56fc09a-e2b7-46db-b938-f276df3f033e\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.306193 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f56fc09a-e2b7-46db-b938-f276df3f033e-ca-trust-extracted\") pod \"f56fc09a-e2b7-46db-b938-f276df3f033e\" (UID: \"f56fc09a-e2b7-46db-b938-f276df3f033e\") " 
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.307117 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56fc09a-e2b7-46db-b938-f276df3f033e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f56fc09a-e2b7-46db-b938-f276df3f033e" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.307288 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56fc09a-e2b7-46db-b938-f276df3f033e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f56fc09a-e2b7-46db-b938-f276df3f033e" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.314904 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f56fc09a-e2b7-46db-b938-f276df3f033e" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.315431 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f56fc09a-e2b7-46db-b938-f276df3f033e" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.315633 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-kube-api-access-mprmj" (OuterVolumeSpecName: "kube-api-access-mprmj") pod "f56fc09a-e2b7-46db-b938-f276df3f033e" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e"). InnerVolumeSpecName "kube-api-access-mprmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.317417 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "f56fc09a-e2b7-46db-b938-f276df3f033e" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.317816 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56fc09a-e2b7-46db-b938-f276df3f033e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f56fc09a-e2b7-46db-b938-f276df3f033e" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.334917 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f56fc09a-e2b7-46db-b938-f276df3f033e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f56fc09a-e2b7-46db-b938-f276df3f033e" (UID: "f56fc09a-e2b7-46db-b938-f276df3f033e"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.408009 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mprmj\" (UniqueName: \"kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-kube-api-access-mprmj\") on node \"crc\" DevicePath \"\""
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.408050 4632 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-bound-sa-token\") on node \"crc\" DevicePath \"\""
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.408065 4632 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f56fc09a-e2b7-46db-b938-f276df3f033e-registry-tls\") on node \"crc\" DevicePath \"\""
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.408078 4632 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f56fc09a-e2b7-46db-b938-f276df3f033e-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.408090 4632 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f56fc09a-e2b7-46db-b938-f276df3f033e-registry-certificates\") on node \"crc\" DevicePath \"\""
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.408101 4632 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f56fc09a-e2b7-46db-b938-f276df3f033e-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.408112 4632 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f56fc09a-e2b7-46db-b938-f276df3f033e-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.860732 4632 generic.go:334] "Generic (PLEG): container finished" podID="f56fc09a-e2b7-46db-b938-f276df3f033e" containerID="1081e88c7001e64d6f95133ef3938fcdaf6163c9ecf6555e86dc52149387161f" exitCode=0
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.860782 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z"
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.860776 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" event={"ID":"f56fc09a-e2b7-46db-b938-f276df3f033e","Type":"ContainerDied","Data":"1081e88c7001e64d6f95133ef3938fcdaf6163c9ecf6555e86dc52149387161f"}
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.861006 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fxs5z" event={"ID":"f56fc09a-e2b7-46db-b938-f276df3f033e","Type":"ContainerDied","Data":"6db13fe4cd83b1210971879bf1313cee58732376958e857687de7da1568c6519"}
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.861058 4632 scope.go:117] "RemoveContainer" containerID="1081e88c7001e64d6f95133ef3938fcdaf6163c9ecf6555e86dc52149387161f"
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.894049 4632 scope.go:117] "RemoveContainer" containerID="1081e88c7001e64d6f95133ef3938fcdaf6163c9ecf6555e86dc52149387161f"
Mar 13 10:12:22 crc kubenswrapper[4632]: E0313 10:12:22.894672 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1081e88c7001e64d6f95133ef3938fcdaf6163c9ecf6555e86dc52149387161f\": container with ID starting with 1081e88c7001e64d6f95133ef3938fcdaf6163c9ecf6555e86dc52149387161f not found: ID does not exist" containerID="1081e88c7001e64d6f95133ef3938fcdaf6163c9ecf6555e86dc52149387161f"
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.894694 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1081e88c7001e64d6f95133ef3938fcdaf6163c9ecf6555e86dc52149387161f"} err="failed to get container status \"1081e88c7001e64d6f95133ef3938fcdaf6163c9ecf6555e86dc52149387161f\": rpc error: code = NotFound desc = could not find container \"1081e88c7001e64d6f95133ef3938fcdaf6163c9ecf6555e86dc52149387161f\": container with ID starting with 1081e88c7001e64d6f95133ef3938fcdaf6163c9ecf6555e86dc52149387161f not found: ID does not exist"
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.895982 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fxs5z"]
Mar 13 10:12:22 crc kubenswrapper[4632]: I0313 10:12:22.899735 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fxs5z"]
Mar 13 10:12:24 crc kubenswrapper[4632]: I0313 10:12:24.050601 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f56fc09a-e2b7-46db-b938-f276df3f033e" path="/var/lib/kubelet/pods/f56fc09a-e2b7-46db-b938-f276df3f033e/volumes"
Mar 13 10:14:00 crc kubenswrapper[4632]: I0313 10:14:00.142225 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556614-7pzwt"]
Mar 13 10:14:00 crc kubenswrapper[4632]: E0313 10:14:00.143645 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56fc09a-e2b7-46db-b938-f276df3f033e" containerName="registry"
Mar 13 10:14:00 crc kubenswrapper[4632]: I0313 10:14:00.143672 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56fc09a-e2b7-46db-b938-f276df3f033e" containerName="registry"
Mar 13 10:14:00 crc kubenswrapper[4632]: E0313 10:14:00.143694 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050ee655-a62f-4991-b493-d98493762823" containerName="oc"
Mar 13 10:14:00 crc kubenswrapper[4632]: I0313 10:14:00.143701 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="050ee655-a62f-4991-b493-d98493762823" containerName="oc"
Mar 13 10:14:00 crc kubenswrapper[4632]: I0313 10:14:00.143808 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="050ee655-a62f-4991-b493-d98493762823" containerName="oc"
Mar 13 10:14:00 crc kubenswrapper[4632]: I0313 10:14:00.143834 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f56fc09a-e2b7-46db-b938-f276df3f033e" containerName="registry"
Mar 13 10:14:00 crc kubenswrapper[4632]: I0313 10:14:00.144386 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556614-7pzwt"
Mar 13 10:14:00 crc kubenswrapper[4632]: I0313 10:14:00.147555 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 10:14:00 crc kubenswrapper[4632]: I0313 10:14:00.147759 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 10:14:00 crc kubenswrapper[4632]: I0313 10:14:00.151639 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 10:14:00 crc kubenswrapper[4632]: I0313 10:14:00.157680 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556614-7pzwt"]
Mar 13 10:14:00 crc kubenswrapper[4632]: I0313 10:14:00.247288 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z79d7\" (UniqueName: \"kubernetes.io/projected/f80bfe67-be24-45e3-9e57-b67389f8cc63-kube-api-access-z79d7\") pod \"auto-csr-approver-29556614-7pzwt\" (UID: \"f80bfe67-be24-45e3-9e57-b67389f8cc63\") " pod="openshift-infra/auto-csr-approver-29556614-7pzwt"
Mar 13 10:14:00 crc kubenswrapper[4632]: I0313 10:14:00.348102 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z79d7\" (UniqueName: \"kubernetes.io/projected/f80bfe67-be24-45e3-9e57-b67389f8cc63-kube-api-access-z79d7\") pod \"auto-csr-approver-29556614-7pzwt\" (UID: \"f80bfe67-be24-45e3-9e57-b67389f8cc63\") " pod="openshift-infra/auto-csr-approver-29556614-7pzwt"
Mar 13 10:14:00 crc kubenswrapper[4632]: I0313 10:14:00.371011 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z79d7\" (UniqueName: \"kubernetes.io/projected/f80bfe67-be24-45e3-9e57-b67389f8cc63-kube-api-access-z79d7\") pod \"auto-csr-approver-29556614-7pzwt\" (UID: \"f80bfe67-be24-45e3-9e57-b67389f8cc63\") " pod="openshift-infra/auto-csr-approver-29556614-7pzwt"
Mar 13 10:14:00 crc kubenswrapper[4632]: I0313 10:14:00.460902 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556614-7pzwt"
Mar 13 10:14:00 crc kubenswrapper[4632]: I0313 10:14:00.674674 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556614-7pzwt"]
Mar 13 10:14:00 crc kubenswrapper[4632]: I0313 10:14:00.685619 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 10:14:01 crc kubenswrapper[4632]: I0313 10:14:01.439893 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556614-7pzwt" event={"ID":"f80bfe67-be24-45e3-9e57-b67389f8cc63","Type":"ContainerStarted","Data":"9e73f3e8b4f5dc8b5a5a916940efe609dd694e9b9269560be1517d95ab449710"}
Mar 13 10:14:02 crc kubenswrapper[4632]: I0313 10:14:02.447321 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556614-7pzwt" event={"ID":"f80bfe67-be24-45e3-9e57-b67389f8cc63","Type":"ContainerDied","Data":"fcf5d9f69f7435b287086bfcb908c42e9330ebc2ef407226d11b60f145efd8de"}
Mar 13 10:14:02 crc kubenswrapper[4632]: I0313 10:14:02.447799 4632 generic.go:334] "Generic (PLEG): container finished" podID="f80bfe67-be24-45e3-9e57-b67389f8cc63" containerID="fcf5d9f69f7435b287086bfcb908c42e9330ebc2ef407226d11b60f145efd8de" exitCode=0
Mar 13 10:14:03 crc kubenswrapper[4632]: I0313 10:14:03.657284 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556614-7pzwt"
Mar 13 10:14:03 crc kubenswrapper[4632]: I0313 10:14:03.795467 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z79d7\" (UniqueName: \"kubernetes.io/projected/f80bfe67-be24-45e3-9e57-b67389f8cc63-kube-api-access-z79d7\") pod \"f80bfe67-be24-45e3-9e57-b67389f8cc63\" (UID: \"f80bfe67-be24-45e3-9e57-b67389f8cc63\") "
Mar 13 10:14:03 crc kubenswrapper[4632]: I0313 10:14:03.805831 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f80bfe67-be24-45e3-9e57-b67389f8cc63-kube-api-access-z79d7" (OuterVolumeSpecName: "kube-api-access-z79d7") pod "f80bfe67-be24-45e3-9e57-b67389f8cc63" (UID: "f80bfe67-be24-45e3-9e57-b67389f8cc63"). InnerVolumeSpecName "kube-api-access-z79d7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:14:03 crc kubenswrapper[4632]: I0313 10:14:03.896852 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z79d7\" (UniqueName: \"kubernetes.io/projected/f80bfe67-be24-45e3-9e57-b67389f8cc63-kube-api-access-z79d7\") on node \"crc\" DevicePath \"\""
Mar 13 10:14:04 crc kubenswrapper[4632]: I0313 10:14:04.461277 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556614-7pzwt" event={"ID":"f80bfe67-be24-45e3-9e57-b67389f8cc63","Type":"ContainerDied","Data":"9e73f3e8b4f5dc8b5a5a916940efe609dd694e9b9269560be1517d95ab449710"}
Mar 13 10:14:04 crc kubenswrapper[4632]: I0313 10:14:04.461598 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e73f3e8b4f5dc8b5a5a916940efe609dd694e9b9269560be1517d95ab449710"
Mar 13 10:14:04 crc kubenswrapper[4632]: I0313 10:14:04.461491 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556614-7pzwt"
Mar 13 10:14:04 crc kubenswrapper[4632]: I0313 10:14:04.722464 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556608-9kzfk"]
Mar 13 10:14:04 crc kubenswrapper[4632]: I0313 10:14:04.730589 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556608-9kzfk"]
Mar 13 10:14:06 crc kubenswrapper[4632]: I0313 10:14:06.056068 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37ab6711-478f-4cc7-b9a4-c9baa126b1a3" path="/var/lib/kubelet/pods/37ab6711-478f-4cc7-b9a4-c9baa126b1a3/volumes"
Mar 13 10:14:10 crc kubenswrapper[4632]: I0313 10:14:10.461215 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 10:14:10 crc kubenswrapper[4632]: I0313 10:14:10.461820 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 10:14:40 crc kubenswrapper[4632]: I0313 10:14:40.461386 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 10:14:40 crc kubenswrapper[4632]: I0313 10:14:40.462082 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 10:14:56 crc kubenswrapper[4632]: I0313 10:14:56.573798 4632 scope.go:117] "RemoveContainer" containerID="24d957ae4862987ed76c21db8796ae914a7d2beca83397bc3f90816dc051c956"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.139474 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"]
Mar 13 10:15:00 crc kubenswrapper[4632]: E0313 10:15:00.140696 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80bfe67-be24-45e3-9e57-b67389f8cc63" containerName="oc"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.140800 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80bfe67-be24-45e3-9e57-b67389f8cc63" containerName="oc"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.141072 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f80bfe67-be24-45e3-9e57-b67389f8cc63" containerName="oc"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.141647 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.144234 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.144852 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.149841 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"]
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.220306 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c3c6392-454c-4131-90a0-6584565cef4c-config-volume\") pod \"collect-profiles-29556615-zj2fm\" (UID: \"3c3c6392-454c-4131-90a0-6584565cef4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.220352 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdggs\" (UniqueName: \"kubernetes.io/projected/3c3c6392-454c-4131-90a0-6584565cef4c-kube-api-access-mdggs\") pod \"collect-profiles-29556615-zj2fm\" (UID: \"3c3c6392-454c-4131-90a0-6584565cef4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.220410 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c3c6392-454c-4131-90a0-6584565cef4c-secret-volume\") pod \"collect-profiles-29556615-zj2fm\" (UID: \"3c3c6392-454c-4131-90a0-6584565cef4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.321158 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c3c6392-454c-4131-90a0-6584565cef4c-config-volume\") pod \"collect-profiles-29556615-zj2fm\" (UID: \"3c3c6392-454c-4131-90a0-6584565cef4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.321221 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdggs\" (UniqueName: \"kubernetes.io/projected/3c3c6392-454c-4131-90a0-6584565cef4c-kube-api-access-mdggs\") pod \"collect-profiles-29556615-zj2fm\" (UID: \"3c3c6392-454c-4131-90a0-6584565cef4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.321256 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c3c6392-454c-4131-90a0-6584565cef4c-secret-volume\") pod \"collect-profiles-29556615-zj2fm\" (UID: \"3c3c6392-454c-4131-90a0-6584565cef4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.322052 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c3c6392-454c-4131-90a0-6584565cef4c-config-volume\") pod \"collect-profiles-29556615-zj2fm\" (UID: \"3c3c6392-454c-4131-90a0-6584565cef4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.327262 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c3c6392-454c-4131-90a0-6584565cef4c-secret-volume\") pod \"collect-profiles-29556615-zj2fm\" (UID: \"3c3c6392-454c-4131-90a0-6584565cef4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.338478 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdggs\" (UniqueName: \"kubernetes.io/projected/3c3c6392-454c-4131-90a0-6584565cef4c-kube-api-access-mdggs\") pod \"collect-profiles-29556615-zj2fm\" (UID: \"3c3c6392-454c-4131-90a0-6584565cef4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.465441 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.685614 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"]
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.881854 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm" event={"ID":"3c3c6392-454c-4131-90a0-6584565cef4c","Type":"ContainerStarted","Data":"6acdfd407705651773e15ca9493f2efdac886dce6f04123c798b57f93aa775b6"}
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.897628 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm" event={"ID":"3c3c6392-454c-4131-90a0-6584565cef4c","Type":"ContainerStarted","Data":"d111a5a6d8b3ddefc63155935fa715861914a40029da3f7348682af126904c65"}
Mar 13 10:15:00 crc kubenswrapper[4632]: I0313 10:15:00.916139 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm" podStartSLOduration=0.916109748 podStartE2EDuration="916.109748ms" podCreationTimestamp="2026-03-13 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:15:00.912088434 +0000 UTC m=+674.934618587" watchObservedRunningTime="2026-03-13 10:15:00.916109748 +0000 UTC m=+674.938639881"
Mar 13 10:15:01 crc kubenswrapper[4632]: I0313 10:15:01.897244 4632 generic.go:334] "Generic (PLEG): container finished" podID="3c3c6392-454c-4131-90a0-6584565cef4c" containerID="6acdfd407705651773e15ca9493f2efdac886dce6f04123c798b57f93aa775b6" exitCode=0
Mar 13 10:15:01 crc kubenswrapper[4632]: I0313 10:15:01.897290 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm" event={"ID":"3c3c6392-454c-4131-90a0-6584565cef4c","Type":"ContainerDied","Data":"6acdfd407705651773e15ca9493f2efdac886dce6f04123c798b57f93aa775b6"}
Mar 13 10:15:03 crc kubenswrapper[4632]: I0313 10:15:03.084997 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"
Mar 13 10:15:03 crc kubenswrapper[4632]: I0313 10:15:03.262486 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c3c6392-454c-4131-90a0-6584565cef4c-secret-volume\") pod \"3c3c6392-454c-4131-90a0-6584565cef4c\" (UID: \"3c3c6392-454c-4131-90a0-6584565cef4c\") "
Mar 13 10:15:03 crc kubenswrapper[4632]: I0313 10:15:03.262872 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdggs\" (UniqueName: \"kubernetes.io/projected/3c3c6392-454c-4131-90a0-6584565cef4c-kube-api-access-mdggs\") pod \"3c3c6392-454c-4131-90a0-6584565cef4c\" (UID: \"3c3c6392-454c-4131-90a0-6584565cef4c\") "
Mar 13 10:15:03 crc kubenswrapper[4632]: I0313 10:15:03.263827 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c3c6392-454c-4131-90a0-6584565cef4c-config-volume\") pod \"3c3c6392-454c-4131-90a0-6584565cef4c\" (UID: \"3c3c6392-454c-4131-90a0-6584565cef4c\") "
Mar 13 10:15:03 crc kubenswrapper[4632]: I0313 10:15:03.264583 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c3c6392-454c-4131-90a0-6584565cef4c-config-volume" (OuterVolumeSpecName: "config-volume") pod "3c3c6392-454c-4131-90a0-6584565cef4c" (UID: "3c3c6392-454c-4131-90a0-6584565cef4c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:15:03 crc kubenswrapper[4632]: I0313 10:15:03.269231 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c3c6392-454c-4131-90a0-6584565cef4c-kube-api-access-mdggs" (OuterVolumeSpecName: "kube-api-access-mdggs") pod "3c3c6392-454c-4131-90a0-6584565cef4c" (UID: "3c3c6392-454c-4131-90a0-6584565cef4c"). InnerVolumeSpecName "kube-api-access-mdggs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:15:03 crc kubenswrapper[4632]: I0313 10:15:03.269616 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3c6392-454c-4131-90a0-6584565cef4c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3c3c6392-454c-4131-90a0-6584565cef4c" (UID: "3c3c6392-454c-4131-90a0-6584565cef4c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:15:03 crc kubenswrapper[4632]: I0313 10:15:03.364793 4632 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c3c6392-454c-4131-90a0-6584565cef4c-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 13 10:15:03 crc kubenswrapper[4632]: I0313 10:15:03.365068 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdggs\" (UniqueName: \"kubernetes.io/projected/3c3c6392-454c-4131-90a0-6584565cef4c-kube-api-access-mdggs\") on node \"crc\" DevicePath \"\""
Mar 13 10:15:03 crc kubenswrapper[4632]: I0313 10:15:03.365186 4632 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c3c6392-454c-4131-90a0-6584565cef4c-config-volume\") on node \"crc\" DevicePath \"\""
Mar 13 10:15:03 crc kubenswrapper[4632]: I0313 10:15:03.907670 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm" event={"ID":"3c3c6392-454c-4131-90a0-6584565cef4c","Type":"ContainerDied","Data":"d111a5a6d8b3ddefc63155935fa715861914a40029da3f7348682af126904c65"}
Mar 13 10:15:03 crc kubenswrapper[4632]: I0313 10:15:03.907700 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"
Mar 13 10:15:03 crc kubenswrapper[4632]: I0313 10:15:03.907708 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d111a5a6d8b3ddefc63155935fa715861914a40029da3f7348682af126904c65"
Mar 13 10:15:10 crc kubenswrapper[4632]: I0313 10:15:10.460725 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 10:15:10 crc kubenswrapper[4632]: I0313 10:15:10.461163 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 10:15:10 crc kubenswrapper[4632]: I0313 10:15:10.461209 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb"
Mar 13 10:15:10 crc kubenswrapper[4632]: I0313 10:15:10.947362 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"313e3b067f9ea051953ab56cbddeb09cc8cceb68240f33ca492d13584077681c"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 13 10:15:10 crc kubenswrapper[4632]: I0313 10:15:10.947470 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://313e3b067f9ea051953ab56cbddeb09cc8cceb68240f33ca492d13584077681c" gracePeriod=600
Mar 13 10:15:11 crc kubenswrapper[4632]: I0313 10:15:11.956053 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="313e3b067f9ea051953ab56cbddeb09cc8cceb68240f33ca492d13584077681c" exitCode=0
Mar 13 10:15:11 crc kubenswrapper[4632]: I0313 10:15:11.956123 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"313e3b067f9ea051953ab56cbddeb09cc8cceb68240f33ca492d13584077681c"}
Mar 13 10:15:11 crc kubenswrapper[4632]: I0313 10:15:11.956427 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"7fcd863f1a2b3af4768aa1d32979163bc846d3d472acea1e8c27ffcf3dfe0ffc"}
Mar 13 10:15:11 crc kubenswrapper[4632]: I0313 10:15:11.956451 4632 scope.go:117] "RemoveContainer" containerID="e4989d70178427347867288c3fc7b62a339fa6ecdddde954f719a53f3db7fe17"
Mar 13 10:15:56 crc kubenswrapper[4632]: I0313 10:15:56.617478 4632 scope.go:117] "RemoveContainer" containerID="481e1788f663e81921b410cd12a9e3666afaa2b706dda68096288fee3498f2fa"
Mar 13 10:15:56 crc kubenswrapper[4632]: I0313 10:15:56.656861 4632 scope.go:117] "RemoveContainer" containerID="83de0881072cb52ab7a7fbd2d8ef18cbb3eb4eb7897fd1301bfd2cbf304913b7"
Mar 13 10:16:00 crc kubenswrapper[4632]: I0313 10:16:00.147295 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556616-8xbbs"]
Mar 13 10:16:00 crc kubenswrapper[4632]: E0313 10:16:00.149556 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3c6392-454c-4131-90a0-6584565cef4c" containerName="collect-profiles"
Mar 13 10:16:00 crc kubenswrapper[4632]: I0313 10:16:00.149686 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3c6392-454c-4131-90a0-6584565cef4c" containerName="collect-profiles"
Mar 13 10:16:00 crc kubenswrapper[4632]: I0313 10:16:00.149864 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3c6392-454c-4131-90a0-6584565cef4c" containerName="collect-profiles"
Mar 13 10:16:00 crc kubenswrapper[4632]: I0313 10:16:00.150411 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556616-8xbbs"
Mar 13 10:16:00 crc kubenswrapper[4632]: I0313 10:16:00.155290 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 10:16:00 crc kubenswrapper[4632]: I0313 10:16:00.155680 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 10:16:00 crc kubenswrapper[4632]: I0313 10:16:00.158234 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 10:16:00 crc kubenswrapper[4632]: I0313 10:16:00.158880 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556616-8xbbs"]
Mar 13 10:16:00 crc kubenswrapper[4632]: I0313 10:16:00.221195 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72j5z\" (UniqueName: \"kubernetes.io/projected/c21d462b-89d1-4844-9bfc-3f0cdf7727e9-kube-api-access-72j5z\") pod \"auto-csr-approver-29556616-8xbbs\" (UID: \"c21d462b-89d1-4844-9bfc-3f0cdf7727e9\") " pod="openshift-infra/auto-csr-approver-29556616-8xbbs"
Mar 13 10:16:00 crc kubenswrapper[4632]: I0313 10:16:00.323427 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72j5z\" (UniqueName: \"kubernetes.io/projected/c21d462b-89d1-4844-9bfc-3f0cdf7727e9-kube-api-access-72j5z\") pod \"auto-csr-approver-29556616-8xbbs\" (UID: \"c21d462b-89d1-4844-9bfc-3f0cdf7727e9\") " pod="openshift-infra/auto-csr-approver-29556616-8xbbs"
Mar 13 10:16:00 crc kubenswrapper[4632]: I0313 10:16:00.351091 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72j5z\" (UniqueName: \"kubernetes.io/projected/c21d462b-89d1-4844-9bfc-3f0cdf7727e9-kube-api-access-72j5z\") pod \"auto-csr-approver-29556616-8xbbs\" (UID: \"c21d462b-89d1-4844-9bfc-3f0cdf7727e9\") " pod="openshift-infra/auto-csr-approver-29556616-8xbbs"
Mar 13 10:16:00 crc kubenswrapper[4632]: I0313 10:16:00.468376 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556616-8xbbs"
Mar 13 10:16:00 crc kubenswrapper[4632]: I0313 10:16:00.685610 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556616-8xbbs"]
Mar 13 10:16:01 crc kubenswrapper[4632]: I0313 10:16:01.221805 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556616-8xbbs" event={"ID":"c21d462b-89d1-4844-9bfc-3f0cdf7727e9","Type":"ContainerStarted","Data":"43de2e95e7518fc84e89cd9a199fc2193c16fb5445fb6d9085f1feeba22ffbc3"}
Mar 13 10:16:02 crc kubenswrapper[4632]: I0313 10:16:02.229320 4632 generic.go:334] "Generic (PLEG): container finished" podID="c21d462b-89d1-4844-9bfc-3f0cdf7727e9" containerID="6a34c241348123944aa499915ed71c016789c868e3e563c2a1cb71763ed56ad8" exitCode=0
Mar 13 10:16:02 crc kubenswrapper[4632]: I0313 10:16:02.229364 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556616-8xbbs" event={"ID":"c21d462b-89d1-4844-9bfc-3f0cdf7727e9","Type":"ContainerDied","Data":"6a34c241348123944aa499915ed71c016789c868e3e563c2a1cb71763ed56ad8"}
Mar 13 10:16:03 crc kubenswrapper[4632]: I0313 10:16:03.436206 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556616-8xbbs"
Mar 13 10:16:03 crc kubenswrapper[4632]: I0313 10:16:03.568117 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72j5z\" (UniqueName: \"kubernetes.io/projected/c21d462b-89d1-4844-9bfc-3f0cdf7727e9-kube-api-access-72j5z\") pod \"c21d462b-89d1-4844-9bfc-3f0cdf7727e9\" (UID: \"c21d462b-89d1-4844-9bfc-3f0cdf7727e9\") "
Mar 13 10:16:03 crc kubenswrapper[4632]: I0313 10:16:03.575265 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21d462b-89d1-4844-9bfc-3f0cdf7727e9-kube-api-access-72j5z" (OuterVolumeSpecName: "kube-api-access-72j5z") pod "c21d462b-89d1-4844-9bfc-3f0cdf7727e9" (UID: "c21d462b-89d1-4844-9bfc-3f0cdf7727e9"). InnerVolumeSpecName "kube-api-access-72j5z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:16:03 crc kubenswrapper[4632]: I0313 10:16:03.669236 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72j5z\" (UniqueName: \"kubernetes.io/projected/c21d462b-89d1-4844-9bfc-3f0cdf7727e9-kube-api-access-72j5z\") on node \"crc\" DevicePath \"\""
Mar 13 10:16:04 crc kubenswrapper[4632]: I0313 10:16:04.241309 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556616-8xbbs" event={"ID":"c21d462b-89d1-4844-9bfc-3f0cdf7727e9","Type":"ContainerDied","Data":"43de2e95e7518fc84e89cd9a199fc2193c16fb5445fb6d9085f1feeba22ffbc3"}
Mar 13 10:16:04 crc kubenswrapper[4632]: I0313 10:16:04.241361 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556616-8xbbs"
Mar 13 10:16:04 crc kubenswrapper[4632]: I0313 10:16:04.241360 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43de2e95e7518fc84e89cd9a199fc2193c16fb5445fb6d9085f1feeba22ffbc3"
Mar 13 10:16:04 crc kubenswrapper[4632]: I0313 10:16:04.498647 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556610-sg5bx"]
Mar 13 10:16:04 crc kubenswrapper[4632]: I0313 10:16:04.502072 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556610-sg5bx"]
Mar 13 10:16:06 crc kubenswrapper[4632]: I0313 10:16:06.051526 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="795727b7-7a2e-4e97-8707-aecf893fd332" path="/var/lib/kubelet/pods/795727b7-7a2e-4e97-8707-aecf893fd332/volumes"
Mar 13 10:16:56 crc kubenswrapper[4632]: I0313 10:16:56.703586 4632 scope.go:117] "RemoveContainer" containerID="2858228a654d1c5c1b9a9a04d00ea882bfe929e6c810389040bc3c0ba67d7a46"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.424047 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xg2df"]
Mar 13 10:17:40 crc kubenswrapper[4632]: E0313 10:17:40.425058 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21d462b-89d1-4844-9bfc-3f0cdf7727e9" containerName="oc"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.425084 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21d462b-89d1-4844-9bfc-3f0cdf7727e9" containerName="oc"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.425252 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21d462b-89d1-4844-9bfc-3f0cdf7727e9" containerName="oc"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.425743 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg2df"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.434661 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.439017 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.442777 4632 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-l29dz"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.461514 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.461598 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.469676 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xg2df"]
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.475466 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-kh4n9"]
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.476323 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-kh4n9"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.495384 4632 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-vbrx7"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.534051 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-kh4n9"]
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.545741 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-tjkbb"]
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.546649 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.554500 4632 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-4ndzq"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.573224 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-tjkbb"]
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.598568 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlwxc\" (UniqueName: \"kubernetes.io/projected/43729a96-008f-4af6-ba0d-d52f2f179c0b-kube-api-access-dlwxc\") pod \"cert-manager-858654f9db-kh4n9\" (UID: \"43729a96-008f-4af6-ba0d-d52f2f179c0b\") " pod="cert-manager/cert-manager-858654f9db-kh4n9"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.598841 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvlcv\" (UniqueName: \"kubernetes.io/projected/348f2814-4e97-4ec5-bcbb-35a868955687-kube-api-access-vvlcv\") pod \"cert-manager-cainjector-cf98fcc89-xg2df\" (UID: \"348f2814-4e97-4ec5-bcbb-35a868955687\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg2df"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.699876 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvlcv\" (UniqueName: \"kubernetes.io/projected/348f2814-4e97-4ec5-bcbb-35a868955687-kube-api-access-vvlcv\") pod \"cert-manager-cainjector-cf98fcc89-xg2df\" (UID: \"348f2814-4e97-4ec5-bcbb-35a868955687\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg2df"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.700176 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldz2c\" (UniqueName: \"kubernetes.io/projected/a0d52d98-fe87-4bc8-890e-5c5efb1f30d6-kube-api-access-ldz2c\") pod \"cert-manager-webhook-687f57d79b-tjkbb\" (UID: \"a0d52d98-fe87-4bc8-890e-5c5efb1f30d6\") " pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.700338 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlwxc\" (UniqueName: \"kubernetes.io/projected/43729a96-008f-4af6-ba0d-d52f2f179c0b-kube-api-access-dlwxc\") pod \"cert-manager-858654f9db-kh4n9\" (UID: \"43729a96-008f-4af6-ba0d-d52f2f179c0b\") " pod="cert-manager/cert-manager-858654f9db-kh4n9"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.721638 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvlcv\" (UniqueName: \"kubernetes.io/projected/348f2814-4e97-4ec5-bcbb-35a868955687-kube-api-access-vvlcv\") pod \"cert-manager-cainjector-cf98fcc89-xg2df\" (UID: \"348f2814-4e97-4ec5-bcbb-35a868955687\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg2df"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.721861 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlwxc\" (UniqueName: \"kubernetes.io/projected/43729a96-008f-4af6-ba0d-d52f2f179c0b-kube-api-access-dlwxc\") pod \"cert-manager-858654f9db-kh4n9\" (UID: \"43729a96-008f-4af6-ba0d-d52f2f179c0b\") " pod="cert-manager/cert-manager-858654f9db-kh4n9"
Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.756635 4632 util.go:30] "No sandbox for pod can be
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg2df" Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.792190 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-kh4n9" Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.801298 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldz2c\" (UniqueName: \"kubernetes.io/projected/a0d52d98-fe87-4bc8-890e-5c5efb1f30d6-kube-api-access-ldz2c\") pod \"cert-manager-webhook-687f57d79b-tjkbb\" (UID: \"a0d52d98-fe87-4bc8-890e-5c5efb1f30d6\") " pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.819916 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldz2c\" (UniqueName: \"kubernetes.io/projected/a0d52d98-fe87-4bc8-890e-5c5efb1f30d6-kube-api-access-ldz2c\") pod \"cert-manager-webhook-687f57d79b-tjkbb\" (UID: \"a0d52d98-fe87-4bc8-890e-5c5efb1f30d6\") " pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" Mar 13 10:17:40 crc kubenswrapper[4632]: I0313 10:17:40.883240 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" Mar 13 10:17:41 crc kubenswrapper[4632]: I0313 10:17:41.044516 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xg2df"] Mar 13 10:17:41 crc kubenswrapper[4632]: I0313 10:17:41.131417 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-kh4n9"] Mar 13 10:17:41 crc kubenswrapper[4632]: I0313 10:17:41.168486 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-tjkbb"] Mar 13 10:17:41 crc kubenswrapper[4632]: I0313 10:17:41.985569 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" event={"ID":"a0d52d98-fe87-4bc8-890e-5c5efb1f30d6","Type":"ContainerStarted","Data":"82b21e36ecdc4d34750785c32c17a383e36d23b315fdc56d3caf042751d9a95f"} Mar 13 10:17:41 crc kubenswrapper[4632]: I0313 10:17:41.986874 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-kh4n9" event={"ID":"43729a96-008f-4af6-ba0d-d52f2f179c0b","Type":"ContainerStarted","Data":"a0c0f987f5c2a930ae5107733f0d58d6f5e01e7b219af529245d17a68a3343f0"} Mar 13 10:17:41 crc kubenswrapper[4632]: I0313 10:17:41.987668 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg2df" event={"ID":"348f2814-4e97-4ec5-bcbb-35a868955687","Type":"ContainerStarted","Data":"6a7153edc397d786cbf41e71a3296cd574a4cfc60a16e139cb563a8ccd14fc5c"} Mar 13 10:17:46 crc kubenswrapper[4632]: I0313 10:17:46.014593 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-kh4n9" event={"ID":"43729a96-008f-4af6-ba0d-d52f2f179c0b","Type":"ContainerStarted","Data":"8229d816898988c328ae2fb2bab8f7b34737337cfe0afd10b5293704807d3da1"} Mar 13 10:17:46 crc kubenswrapper[4632]: I0313 10:17:46.017739 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg2df" event={"ID":"348f2814-4e97-4ec5-bcbb-35a868955687","Type":"ContainerStarted","Data":"6665295b03f0a6068d3b75c34593a0f084b92ac8a39e54bf31273838d5112efd"} Mar 13 10:17:46 crc kubenswrapper[4632]: I0313 10:17:46.018722 4632 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" event={"ID":"a0d52d98-fe87-4bc8-890e-5c5efb1f30d6","Type":"ContainerStarted","Data":"f1eb968bf0483e67b1bec8db44be891387eb9d4776a59eb6ccb079e224198d23"} Mar 13 10:17:46 crc kubenswrapper[4632]: I0313 10:17:46.019125 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" Mar 13 10:17:46 crc kubenswrapper[4632]: I0313 10:17:46.046549 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-kh4n9" podStartSLOduration=2.187180402 podStartE2EDuration="6.046529157s" podCreationTimestamp="2026-03-13 10:17:40 +0000 UTC" firstStartedPulling="2026-03-13 10:17:41.134428136 +0000 UTC m=+835.156958269" lastFinishedPulling="2026-03-13 10:17:44.993776891 +0000 UTC m=+839.016307024" observedRunningTime="2026-03-13 10:17:46.044114435 +0000 UTC m=+840.066644568" watchObservedRunningTime="2026-03-13 10:17:46.046529157 +0000 UTC m=+840.069059300" Mar 13 10:17:46 crc kubenswrapper[4632]: I0313 10:17:46.063078 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" podStartSLOduration=2.251709628 podStartE2EDuration="6.063057779s" podCreationTimestamp="2026-03-13 10:17:40 +0000 UTC" firstStartedPulling="2026-03-13 10:17:41.184753269 +0000 UTC m=+835.207283402" lastFinishedPulling="2026-03-13 10:17:44.99610142 +0000 UTC m=+839.018631553" observedRunningTime="2026-03-13 10:17:46.061563141 +0000 UTC m=+840.084093274" watchObservedRunningTime="2026-03-13 10:17:46.063057779 +0000 UTC m=+840.085587912" Mar 13 10:17:46 crc kubenswrapper[4632]: I0313 10:17:46.085800 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg2df" podStartSLOduration=2.160168593 podStartE2EDuration="6.085781709s" podCreationTimestamp="2026-03-13 10:17:40 +0000 UTC" firstStartedPulling="2026-03-13 10:17:41.062564882 +0000 UTC m=+835.085095025" lastFinishedPulling="2026-03-13 10:17:44.988177988 +0000 UTC m=+839.010708141" observedRunningTime="2026-03-13 10:17:46.081456258 +0000 UTC m=+840.103986391" watchObservedRunningTime="2026-03-13 10:17:46.085781709 +0000 UTC m=+840.108311862" Mar 13 10:17:49 crc kubenswrapper[4632]: I0313 10:17:49.260443 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qb725"] Mar 13 10:17:49 crc kubenswrapper[4632]: I0313 10:17:49.262534 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovn-controller" containerID="cri-o://7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705" gracePeriod=30 Mar 13 10:17:49 crc kubenswrapper[4632]: I0313 10:17:49.262647 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="sbdb" containerID="cri-o://e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d" gracePeriod=30 Mar 13 10:17:49 crc kubenswrapper[4632]: I0313 10:17:49.262570 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="nbdb" containerID="cri-o://af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f" 
gracePeriod=30 Mar 13 10:17:49 crc kubenswrapper[4632]: I0313 10:17:49.262703 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5" gracePeriod=30 Mar 13 10:17:49 crc kubenswrapper[4632]: I0313 10:17:49.262745 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="northd" containerID="cri-o://1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da" gracePeriod=30 Mar 13 10:17:49 crc kubenswrapper[4632]: I0313 10:17:49.262805 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovn-acl-logging" containerID="cri-o://32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719" gracePeriod=30 Mar 13 10:17:49 crc kubenswrapper[4632]: I0313 10:17:49.262821 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="kube-rbac-proxy-node" containerID="cri-o://cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b" gracePeriod=30 Mar 13 10:17:49 crc kubenswrapper[4632]: I0313 10:17:49.316107 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" containerID="cri-o://166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776" gracePeriod=30 Mar 13 10:17:49 crc kubenswrapper[4632]: I0313 10:17:49.989098 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovnkube-controller/3.log" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.001616 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovn-acl-logging/0.log" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.002423 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovn-controller/0.log" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.003211 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045074 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-cni-netd\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045124 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-kubelet\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045140 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-run-ovn-kubernetes\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045172 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045190 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-openvswitch\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045206 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-etc-openvswitch\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045224 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-run-netns\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045241 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-cni-bin\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045257 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-slash\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045283 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-var-lib-openvswitch\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045309 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-ovnkube-config\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045323 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-log-socket\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045353 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-node-log\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045372 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b40c6b3-0061-4224-82d5-3ccf67998722-ovn-node-metrics-cert\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045386 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-systemd-units\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045401 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-env-overrides\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045418 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-ovnkube-script-lib\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045436 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj6cl\" (UniqueName: \"kubernetes.io/projected/3b40c6b3-0061-4224-82d5-3ccf67998722-kube-api-access-dj6cl\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.045464 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-systemd\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.046286 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-slash" (OuterVolumeSpecName: "host-slash") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.046326 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.046355 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.046374 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.046395 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.046419 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.046436 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.046452 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.046468 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.047248 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-ovn\") pod \"3b40c6b3-0061-4224-82d5-3ccf67998722\" (UID: \"3b40c6b3-0061-4224-82d5-3ccf67998722\") " Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.047415 4632 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.047428 4632 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.047437 4632 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.047447 4632 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.047456 4632 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.047465 4632 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.047474 4632 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.047482 4632 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.047490 4632 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-host-slash\") on node \"crc\" DevicePath \"\"" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.047515 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-ovn" (OuterVolumeSpecName: "run-ovn") pod 
"3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.047536 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.047960 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.048287 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.049398 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.049540 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.050442 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-log-socket" (OuterVolumeSpecName: "log-socket") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.050539 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-node-log" (OuterVolumeSpecName: "node-log") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.066390 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b40c6b3-0061-4224-82d5-3ccf67998722-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.066673 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b40c6b3-0061-4224-82d5-3ccf67998722-kube-api-access-dj6cl" (OuterVolumeSpecName: "kube-api-access-dj6cl") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "kube-api-access-dj6cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.077256 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovnkube-controller/3.log" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.078852 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "3b40c6b3-0061-4224-82d5-3ccf67998722" (UID: "3b40c6b3-0061-4224-82d5-3ccf67998722"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.081712 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-khrch"] Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.082008 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="sbdb" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082027 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="sbdb" Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.082040 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="kubecfg-setup" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082047 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="kubecfg-setup" Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.082055 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovn-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082062 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovn-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.082071 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovn-acl-logging" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082079 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovn-acl-logging" Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.082093 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" 
containerName="nbdb" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082100 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="nbdb" Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.082111 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="kube-rbac-proxy-ovn-metrics" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082117 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="kube-rbac-proxy-ovn-metrics" Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.082128 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082135 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.082143 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="kube-rbac-proxy-node" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082179 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="kube-rbac-proxy-node" Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.082207 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="northd" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082213 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="northd" Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.082222 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082229 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.082238 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082246 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.082258 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082267 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082326 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovn-acl-logging/0.log" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082379 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082396 4632 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="nbdb" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082407 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovn-acl-logging" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082464 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082481 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="kube-rbac-proxy-ovn-metrics" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082493 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082503 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="kube-rbac-proxy-node" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082514 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovn-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082524 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082535 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="sbdb" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082543 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="northd" Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.082661 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082671 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.082790 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerName="ovnkube-controller" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.083311 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qb725_3b40c6b3-0061-4224-82d5-3ccf67998722/ovn-controller/0.log" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084197 4632 generic.go:334] "Generic (PLEG): container finished" podID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerID="166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776" exitCode=0 Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084225 4632 generic.go:334] "Generic (PLEG): container finished" podID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerID="e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d" exitCode=0 Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084234 4632 generic.go:334] "Generic (PLEG): container finished" podID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerID="af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f" exitCode=0 Mar 13 
10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084245 4632 generic.go:334] "Generic (PLEG): container finished" podID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerID="1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da" exitCode=0 Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084253 4632 generic.go:334] "Generic (PLEG): container finished" podID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerID="a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5" exitCode=0 Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084259 4632 generic.go:334] "Generic (PLEG): container finished" podID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerID="cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b" exitCode=0 Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084266 4632 generic.go:334] "Generic (PLEG): container finished" podID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerID="32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719" exitCode=143 Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084273 4632 generic.go:334] "Generic (PLEG): container finished" podID="3b40c6b3-0061-4224-82d5-3ccf67998722" containerID="7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705" exitCode=143 Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084412 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084702 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerDied","Data":"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084758 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerDied","Data":"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084772 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerDied","Data":"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084783 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerDied","Data":"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084797 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerDied","Data":"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084811 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerDied","Data":"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084824 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084837 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084845 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084853 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084860 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084868 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084874 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084880 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084885 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084894 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerDied","Data":"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084903 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084910 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084915 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084921 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084926 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084932 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084971 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084980 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084987 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.084993 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085002 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerDied","Data":"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085013 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085020 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085026 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085032 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085039 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085045 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085052 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085059 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085065 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085072 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085082 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qb725" event={"ID":"3b40c6b3-0061-4224-82d5-3ccf67998722","Type":"ContainerDied","Data":"5f99589c0e329dc2bea211f1582fe2ff509c48ed7460521bac851a5b63796f30"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085093 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085102 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085108 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085113 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085119 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085125 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085131 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085138 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085144 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085150 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085169 4632 scope.go:117] "RemoveContainer" 
containerID="166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.085414 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.089038 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.092476 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.092853 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.098901 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gqf22_4ec8e301-3037-4de0-94d2-32c49709660e/kube-multus/2.log" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.102517 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gqf22_4ec8e301-3037-4de0-94d2-32c49709660e/kube-multus/1.log" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.102588 4632 generic.go:334] "Generic (PLEG): container finished" podID="4ec8e301-3037-4de0-94d2-32c49709660e" containerID="5fd2699ddbdedbd54069c44af8e38bc058b347d99af772939ae6ec1d10220723" exitCode=2 Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.102644 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gqf22" event={"ID":"4ec8e301-3037-4de0-94d2-32c49709660e","Type":"ContainerDied","Data":"5fd2699ddbdedbd54069c44af8e38bc058b347d99af772939ae6ec1d10220723"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.102694 4632 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e48bcc5861bda7a15e45c892fa67ba73299d99e896f36f2cb68274a659ec5d34"} Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.103544 4632 scope.go:117] "RemoveContainer" containerID="5fd2699ddbdedbd54069c44af8e38bc058b347d99af772939ae6ec1d10220723" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.131102 4632 scope.go:117] "RemoveContainer" containerID="8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.150232 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b6e936db-ec1c-447a-894d-49bd7c74c315-ovnkube-script-lib\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.150643 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-slash\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.150667 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhfsk\" (UniqueName: \"kubernetes.io/projected/b6e936db-ec1c-447a-894d-49bd7c74c315-kube-api-access-jhfsk\") pod \"ovnkube-node-khrch\" (UID: 
\"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.150699 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b6e936db-ec1c-447a-894d-49bd7c74c315-ovnkube-config\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.150745 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b6e936db-ec1c-447a-894d-49bd7c74c315-env-overrides\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.150766 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-run-netns\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.150794 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-run-ovn\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.150820 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-run-ovn-kubernetes\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.150840 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-log-socket\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.150878 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-node-log\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.150899 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-cni-bin\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.150933 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-etc-openvswitch\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.150996 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b6e936db-ec1c-447a-894d-49bd7c74c315-ovn-node-metrics-cert\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151022 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-kubelet\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151057 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-var-lib-openvswitch\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151080 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-cni-netd\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151105 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-run-openvswitch\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151127 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151153 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-systemd-units\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151173 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-run-systemd\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151239 4632 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-node-log\") on node \"crc\" DevicePath \"\""
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151257 4632 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b40c6b3-0061-4224-82d5-3ccf67998722-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151270 4632 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-systemd-units\") on node \"crc\" DevicePath \"\""
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151280 4632 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-env-overrides\") on node \"crc\" DevicePath \"\""
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151291 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj6cl\" (UniqueName: \"kubernetes.io/projected/3b40c6b3-0061-4224-82d5-3ccf67998722-kube-api-access-dj6cl\") on node \"crc\" DevicePath \"\""
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151301 4632 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151311 4632 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-systemd\") on node \"crc\" DevicePath \"\""
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151322 4632 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-run-ovn\") on node \"crc\" DevicePath \"\""
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151333 4632 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151344 4632 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b40c6b3-0061-4224-82d5-3ccf67998722-ovnkube-config\") on node \"crc\" DevicePath \"\""
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.151353 4632 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b40c6b3-0061-4224-82d5-3ccf67998722-log-socket\") on node \"crc\" DevicePath \"\""
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.179207 4632 scope.go:117] "RemoveContainer" containerID="e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.197365 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qb725"]
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.208157 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qb725"]
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.223100 4632 scope.go:117] "RemoveContainer" containerID="af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f"
containerID="af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252133 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b6e936db-ec1c-447a-894d-49bd7c74c315-ovnkube-script-lib\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252209 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-slash\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252238 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhfsk\" (UniqueName: \"kubernetes.io/projected/b6e936db-ec1c-447a-894d-49bd7c74c315-kube-api-access-jhfsk\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252286 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b6e936db-ec1c-447a-894d-49bd7c74c315-ovnkube-config\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252359 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-run-netns\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252384 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b6e936db-ec1c-447a-894d-49bd7c74c315-env-overrides\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252407 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-run-ovn\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252452 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-run-ovn-kubernetes\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252473 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-log-socket\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 
10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252490 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-node-log\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252521 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-cni-bin\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252536 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b6e936db-ec1c-447a-894d-49bd7c74c315-ovn-node-metrics-cert\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252550 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-etc-openvswitch\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252565 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-kubelet\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252597 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-var-lib-openvswitch\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252614 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-cni-netd\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252632 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-run-openvswitch\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252669 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 
10:17:50.252688 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-run-systemd\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252705 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-systemd-units\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.252798 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-systemd-units\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.253903 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b6e936db-ec1c-447a-894d-49bd7c74c315-ovnkube-script-lib\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.253978 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-slash\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.254215 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-cni-bin\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.254976 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b6e936db-ec1c-447a-894d-49bd7c74c315-ovnkube-config\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.255030 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-run-netns\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.255393 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b6e936db-ec1c-447a-894d-49bd7c74c315-env-overrides\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.255445 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-run-ovn\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.255476 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-run-ovn-kubernetes\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.255511 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-log-socket\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.255545 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-node-log\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.255577 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-cni-netd\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.255606 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-etc-openvswitch\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.255638 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-kubelet\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.255668 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-var-lib-openvswitch\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.255699 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.255731 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-run-openvswitch\") pod \"ovnkube-node-khrch\" (UID: 
\"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.255760 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b6e936db-ec1c-447a-894d-49bd7c74c315-run-systemd\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.257885 4632 scope.go:117] "RemoveContainer" containerID="1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.260462 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b6e936db-ec1c-447a-894d-49bd7c74c315-ovn-node-metrics-cert\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.277483 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhfsk\" (UniqueName: \"kubernetes.io/projected/b6e936db-ec1c-447a-894d-49bd7c74c315-kube-api-access-jhfsk\") pod \"ovnkube-node-khrch\" (UID: \"b6e936db-ec1c-447a-894d-49bd7c74c315\") " pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.292135 4632 scope.go:117] "RemoveContainer" containerID="a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.335070 4632 scope.go:117] "RemoveContainer" containerID="cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.362342 4632 scope.go:117] "RemoveContainer" containerID="32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.383608 4632 scope.go:117] "RemoveContainer" containerID="7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.412379 4632 scope.go:117] "RemoveContainer" containerID="fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.432612 4632 util.go:30] "No sandbox for pod can be found. 
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.444103 4632 scope.go:117] "RemoveContainer" containerID="166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776"
Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.445377 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776\": container with ID starting with 166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776 not found: ID does not exist" containerID="166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.445460 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776"} err="failed to get container status \"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776\": rpc error: code = NotFound desc = could not find container \"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776\": container with ID starting with 166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776 not found: ID does not exist"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.445527 4632 scope.go:117] "RemoveContainer" containerID="8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b"
Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.446852 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b\": container with ID starting with 8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b not found: ID does not exist" containerID="8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.446927 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b"} err="failed to get container status \"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b\": rpc error: code = NotFound desc = could not find container \"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b\": container with ID starting with 8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b not found: ID does not exist"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.446974 4632 scope.go:117] "RemoveContainer" containerID="e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d"
Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.447551 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\": container with ID starting with e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d not found: ID does not exist" containerID="e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.447611 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d"} err="failed to get container status \"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\": rpc error: code = NotFound desc = could not find container \"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\": container with ID starting with e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d not found: ID does not exist"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.447645 4632 scope.go:117] "RemoveContainer" containerID="af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f"
Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.448120 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\": container with ID starting with af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f not found: ID does not exist" containerID="af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.448173 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f"} err="failed to get container status \"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\": rpc error: code = NotFound desc = could not find container \"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\": container with ID starting with af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f not found: ID does not exist"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.449430 4632 scope.go:117] "RemoveContainer" containerID="1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da"
Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.449836 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\": container with ID starting with 1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da not found: ID does not exist" containerID="1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.449869 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da"} err="failed to get container status \"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\": rpc error: code = NotFound desc = could not find container \"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\": container with ID starting with 1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da not found: ID does not exist"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.449891 4632 scope.go:117] "RemoveContainer" containerID="a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5"
Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.450308 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\": container with ID starting with a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5 not found: ID does not exist" containerID="a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.450343 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5"} err="failed to get container status \"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\": rpc error: code = NotFound desc = could not find container \"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\": container with ID starting with a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5 not found: ID does not exist"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.450367 4632 scope.go:117] "RemoveContainer" containerID="cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b"
Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.450670 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\": container with ID starting with cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b not found: ID does not exist" containerID="cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.450718 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b"} err="failed to get container status \"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\": rpc error: code = NotFound desc = could not find container \"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\": container with ID starting with cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b not found: ID does not exist"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.450741 4632 scope.go:117] "RemoveContainer" containerID="32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719"
Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.451276 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\": container with ID starting with 32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719 not found: ID does not exist" containerID="32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.451328 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719"} err="failed to get container status \"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\": rpc error: code = NotFound desc = could not find container \"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\": container with ID starting with 32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719 not found: ID does not exist"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.451349 4632 scope.go:117] "RemoveContainer" containerID="7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705"
Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.452280 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\": container with ID starting with 7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705 not found: ID does not exist" containerID="7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.452318 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705"} err="failed to get container status \"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\": rpc error: code = NotFound desc = could not find container \"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\": container with ID starting with 7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705 not found: ID does not exist"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.452344 4632 scope.go:117] "RemoveContainer" containerID="fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50"
Mar 13 10:17:50 crc kubenswrapper[4632]: E0313 10:17:50.458900 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\": container with ID starting with fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50 not found: ID does not exist" containerID="fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.459003 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50"} err="failed to get container status \"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\": rpc error: code = NotFound desc = could not find container \"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\": container with ID starting with fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50 not found: ID does not exist"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.459044 4632 scope.go:117] "RemoveContainer" containerID="166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.460039 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776"} err="failed to get container status \"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776\": rpc error: code = NotFound desc = could not find container \"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776\": container with ID starting with 166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776 not found: ID does not exist"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.460134 4632 scope.go:117] "RemoveContainer" containerID="8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.461141 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b"} err="failed to get container status \"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b\": rpc error: code = NotFound desc = could not find container \"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b\": container with ID starting with 8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b not found: ID does not exist"
Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.461166 4632 scope.go:117] "RemoveContainer" containerID="e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d"
containerID="e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.461502 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d"} err="failed to get container status \"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\": rpc error: code = NotFound desc = could not find container \"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\": container with ID starting with e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.461547 4632 scope.go:117] "RemoveContainer" containerID="af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.462357 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f"} err="failed to get container status \"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\": rpc error: code = NotFound desc = could not find container \"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\": container with ID starting with af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.462383 4632 scope.go:117] "RemoveContainer" containerID="1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.462856 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da"} err="failed to get container status \"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\": rpc error: code = NotFound desc = could not find container \"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\": container with ID starting with 1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.462982 4632 scope.go:117] "RemoveContainer" containerID="a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.464153 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5"} err="failed to get container status \"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\": rpc error: code = NotFound desc = could not find container \"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\": container with ID starting with a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5 not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.464267 4632 scope.go:117] "RemoveContainer" containerID="cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.464706 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b"} err="failed to get container status \"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\": rpc error: code = NotFound desc = could not find 
container \"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\": container with ID starting with cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.464754 4632 scope.go:117] "RemoveContainer" containerID="32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.465276 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719"} err="failed to get container status \"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\": rpc error: code = NotFound desc = could not find container \"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\": container with ID starting with 32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719 not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.465353 4632 scope.go:117] "RemoveContainer" containerID="7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.466167 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705"} err="failed to get container status \"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\": rpc error: code = NotFound desc = could not find container \"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\": container with ID starting with 7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705 not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.466207 4632 scope.go:117] "RemoveContainer" containerID="fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.466807 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50"} err="failed to get container status \"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\": rpc error: code = NotFound desc = could not find container \"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\": container with ID starting with fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50 not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.466860 4632 scope.go:117] "RemoveContainer" containerID="166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.467458 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776"} err="failed to get container status \"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776\": rpc error: code = NotFound desc = could not find container \"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776\": container with ID starting with 166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776 not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.467490 4632 scope.go:117] "RemoveContainer" containerID="8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.469885 4632 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b"} err="failed to get container status \"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b\": rpc error: code = NotFound desc = could not find container \"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b\": container with ID starting with 8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.469911 4632 scope.go:117] "RemoveContainer" containerID="e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.470287 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d"} err="failed to get container status \"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\": rpc error: code = NotFound desc = could not find container \"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\": container with ID starting with e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.470311 4632 scope.go:117] "RemoveContainer" containerID="af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.470637 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f"} err="failed to get container status \"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\": rpc error: code = NotFound desc = could not find container \"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\": container with ID starting with af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.470657 4632 scope.go:117] "RemoveContainer" containerID="1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.470880 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da"} err="failed to get container status \"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\": rpc error: code = NotFound desc = could not find container \"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\": container with ID starting with 1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.470908 4632 scope.go:117] "RemoveContainer" containerID="a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.471219 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5"} err="failed to get container status \"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\": rpc error: code = NotFound desc = could not find container \"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\": container with ID starting with 
a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5 not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.471246 4632 scope.go:117] "RemoveContainer" containerID="cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.471597 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b"} err="failed to get container status \"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\": rpc error: code = NotFound desc = could not find container \"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\": container with ID starting with cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.471628 4632 scope.go:117] "RemoveContainer" containerID="32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.472007 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719"} err="failed to get container status \"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\": rpc error: code = NotFound desc = could not find container \"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\": container with ID starting with 32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719 not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.472031 4632 scope.go:117] "RemoveContainer" containerID="7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.472648 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705"} err="failed to get container status \"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\": rpc error: code = NotFound desc = could not find container \"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\": container with ID starting with 7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705 not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.472689 4632 scope.go:117] "RemoveContainer" containerID="fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.473095 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50"} err="failed to get container status \"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\": rpc error: code = NotFound desc = could not find container \"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\": container with ID starting with fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50 not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.473116 4632 scope.go:117] "RemoveContainer" containerID="166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.473862 4632 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776"} err="failed to get container status \"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776\": rpc error: code = NotFound desc = could not find container \"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776\": container with ID starting with 166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776 not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.473892 4632 scope.go:117] "RemoveContainer" containerID="8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.474514 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b"} err="failed to get container status \"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b\": rpc error: code = NotFound desc = could not find container \"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b\": container with ID starting with 8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.474594 4632 scope.go:117] "RemoveContainer" containerID="e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.475152 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d"} err="failed to get container status \"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\": rpc error: code = NotFound desc = could not find container \"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d\": container with ID starting with e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.475219 4632 scope.go:117] "RemoveContainer" containerID="af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.476717 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f"} err="failed to get container status \"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\": rpc error: code = NotFound desc = could not find container \"af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f\": container with ID starting with af0c4e2f04409c8ddc0f9bc84c72dc88b6475220d060124a714ad4ad78e6101f not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.476747 4632 scope.go:117] "RemoveContainer" containerID="1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da" Mar 13 10:17:50 crc kubenswrapper[4632]: W0313 10:17:50.477304 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6e936db_ec1c_447a_894d_49bd7c74c315.slice/crio-aba0854f8bfee4eec9401b0ece87c9abbe0d79c52c67dba182c74b2eb059ffdd WatchSource:0}: Error finding container aba0854f8bfee4eec9401b0ece87c9abbe0d79c52c67dba182c74b2eb059ffdd: Status 404 returned error can't find the container with id aba0854f8bfee4eec9401b0ece87c9abbe0d79c52c67dba182c74b2eb059ffdd Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 
10:17:50.477872 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da"} err="failed to get container status \"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\": rpc error: code = NotFound desc = could not find container \"1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da\": container with ID starting with 1ce7c67dfef6bd36183b1d9a902c15016d24ebf1dc92b5970391aeeaa321e8da not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.477983 4632 scope.go:117] "RemoveContainer" containerID="a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.479375 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5"} err="failed to get container status \"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\": rpc error: code = NotFound desc = could not find container \"a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5\": container with ID starting with a748b574435e8ce2de0ee0e311c4bc983ea5afe25f8a4873e16e13eebb8709b5 not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.479429 4632 scope.go:117] "RemoveContainer" containerID="cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.479811 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b"} err="failed to get container status \"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\": rpc error: code = NotFound desc = could not find container \"cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b\": container with ID starting with cc7ffe60a2cb2de18612dc1db2a6002c06b442e1c0d689b556636671fd22c83b not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.479848 4632 scope.go:117] "RemoveContainer" containerID="32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.482507 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719"} err="failed to get container status \"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\": rpc error: code = NotFound desc = could not find container \"32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719\": container with ID starting with 32c437b2fe87b28335f5e5cc4f6c1921c9d7bc2f08ea8369e495cd6f4ab5b719 not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.482548 4632 scope.go:117] "RemoveContainer" containerID="7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.483012 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705"} err="failed to get container status \"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\": rpc error: code = NotFound desc = could not find container \"7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705\": container with ID starting with 
7de4dee0b604ec5005f3041256a1c8fba71b4dfb59d28a20f0577bfa987d5705 not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.483038 4632 scope.go:117] "RemoveContainer" containerID="fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.485199 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50"} err="failed to get container status \"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\": rpc error: code = NotFound desc = could not find container \"fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50\": container with ID starting with fde8ebd48ee574e63496a8cd11e135524ab5dbabff0982f33ef6aa64eb350b50 not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.485244 4632 scope.go:117] "RemoveContainer" containerID="166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.487253 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776"} err="failed to get container status \"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776\": rpc error: code = NotFound desc = could not find container \"166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776\": container with ID starting with 166459cac16ec3b496c247f07645c5ed749fa575d48ccabd0d32f2daa03d6776 not found: ID does not exist" Mar 13 10:17:50 crc kubenswrapper[4632]: I0313 10:17:50.887545 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" Mar 13 10:17:51 crc kubenswrapper[4632]: I0313 10:17:51.109344 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gqf22_4ec8e301-3037-4de0-94d2-32c49709660e/kube-multus/2.log" Mar 13 10:17:51 crc kubenswrapper[4632]: I0313 10:17:51.110028 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gqf22_4ec8e301-3037-4de0-94d2-32c49709660e/kube-multus/1.log" Mar 13 10:17:51 crc kubenswrapper[4632]: I0313 10:17:51.110178 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gqf22" event={"ID":"4ec8e301-3037-4de0-94d2-32c49709660e","Type":"ContainerStarted","Data":"d8c124819539cd51aa3ecc51a7287d0ffc182af68ef9ff0ef3cca9bb296cb657"} Mar 13 10:17:51 crc kubenswrapper[4632]: I0313 10:17:51.113022 4632 generic.go:334] "Generic (PLEG): container finished" podID="b6e936db-ec1c-447a-894d-49bd7c74c315" containerID="7b1f6fd6cd656162231b57d1ed903f8ec0b1fd203839b9b2b1e97ee1cabd2305" exitCode=0 Mar 13 10:17:51 crc kubenswrapper[4632]: I0313 10:17:51.113086 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" event={"ID":"b6e936db-ec1c-447a-894d-49bd7c74c315","Type":"ContainerDied","Data":"7b1f6fd6cd656162231b57d1ed903f8ec0b1fd203839b9b2b1e97ee1cabd2305"} Mar 13 10:17:51 crc kubenswrapper[4632]: I0313 10:17:51.113145 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" event={"ID":"b6e936db-ec1c-447a-894d-49bd7c74c315","Type":"ContainerStarted","Data":"aba0854f8bfee4eec9401b0ece87c9abbe0d79c52c67dba182c74b2eb059ffdd"} Mar 13 10:17:52 crc kubenswrapper[4632]: I0313 10:17:52.053147 4632 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="3b40c6b3-0061-4224-82d5-3ccf67998722" path="/var/lib/kubelet/pods/3b40c6b3-0061-4224-82d5-3ccf67998722/volumes" Mar 13 10:17:52 crc kubenswrapper[4632]: I0313 10:17:52.122024 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" event={"ID":"b6e936db-ec1c-447a-894d-49bd7c74c315","Type":"ContainerStarted","Data":"fd070db907a81c4129276f7963fb5d55f5374434eab025c5259df83494266b8d"} Mar 13 10:17:52 crc kubenswrapper[4632]: I0313 10:17:52.122092 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" event={"ID":"b6e936db-ec1c-447a-894d-49bd7c74c315","Type":"ContainerStarted","Data":"4c697aed6d5d0065ea2c0087f575a0767350e0241674f5e77e589fc4b744cf98"} Mar 13 10:17:52 crc kubenswrapper[4632]: I0313 10:17:52.122109 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" event={"ID":"b6e936db-ec1c-447a-894d-49bd7c74c315","Type":"ContainerStarted","Data":"89a52384fcc24d9b7edfc959adad0e5850f03abc686b20cf4fb1fc0ea6f9af72"} Mar 13 10:17:52 crc kubenswrapper[4632]: I0313 10:17:52.122123 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" event={"ID":"b6e936db-ec1c-447a-894d-49bd7c74c315","Type":"ContainerStarted","Data":"eb3431c358e931d65a798cec572556aa0c7f665d68e134af66a4f107f8282ba7"} Mar 13 10:17:52 crc kubenswrapper[4632]: I0313 10:17:52.122134 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" event={"ID":"b6e936db-ec1c-447a-894d-49bd7c74c315","Type":"ContainerStarted","Data":"79c0fd42d034938f1cc371005b0f7d56f7b78849587c385549ef8cebd5507bc2"} Mar 13 10:17:52 crc kubenswrapper[4632]: I0313 10:17:52.122144 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" event={"ID":"b6e936db-ec1c-447a-894d-49bd7c74c315","Type":"ContainerStarted","Data":"0ff1db46158b8bcf27a8f6bc8992c67300f851d3bae818c4446435bd5055d4cb"} Mar 13 10:17:54 crc kubenswrapper[4632]: I0313 10:17:54.136072 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" event={"ID":"b6e936db-ec1c-447a-894d-49bd7c74c315","Type":"ContainerStarted","Data":"aa7a10b4e72a51b406ec7be26b6bb8550c44700fdb3440d11638049a2c645726"} Mar 13 10:17:56 crc kubenswrapper[4632]: I0313 10:17:56.770160 4632 scope.go:117] "RemoveContainer" containerID="e48bcc5861bda7a15e45c892fa67ba73299d99e896f36f2cb68274a659ec5d34" Mar 13 10:17:57 crc kubenswrapper[4632]: I0313 10:17:57.156049 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" event={"ID":"b6e936db-ec1c-447a-894d-49bd7c74c315","Type":"ContainerStarted","Data":"c2db77d29f859566314a4b1b0340d322b3adfa73ec2e42760da934777cb7a840"} Mar 13 10:17:57 crc kubenswrapper[4632]: I0313 10:17:57.157418 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:57 crc kubenswrapper[4632]: I0313 10:17:57.157448 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:57 crc kubenswrapper[4632]: I0313 10:17:57.157542 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:57 crc kubenswrapper[4632]: I0313 10:17:57.162146 4632 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-gqf22_4ec8e301-3037-4de0-94d2-32c49709660e/kube-multus/2.log" Mar 13 10:17:57 crc kubenswrapper[4632]: I0313 10:17:57.191477 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:17:57 crc kubenswrapper[4632]: I0313 10:17:57.196549 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" podStartSLOduration=7.196532192 podStartE2EDuration="7.196532192s" podCreationTimestamp="2026-03-13 10:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:17:57.192053188 +0000 UTC m=+851.214583321" watchObservedRunningTime="2026-03-13 10:17:57.196532192 +0000 UTC m=+851.219062325" Mar 13 10:17:57 crc kubenswrapper[4632]: I0313 10:17:57.214201 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:18:00 crc kubenswrapper[4632]: I0313 10:18:00.129736 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556618-ngbmk"] Mar 13 10:18:00 crc kubenswrapper[4632]: I0313 10:18:00.130911 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556618-ngbmk" Mar 13 10:18:00 crc kubenswrapper[4632]: I0313 10:18:00.133611 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:18:00 crc kubenswrapper[4632]: I0313 10:18:00.133704 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:18:00 crc kubenswrapper[4632]: I0313 10:18:00.133628 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:18:00 crc kubenswrapper[4632]: I0313 10:18:00.140786 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556618-ngbmk"] Mar 13 10:18:00 crc kubenswrapper[4632]: I0313 10:18:00.283086 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9cs7\" (UniqueName: \"kubernetes.io/projected/b93f1106-edf9-4cde-9acb-e265d8e07191-kube-api-access-v9cs7\") pod \"auto-csr-approver-29556618-ngbmk\" (UID: \"b93f1106-edf9-4cde-9acb-e265d8e07191\") " pod="openshift-infra/auto-csr-approver-29556618-ngbmk" Mar 13 10:18:00 crc kubenswrapper[4632]: I0313 10:18:00.384240 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9cs7\" (UniqueName: \"kubernetes.io/projected/b93f1106-edf9-4cde-9acb-e265d8e07191-kube-api-access-v9cs7\") pod \"auto-csr-approver-29556618-ngbmk\" (UID: \"b93f1106-edf9-4cde-9acb-e265d8e07191\") " pod="openshift-infra/auto-csr-approver-29556618-ngbmk" Mar 13 10:18:00 crc kubenswrapper[4632]: I0313 10:18:00.409664 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9cs7\" (UniqueName: \"kubernetes.io/projected/b93f1106-edf9-4cde-9acb-e265d8e07191-kube-api-access-v9cs7\") pod \"auto-csr-approver-29556618-ngbmk\" (UID: \"b93f1106-edf9-4cde-9acb-e265d8e07191\") " pod="openshift-infra/auto-csr-approver-29556618-ngbmk" Mar 13 10:18:00 crc kubenswrapper[4632]: I0313 10:18:00.447233 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556618-ngbmk"
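
An aside on the long run of paired scope.go:117 "RemoveContainer" / pod_container_deletor.go:53 "DeleteContainer returned error ... code = NotFound" entries earlier in this stretch: the kubelet is asking the runtime to delete containers whose records CRI-O has already discarded (each ID appears twice in the burst), so NotFound there means "nothing left to delete" rather than a real failure. A minimal sketch of that interpretation, assuming a gRPC-backed remover in the shape of a CRI RemoveContainer call; the `remove` stub is illustrative, not kubelet source, and the IDs are copied from the log:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// remove stands in for a CRI RemoveContainer-style RPC; here it always
// answers the way CRI-O does in the log entries above.
func remove(ctx context.Context, id string) error {
	return status.Errorf(codes.NotFound,
		"could not find container %q: ID does not exist", id)
}

func main() {
	ids := []string{
		"8f88d0230bc4958132c1c8a67c55a1e41f6e91534d69eb1a8c5061d17838098b",
		"e9f07f221ba02566b33ff83d55a32048ca4b9f412109f1d5b261e5e8576a9e2d",
	}
	for _, id := range ids {
		if err := remove(context.Background(), id); err != nil {
			// NotFound means the cleanup goal is already met; any other
			// status code would indicate a real removal failure.
			if status.Code(err) == codes.NotFound {
				fmt.Printf("container %s already gone\n", id[:13])
				continue
			}
			fmt.Println("cleanup failed:", err)
		}
	}
}
```
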
Mar 13 10:18:00 crc kubenswrapper[4632]: I0313 10:18:00.641267 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556618-ngbmk"] Mar 13 10:18:00 crc kubenswrapper[4632]: W0313 10:18:00.645126 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb93f1106_edf9_4cde_9acb_e265d8e07191.slice/crio-cfa2ab88d74bc1da1b618e81508d8824c824e518ede12962da0681c9ac4b54a8 WatchSource:0}: Error finding container cfa2ab88d74bc1da1b618e81508d8824c824e518ede12962da0681c9ac4b54a8: Status 404 returned error can't find the container with id cfa2ab88d74bc1da1b618e81508d8824c824e518ede12962da0681c9ac4b54a8 Mar 13 10:18:01 crc kubenswrapper[4632]: I0313 10:18:01.195584 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556618-ngbmk" event={"ID":"b93f1106-edf9-4cde-9acb-e265d8e07191","Type":"ContainerStarted","Data":"cfa2ab88d74bc1da1b618e81508d8824c824e518ede12962da0681c9ac4b54a8"} Mar 13 10:18:02 crc kubenswrapper[4632]: I0313 10:18:02.202289 4632 generic.go:334] "Generic (PLEG): container finished" podID="b93f1106-edf9-4cde-9acb-e265d8e07191" containerID="971cfa2ec11ce234b8c8c574daddb17b130773fddba410f62dd84c800e0f4023" exitCode=0 Mar 13 10:18:02 crc kubenswrapper[4632]: I0313 10:18:02.202399 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556618-ngbmk" event={"ID":"b93f1106-edf9-4cde-9acb-e265d8e07191","Type":"ContainerDied","Data":"971cfa2ec11ce234b8c8c574daddb17b130773fddba410f62dd84c800e0f4023"} Mar 13 10:18:03 crc kubenswrapper[4632]: I0313 10:18:03.439876 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556618-ngbmk" Mar 13 10:18:03 crc kubenswrapper[4632]: I0313 10:18:03.622321 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9cs7\" (UniqueName: \"kubernetes.io/projected/b93f1106-edf9-4cde-9acb-e265d8e07191-kube-api-access-v9cs7\") pod \"b93f1106-edf9-4cde-9acb-e265d8e07191\" (UID: \"b93f1106-edf9-4cde-9acb-e265d8e07191\") " Mar 13 10:18:03 crc kubenswrapper[4632]: I0313 10:18:03.630557 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b93f1106-edf9-4cde-9acb-e265d8e07191-kube-api-access-v9cs7" (OuterVolumeSpecName: "kube-api-access-v9cs7") pod "b93f1106-edf9-4cde-9acb-e265d8e07191" (UID: "b93f1106-edf9-4cde-9acb-e265d8e07191"). InnerVolumeSpecName "kube-api-access-v9cs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:18:03 crc kubenswrapper[4632]: I0313 10:18:03.723580 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9cs7\" (UniqueName: \"kubernetes.io/projected/b93f1106-edf9-4cde-9acb-e265d8e07191-kube-api-access-v9cs7\") on node \"crc\" DevicePath \"\"" Mar 13 10:18:04 crc kubenswrapper[4632]: I0313 10:18:04.216363 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556618-ngbmk" event={"ID":"b93f1106-edf9-4cde-9acb-e265d8e07191","Type":"ContainerDied","Data":"cfa2ab88d74bc1da1b618e81508d8824c824e518ede12962da0681c9ac4b54a8"} Mar 13 10:18:04 crc kubenswrapper[4632]: I0313 10:18:04.216430 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556618-ngbmk" Mar 13 10:18:04 crc kubenswrapper[4632]: I0313 10:18:04.216462 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfa2ab88d74bc1da1b618e81508d8824c824e518ede12962da0681c9ac4b54a8" Mar 13 10:18:04 crc kubenswrapper[4632]: I0313 10:18:04.485688 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556612-5t5ct"] Mar 13 10:18:04 crc kubenswrapper[4632]: I0313 10:18:04.489677 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556612-5t5ct"] Mar 13 10:18:06 crc kubenswrapper[4632]: I0313 10:18:06.050758 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050ee655-a62f-4991-b493-d98493762823" path="/var/lib/kubelet/pods/050ee655-a62f-4991-b493-d98493762823/volumes" Mar 13 10:18:06 crc kubenswrapper[4632]: I0313 10:18:06.518499 4632 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 10:18:10 crc kubenswrapper[4632]: I0313 10:18:10.461191 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:18:10 crc kubenswrapper[4632]: I0313 10:18:10.462146 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:18:20 crc kubenswrapper[4632]: I0313 10:18:20.458462 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" Mar 13 10:18:32 crc kubenswrapper[4632]: I0313 10:18:32.978434 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg"] Mar 13 10:18:32 crc kubenswrapper[4632]: E0313 10:18:32.979230 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93f1106-edf9-4cde-9acb-e265d8e07191" containerName="oc" Mar 13 10:18:32 crc kubenswrapper[4632]: I0313 10:18:32.979246 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93f1106-edf9-4cde-9acb-e265d8e07191" containerName="oc" Mar 13 10:18:32 crc kubenswrapper[4632]: I0313 10:18:32.979377 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93f1106-edf9-4cde-9acb-e265d8e07191" containerName="oc" Mar 13 10:18:32 crc kubenswrapper[4632]: I0313 10:18:32.980315 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" Mar 13 10:18:32 crc kubenswrapper[4632]: I0313 10:18:32.985308 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 13 10:18:32 crc kubenswrapper[4632]: I0313 10:18:32.986461 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg"] Mar 13 10:18:33 crc kubenswrapper[4632]: I0313 10:18:33.007199 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x2pj\" (UniqueName: \"kubernetes.io/projected/2e270cfe-55fc-4855-87ff-4313a0ad319c-kube-api-access-4x2pj\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg\" (UID: \"2e270cfe-55fc-4855-87ff-4313a0ad319c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" Mar 13 10:18:33 crc kubenswrapper[4632]: I0313 10:18:33.007278 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e270cfe-55fc-4855-87ff-4313a0ad319c-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg\" (UID: \"2e270cfe-55fc-4855-87ff-4313a0ad319c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" Mar 13 10:18:33 crc kubenswrapper[4632]: I0313 10:18:33.007306 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e270cfe-55fc-4855-87ff-4313a0ad319c-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg\" (UID: \"2e270cfe-55fc-4855-87ff-4313a0ad319c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" Mar 13 10:18:33 crc kubenswrapper[4632]: I0313 10:18:33.108882 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x2pj\" (UniqueName: \"kubernetes.io/projected/2e270cfe-55fc-4855-87ff-4313a0ad319c-kube-api-access-4x2pj\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg\" (UID: \"2e270cfe-55fc-4855-87ff-4313a0ad319c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" Mar 13 10:18:33 crc kubenswrapper[4632]: I0313 10:18:33.109034 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e270cfe-55fc-4855-87ff-4313a0ad319c-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg\" (UID: \"2e270cfe-55fc-4855-87ff-4313a0ad319c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" Mar 13 10:18:33 crc kubenswrapper[4632]: I0313 10:18:33.109073 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e270cfe-55fc-4855-87ff-4313a0ad319c-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg\" (UID: \"2e270cfe-55fc-4855-87ff-4313a0ad319c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" Mar 13 10:18:33 crc kubenswrapper[4632]: I0313 10:18:33.109511 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2e270cfe-55fc-4855-87ff-4313a0ad319c-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg\" (UID: \"2e270cfe-55fc-4855-87ff-4313a0ad319c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" Mar 13 10:18:33 crc kubenswrapper[4632]: I0313 10:18:33.109537 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e270cfe-55fc-4855-87ff-4313a0ad319c-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg\" (UID: \"2e270cfe-55fc-4855-87ff-4313a0ad319c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" Mar 13 10:18:33 crc kubenswrapper[4632]: I0313 10:18:33.135588 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x2pj\" (UniqueName: \"kubernetes.io/projected/2e270cfe-55fc-4855-87ff-4313a0ad319c-kube-api-access-4x2pj\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg\" (UID: \"2e270cfe-55fc-4855-87ff-4313a0ad319c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" Mar 13 10:18:33 crc kubenswrapper[4632]: I0313 10:18:33.307505 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" Mar 13 10:18:33 crc kubenswrapper[4632]: I0313 10:18:33.496996 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg"] Mar 13 10:18:33 crc kubenswrapper[4632]: I0313 10:18:33.746551 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" event={"ID":"2e270cfe-55fc-4855-87ff-4313a0ad319c","Type":"ContainerStarted","Data":"70fccb3799fbfc67ff1cb5305ca58be21b5a5c6c0871d3079f2ed0fc9ec195dc"} Mar 13 10:18:33 crc kubenswrapper[4632]: I0313 10:18:33.746873 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" event={"ID":"2e270cfe-55fc-4855-87ff-4313a0ad319c","Type":"ContainerStarted","Data":"30f776f8c54e2a7f9a372e8f33dd1f58da40deb74ed5a948205ed020d75644c7"} Mar 13 10:18:34 crc kubenswrapper[4632]: I0313 10:18:34.752752 4632 generic.go:334] "Generic (PLEG): container finished" podID="2e270cfe-55fc-4855-87ff-4313a0ad319c" containerID="70fccb3799fbfc67ff1cb5305ca58be21b5a5c6c0871d3079f2ed0fc9ec195dc" exitCode=0 Mar 13 10:18:34 crc kubenswrapper[4632]: I0313 10:18:34.752800 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" event={"ID":"2e270cfe-55fc-4855-87ff-4313a0ad319c","Type":"ContainerDied","Data":"70fccb3799fbfc67ff1cb5305ca58be21b5a5c6c0871d3079f2ed0fc9ec195dc"} Mar 13 10:18:35 crc kubenswrapper[4632]: I0313 10:18:35.340074 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bww79"] Mar 13 10:18:35 crc kubenswrapper[4632]: I0313 10:18:35.341339 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bww79" Mar 13 10:18:35 crc kubenswrapper[4632]: I0313 10:18:35.366972 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bww79"] Mar 13 10:18:35 crc kubenswrapper[4632]: I0313 10:18:35.539423 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adab5e58-1b8e-4170-b244-d45be51beccb-catalog-content\") pod \"redhat-operators-bww79\" (UID: \"adab5e58-1b8e-4170-b244-d45be51beccb\") " pod="openshift-marketplace/redhat-operators-bww79" Mar 13 10:18:35 crc kubenswrapper[4632]: I0313 10:18:35.539516 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adab5e58-1b8e-4170-b244-d45be51beccb-utilities\") pod \"redhat-operators-bww79\" (UID: \"adab5e58-1b8e-4170-b244-d45be51beccb\") " pod="openshift-marketplace/redhat-operators-bww79" Mar 13 10:18:35 crc kubenswrapper[4632]: I0313 10:18:35.540465 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqhvx\" (UniqueName: \"kubernetes.io/projected/adab5e58-1b8e-4170-b244-d45be51beccb-kube-api-access-rqhvx\") pod \"redhat-operators-bww79\" (UID: \"adab5e58-1b8e-4170-b244-d45be51beccb\") " pod="openshift-marketplace/redhat-operators-bww79" Mar 13 10:18:35 crc kubenswrapper[4632]: I0313 10:18:35.642078 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqhvx\" (UniqueName: \"kubernetes.io/projected/adab5e58-1b8e-4170-b244-d45be51beccb-kube-api-access-rqhvx\") pod \"redhat-operators-bww79\" (UID: \"adab5e58-1b8e-4170-b244-d45be51beccb\") " pod="openshift-marketplace/redhat-operators-bww79" Mar 13 10:18:35 crc kubenswrapper[4632]: I0313 10:18:35.642145 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adab5e58-1b8e-4170-b244-d45be51beccb-catalog-content\") pod \"redhat-operators-bww79\" (UID: \"adab5e58-1b8e-4170-b244-d45be51beccb\") " pod="openshift-marketplace/redhat-operators-bww79" Mar 13 10:18:35 crc kubenswrapper[4632]: I0313 10:18:35.642193 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adab5e58-1b8e-4170-b244-d45be51beccb-utilities\") pod \"redhat-operators-bww79\" (UID: \"adab5e58-1b8e-4170-b244-d45be51beccb\") " pod="openshift-marketplace/redhat-operators-bww79" Mar 13 10:18:35 crc kubenswrapper[4632]: I0313 10:18:35.642855 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adab5e58-1b8e-4170-b244-d45be51beccb-catalog-content\") pod \"redhat-operators-bww79\" (UID: \"adab5e58-1b8e-4170-b244-d45be51beccb\") " pod="openshift-marketplace/redhat-operators-bww79" Mar 13 10:18:35 crc kubenswrapper[4632]: I0313 10:18:35.643122 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adab5e58-1b8e-4170-b244-d45be51beccb-utilities\") pod \"redhat-operators-bww79\" (UID: \"adab5e58-1b8e-4170-b244-d45be51beccb\") " pod="openshift-marketplace/redhat-operators-bww79" Mar 13 10:18:35 crc kubenswrapper[4632]: I0313 10:18:35.663194 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rqhvx\" (UniqueName: \"kubernetes.io/projected/adab5e58-1b8e-4170-b244-d45be51beccb-kube-api-access-rqhvx\") pod \"redhat-operators-bww79\" (UID: \"adab5e58-1b8e-4170-b244-d45be51beccb\") " pod="openshift-marketplace/redhat-operators-bww79" Mar 13 10:18:35 crc kubenswrapper[4632]: I0313 10:18:35.959688 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bww79" Mar 13 10:18:36 crc kubenswrapper[4632]: I0313 10:18:36.218096 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bww79"] Mar 13 10:18:36 crc kubenswrapper[4632]: W0313 10:18:36.239298 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadab5e58_1b8e_4170_b244_d45be51beccb.slice/crio-0479633a9d80d98716b8676d2d46358f35e2abf71d04127168560bd5df09cbf0 WatchSource:0}: Error finding container 0479633a9d80d98716b8676d2d46358f35e2abf71d04127168560bd5df09cbf0: Status 404 returned error can't find the container with id 0479633a9d80d98716b8676d2d46358f35e2abf71d04127168560bd5df09cbf0 Mar 13 10:18:36 crc kubenswrapper[4632]: I0313 10:18:36.767159 4632 generic.go:334] "Generic (PLEG): container finished" podID="adab5e58-1b8e-4170-b244-d45be51beccb" containerID="61d923bcef83757034506bbf7a8f076d3bc9f8e9e2edb45de9372bf6e235a420" exitCode=0 Mar 13 10:18:36 crc kubenswrapper[4632]: I0313 10:18:36.767230 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bww79" event={"ID":"adab5e58-1b8e-4170-b244-d45be51beccb","Type":"ContainerDied","Data":"61d923bcef83757034506bbf7a8f076d3bc9f8e9e2edb45de9372bf6e235a420"} Mar 13 10:18:36 crc kubenswrapper[4632]: I0313 10:18:36.767257 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bww79" event={"ID":"adab5e58-1b8e-4170-b244-d45be51beccb","Type":"ContainerStarted","Data":"0479633a9d80d98716b8676d2d46358f35e2abf71d04127168560bd5df09cbf0"} Mar 13 10:18:36 crc kubenswrapper[4632]: I0313 10:18:36.771037 4632 generic.go:334] "Generic (PLEG): container finished" podID="2e270cfe-55fc-4855-87ff-4313a0ad319c" containerID="74e3de3bdd958d58cfbec4868a514778aadc959af0ac02fef0f61e787540e630" exitCode=0 Mar 13 10:18:36 crc kubenswrapper[4632]: I0313 10:18:36.771076 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" event={"ID":"2e270cfe-55fc-4855-87ff-4313a0ad319c","Type":"ContainerDied","Data":"74e3de3bdd958d58cfbec4868a514778aadc959af0ac02fef0f61e787540e630"} Mar 13 10:18:37 crc kubenswrapper[4632]: I0313 10:18:37.777984 4632 generic.go:334] "Generic (PLEG): container finished" podID="2e270cfe-55fc-4855-87ff-4313a0ad319c" containerID="5659996057e94d41164ea9a63a709e951333a865b02ec58f8d7d3cf3acc64dbc" exitCode=0 Mar 13 10:18:37 crc kubenswrapper[4632]: I0313 10:18:37.779326 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" event={"ID":"2e270cfe-55fc-4855-87ff-4313a0ad319c","Type":"ContainerDied","Data":"5659996057e94d41164ea9a63a709e951333a865b02ec58f8d7d3cf3acc64dbc"} Mar 13 10:18:37 crc kubenswrapper[4632]: I0313 10:18:37.787492 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bww79" 
event={"ID":"adab5e58-1b8e-4170-b244-d45be51beccb","Type":"ContainerStarted","Data":"69a89f03ba249c5eb9ce15b60d6967b375feee74919b0e43506625501d0e271b"} Mar 13 10:18:39 crc kubenswrapper[4632]: I0313 10:18:39.174121 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" Mar 13 10:18:39 crc kubenswrapper[4632]: I0313 10:18:39.288120 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x2pj\" (UniqueName: \"kubernetes.io/projected/2e270cfe-55fc-4855-87ff-4313a0ad319c-kube-api-access-4x2pj\") pod \"2e270cfe-55fc-4855-87ff-4313a0ad319c\" (UID: \"2e270cfe-55fc-4855-87ff-4313a0ad319c\") " Mar 13 10:18:39 crc kubenswrapper[4632]: I0313 10:18:39.288187 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e270cfe-55fc-4855-87ff-4313a0ad319c-bundle\") pod \"2e270cfe-55fc-4855-87ff-4313a0ad319c\" (UID: \"2e270cfe-55fc-4855-87ff-4313a0ad319c\") " Mar 13 10:18:39 crc kubenswrapper[4632]: I0313 10:18:39.288224 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e270cfe-55fc-4855-87ff-4313a0ad319c-util\") pod \"2e270cfe-55fc-4855-87ff-4313a0ad319c\" (UID: \"2e270cfe-55fc-4855-87ff-4313a0ad319c\") " Mar 13 10:18:39 crc kubenswrapper[4632]: I0313 10:18:39.288690 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e270cfe-55fc-4855-87ff-4313a0ad319c-bundle" (OuterVolumeSpecName: "bundle") pod "2e270cfe-55fc-4855-87ff-4313a0ad319c" (UID: "2e270cfe-55fc-4855-87ff-4313a0ad319c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:18:39 crc kubenswrapper[4632]: I0313 10:18:39.295798 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e270cfe-55fc-4855-87ff-4313a0ad319c-kube-api-access-4x2pj" (OuterVolumeSpecName: "kube-api-access-4x2pj") pod "2e270cfe-55fc-4855-87ff-4313a0ad319c" (UID: "2e270cfe-55fc-4855-87ff-4313a0ad319c"). InnerVolumeSpecName "kube-api-access-4x2pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:18:39 crc kubenswrapper[4632]: I0313 10:18:39.300548 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e270cfe-55fc-4855-87ff-4313a0ad319c-util" (OuterVolumeSpecName: "util") pod "2e270cfe-55fc-4855-87ff-4313a0ad319c" (UID: "2e270cfe-55fc-4855-87ff-4313a0ad319c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:18:39 crc kubenswrapper[4632]: I0313 10:18:39.389600 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x2pj\" (UniqueName: \"kubernetes.io/projected/2e270cfe-55fc-4855-87ff-4313a0ad319c-kube-api-access-4x2pj\") on node \"crc\" DevicePath \"\"" Mar 13 10:18:39 crc kubenswrapper[4632]: I0313 10:18:39.389639 4632 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e270cfe-55fc-4855-87ff-4313a0ad319c-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:18:39 crc kubenswrapper[4632]: I0313 10:18:39.389648 4632 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e270cfe-55fc-4855-87ff-4313a0ad319c-util\") on node \"crc\" DevicePath \"\"" Mar 13 10:18:39 crc kubenswrapper[4632]: I0313 10:18:39.819825 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" event={"ID":"2e270cfe-55fc-4855-87ff-4313a0ad319c","Type":"ContainerDied","Data":"30f776f8c54e2a7f9a372e8f33dd1f58da40deb74ed5a948205ed020d75644c7"} Mar 13 10:18:39 crc kubenswrapper[4632]: I0313 10:18:39.819846 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg" Mar 13 10:18:39 crc kubenswrapper[4632]: I0313 10:18:39.819885 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30f776f8c54e2a7f9a372e8f33dd1f58da40deb74ed5a948205ed020d75644c7" Mar 13 10:18:39 crc kubenswrapper[4632]: I0313 10:18:39.822106 4632 generic.go:334] "Generic (PLEG): container finished" podID="adab5e58-1b8e-4170-b244-d45be51beccb" containerID="69a89f03ba249c5eb9ce15b60d6967b375feee74919b0e43506625501d0e271b" exitCode=0 Mar 13 10:18:39 crc kubenswrapper[4632]: I0313 10:18:39.822175 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bww79" event={"ID":"adab5e58-1b8e-4170-b244-d45be51beccb","Type":"ContainerDied","Data":"69a89f03ba249c5eb9ce15b60d6967b375feee74919b0e43506625501d0e271b"} Mar 13 10:18:40 crc kubenswrapper[4632]: I0313 10:18:40.466204 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:18:40 crc kubenswrapper[4632]: I0313 10:18:40.466592 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:18:40 crc kubenswrapper[4632]: I0313 10:18:40.466649 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:18:40 crc kubenswrapper[4632]: I0313 10:18:40.467263 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fcd863f1a2b3af4768aa1d32979163bc846d3d472acea1e8c27ffcf3dfe0ffc"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Mar 13 10:18:40 crc kubenswrapper[4632]: I0313 10:18:40.467309 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://7fcd863f1a2b3af4768aa1d32979163bc846d3d472acea1e8c27ffcf3dfe0ffc" gracePeriod=600 Mar 13 10:18:40 crc kubenswrapper[4632]: I0313 10:18:40.832564 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="7fcd863f1a2b3af4768aa1d32979163bc846d3d472acea1e8c27ffcf3dfe0ffc" exitCode=0 Mar 13 10:18:40 crc kubenswrapper[4632]: I0313 10:18:40.832640 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"7fcd863f1a2b3af4768aa1d32979163bc846d3d472acea1e8c27ffcf3dfe0ffc"} Mar 13 10:18:40 crc kubenswrapper[4632]: I0313 10:18:40.832688 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"624a339b1e1f8b218223c2e3440b7f9925bb18567bb6def4fcf3bfc022198658"} Mar 13 10:18:40 crc kubenswrapper[4632]: I0313 10:18:40.832708 4632 scope.go:117] "RemoveContainer" containerID="313e3b067f9ea051953ab56cbddeb09cc8cceb68240f33ca492d13584077681c" Mar 13 10:18:40 crc kubenswrapper[4632]: I0313 10:18:40.837522 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bww79" event={"ID":"adab5e58-1b8e-4170-b244-d45be51beccb","Type":"ContainerStarted","Data":"7470f34a42aacbea955a79fad4dd4ed78868cab12e40fc4e1449bd061f3deb93"} Mar 13 10:18:43 crc kubenswrapper[4632]: I0313 10:18:43.451249 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bww79" podStartSLOduration=4.938159852 podStartE2EDuration="8.451229694s" podCreationTimestamp="2026-03-13 10:18:35 +0000 UTC" firstStartedPulling="2026-03-13 10:18:36.769069777 +0000 UTC m=+890.791599910" lastFinishedPulling="2026-03-13 10:18:40.282139619 +0000 UTC m=+894.304669752" observedRunningTime="2026-03-13 10:18:40.878307507 +0000 UTC m=+894.900837660" watchObservedRunningTime="2026-03-13 10:18:43.451229694 +0000 UTC m=+897.473759827" Mar 13 10:18:43 crc kubenswrapper[4632]: I0313 10:18:43.454793 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-bzmdv"] Mar 13 10:18:43 crc kubenswrapper[4632]: E0313 10:18:43.455045 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e270cfe-55fc-4855-87ff-4313a0ad319c" containerName="pull" Mar 13 10:18:43 crc kubenswrapper[4632]: I0313 10:18:43.455066 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e270cfe-55fc-4855-87ff-4313a0ad319c" containerName="pull" Mar 13 10:18:43 crc kubenswrapper[4632]: E0313 10:18:43.455092 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e270cfe-55fc-4855-87ff-4313a0ad319c" containerName="util" Mar 13 10:18:43 crc kubenswrapper[4632]: I0313 10:18:43.455103 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e270cfe-55fc-4855-87ff-4313a0ad319c" containerName="util" Mar 13 10:18:43 crc kubenswrapper[4632]: E0313 10:18:43.455114 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e270cfe-55fc-4855-87ff-4313a0ad319c" containerName="extract"
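
Two things in the entries just above are worth flagging. First, the machine-config-daemon container fails its liveness probe (connection refused on 127.0.0.1:8798) and is killed with gracePeriod=600, i.e. up to ten minutes to exit cleanly, then restarted. Second, the pod_startup_latency_tracker line for redhat-operators-bww79 can be verified by hand: podStartSLOduration is podStartE2EDuration minus the image-pull window between firstStartedPulling and lastFinishedPulling. A standard-library-only Go sketch checking that arithmetic; the layout constant is only an assumption about how to parse the logged timestamp format, and the "m=+..." monotonic suffix is dropped:

```go
package main

import (
	"fmt"
	"time"
)

// mustTime parses a timestamp copied from the log, panicking on typos.
func mustTime(layout, s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	firstPull := mustTime(layout, "2026-03-13 10:18:36.769069777 +0000 UTC")
	lastPull := mustTime(layout, "2026-03-13 10:18:40.282139619 +0000 UTC")
	e2e, err := time.ParseDuration("8.451229694s") // podStartE2EDuration
	if err != nil {
		panic(err)
	}

	pulling := lastPull.Sub(firstPull) // time spent pulling images
	slo := e2e - pulling               // startup time excluding pulls

	// Prints "3.513069842s 4.938159852s"; the second value matches the
	// logged podStartSLOduration=4.938159852.
	fmt.Println(pulling, slo)
}
```

Consistent with this reading, the earlier ovnkube-node-khrch entry has both pull timestamps at the zero time 0001-01-01 (nothing was pulled), so its podStartSLOduration equals its podStartE2EDuration at 7.196532192s.
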
Mar 13 10:18:43 crc kubenswrapper[4632]: I0313 10:18:43.455120 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e270cfe-55fc-4855-87ff-4313a0ad319c" containerName="extract" Mar 13 10:18:43 crc kubenswrapper[4632]: I0313 10:18:43.455209 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e270cfe-55fc-4855-87ff-4313a0ad319c" containerName="extract" Mar 13 10:18:43 crc kubenswrapper[4632]: I0313 10:18:43.455586 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-bzmdv" Mar 13 10:18:43 crc kubenswrapper[4632]: I0313 10:18:43.457856 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-lxf9r" Mar 13 10:18:43 crc kubenswrapper[4632]: I0313 10:18:43.458546 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 13 10:18:43 crc kubenswrapper[4632]: I0313 10:18:43.460118 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 13 10:18:43 crc kubenswrapper[4632]: I0313 10:18:43.471704 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-bzmdv"] Mar 13 10:18:43 crc kubenswrapper[4632]: I0313 10:18:43.475103 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9jgc\" (UniqueName: \"kubernetes.io/projected/3b679db2-06cc-4796-945a-5ced45b39053-kube-api-access-t9jgc\") pod \"nmstate-operator-796d4cfff4-bzmdv\" (UID: \"3b679db2-06cc-4796-945a-5ced45b39053\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-bzmdv" Mar 13 10:18:43 crc kubenswrapper[4632]: I0313 10:18:43.576400 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9jgc\" (UniqueName: \"kubernetes.io/projected/3b679db2-06cc-4796-945a-5ced45b39053-kube-api-access-t9jgc\") pod \"nmstate-operator-796d4cfff4-bzmdv\" (UID: \"3b679db2-06cc-4796-945a-5ced45b39053\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-bzmdv" Mar 13 10:18:43 crc kubenswrapper[4632]: I0313 10:18:43.615806 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9jgc\" (UniqueName: \"kubernetes.io/projected/3b679db2-06cc-4796-945a-5ced45b39053-kube-api-access-t9jgc\") pod \"nmstate-operator-796d4cfff4-bzmdv\" (UID: \"3b679db2-06cc-4796-945a-5ced45b39053\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-bzmdv" Mar 13 10:18:43 crc kubenswrapper[4632]: I0313 10:18:43.778969 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-bzmdv" Mar 13 10:18:44 crc kubenswrapper[4632]: I0313 10:18:44.306473 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-bzmdv"] Mar 13 10:18:44 crc kubenswrapper[4632]: W0313 10:18:44.312601 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b679db2_06cc_4796_945a_5ced45b39053.slice/crio-6497a9252cbcc6d30b8d952e7753224e2d9e98fa6b8ef76f6a046249c1701353 WatchSource:0}: Error finding container 6497a9252cbcc6d30b8d952e7753224e2d9e98fa6b8ef76f6a046249c1701353: Status 404 returned error can't find the container with id 6497a9252cbcc6d30b8d952e7753224e2d9e98fa6b8ef76f6a046249c1701353 Mar 13 10:18:44 crc kubenswrapper[4632]: I0313 10:18:44.875806 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-bzmdv" event={"ID":"3b679db2-06cc-4796-945a-5ced45b39053","Type":"ContainerStarted","Data":"6497a9252cbcc6d30b8d952e7753224e2d9e98fa6b8ef76f6a046249c1701353"} Mar 13 10:18:45 crc kubenswrapper[4632]: I0313 10:18:45.961046 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bww79" Mar 13 10:18:45 crc kubenswrapper[4632]: I0313 10:18:45.961095 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bww79" Mar 13 10:18:47 crc kubenswrapper[4632]: I0313 10:18:47.015321 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bww79" podUID="adab5e58-1b8e-4170-b244-d45be51beccb" containerName="registry-server" probeResult="failure" output=< Mar 13 10:18:47 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:18:47 crc kubenswrapper[4632]: > Mar 13 10:18:47 crc kubenswrapper[4632]: I0313 10:18:47.902319 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-bzmdv" event={"ID":"3b679db2-06cc-4796-945a-5ced45b39053","Type":"ContainerStarted","Data":"d94fc64c10cc29b0260c1ee26784f074d44dac3354acfc38028b8bde57e66c66"} Mar 13 10:18:47 crc kubenswrapper[4632]: I0313 10:18:47.921503 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-bzmdv" podStartSLOduration=2.182431379 podStartE2EDuration="4.921483144s" podCreationTimestamp="2026-03-13 10:18:43 +0000 UTC" firstStartedPulling="2026-03-13 10:18:44.31501176 +0000 UTC m=+898.337541893" lastFinishedPulling="2026-03-13 10:18:47.054063525 +0000 UTC m=+901.076593658" observedRunningTime="2026-03-13 10:18:47.917685327 +0000 UTC m=+901.940215480" watchObservedRunningTime="2026-03-13 10:18:47.921483144 +0000 UTC m=+901.944013277" Mar 13 10:18:52 crc kubenswrapper[4632]: I0313 10:18:52.985835 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-lnfrw"] Mar 13 10:18:52 crc kubenswrapper[4632]: I0313 10:18:52.987507 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-lnfrw" Mar 13 10:18:52 crc kubenswrapper[4632]: I0313 10:18:52.990129 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-5fccg" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.004339 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-gcngd"] Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.005239 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gcngd" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.013818 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.040821 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-mpfnk"] Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.041487 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mpfnk" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.077828 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-lnfrw"] Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.107512 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-gcngd"] Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.184038 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn"] Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.185481 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.190631 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.190932 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-st6tm" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.191623 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/33445a2b-7fa8-4198-a60a-09caeb69b8ed-dbus-socket\") pod \"nmstate-handler-mpfnk\" (UID: \"33445a2b-7fa8-4198-a60a-09caeb69b8ed\") " pod="openshift-nmstate/nmstate-handler-mpfnk" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.191656 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xctsv\" (UniqueName: \"kubernetes.io/projected/33445a2b-7fa8-4198-a60a-09caeb69b8ed-kube-api-access-xctsv\") pod \"nmstate-handler-mpfnk\" (UID: \"33445a2b-7fa8-4198-a60a-09caeb69b8ed\") " pod="openshift-nmstate/nmstate-handler-mpfnk" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.191680 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9bf11778-d854-4c97-acd1-ed4822ee5f47-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-gcngd\" (UID: \"9bf11778-d854-4c97-acd1-ed4822ee5f47\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gcngd" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.191700 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdtv6\" (UniqueName: \"kubernetes.io/projected/0c63c4bc-5c1a-4af0-b255-eb418d8a02cd-kube-api-access-fdtv6\") pod \"nmstate-metrics-9b8c8685d-lnfrw\" (UID: \"0c63c4bc-5c1a-4af0-b255-eb418d8a02cd\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-lnfrw" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.191718 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/33445a2b-7fa8-4198-a60a-09caeb69b8ed-nmstate-lock\") pod \"nmstate-handler-mpfnk\" (UID: \"33445a2b-7fa8-4198-a60a-09caeb69b8ed\") " pod="openshift-nmstate/nmstate-handler-mpfnk" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.191741 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgc9p\" (UniqueName: \"kubernetes.io/projected/9bf11778-d854-4c97-acd1-ed4822ee5f47-kube-api-access-zgc9p\") pod \"nmstate-webhook-5f558f5558-gcngd\" (UID: \"9bf11778-d854-4c97-acd1-ed4822ee5f47\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gcngd" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.191770 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/33445a2b-7fa8-4198-a60a-09caeb69b8ed-ovs-socket\") pod \"nmstate-handler-mpfnk\" (UID: \"33445a2b-7fa8-4198-a60a-09caeb69b8ed\") " pod="openshift-nmstate/nmstate-handler-mpfnk" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.197429 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn"] Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.292522 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/33445a2b-7fa8-4198-a60a-09caeb69b8ed-nmstate-lock\") pod \"nmstate-handler-mpfnk\" (UID: \"33445a2b-7fa8-4198-a60a-09caeb69b8ed\") " pod="openshift-nmstate/nmstate-handler-mpfnk"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.292982 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1ca5cae6-5549-492a-a257-745bb41d3574-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-kzrvn\" (UID: \"1ca5cae6-5549-492a-a257-745bb41d3574\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.292660 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/33445a2b-7fa8-4198-a60a-09caeb69b8ed-nmstate-lock\") pod \"nmstate-handler-mpfnk\" (UID: \"33445a2b-7fa8-4198-a60a-09caeb69b8ed\") " pod="openshift-nmstate/nmstate-handler-mpfnk"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.293203 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgc9p\" (UniqueName: \"kubernetes.io/projected/9bf11778-d854-4c97-acd1-ed4822ee5f47-kube-api-access-zgc9p\") pod \"nmstate-webhook-5f558f5558-gcngd\" (UID: \"9bf11778-d854-4c97-acd1-ed4822ee5f47\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gcngd"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.293319 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/33445a2b-7fa8-4198-a60a-09caeb69b8ed-ovs-socket\") pod \"nmstate-handler-mpfnk\" (UID: \"33445a2b-7fa8-4198-a60a-09caeb69b8ed\") " pod="openshift-nmstate/nmstate-handler-mpfnk"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.293395 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/33445a2b-7fa8-4198-a60a-09caeb69b8ed-ovs-socket\") pod \"nmstate-handler-mpfnk\" (UID: \"33445a2b-7fa8-4198-a60a-09caeb69b8ed\") " pod="openshift-nmstate/nmstate-handler-mpfnk"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.293409 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ca5cae6-5549-492a-a257-745bb41d3574-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-kzrvn\" (UID: \"1ca5cae6-5549-492a-a257-745bb41d3574\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.293713 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54nqs\" (UniqueName: \"kubernetes.io/projected/1ca5cae6-5549-492a-a257-745bb41d3574-kube-api-access-54nqs\") pod \"nmstate-console-plugin-86f58fcf4-kzrvn\" (UID: \"1ca5cae6-5549-492a-a257-745bb41d3574\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.293804 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/33445a2b-7fa8-4198-a60a-09caeb69b8ed-dbus-socket\") pod \"nmstate-handler-mpfnk\" (UID: \"33445a2b-7fa8-4198-a60a-09caeb69b8ed\") " pod="openshift-nmstate/nmstate-handler-mpfnk"
\"kubernetes.io/host-path/33445a2b-7fa8-4198-a60a-09caeb69b8ed-dbus-socket\") pod \"nmstate-handler-mpfnk\" (UID: \"33445a2b-7fa8-4198-a60a-09caeb69b8ed\") " pod="openshift-nmstate/nmstate-handler-mpfnk" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.293831 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xctsv\" (UniqueName: \"kubernetes.io/projected/33445a2b-7fa8-4198-a60a-09caeb69b8ed-kube-api-access-xctsv\") pod \"nmstate-handler-mpfnk\" (UID: \"33445a2b-7fa8-4198-a60a-09caeb69b8ed\") " pod="openshift-nmstate/nmstate-handler-mpfnk" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.293870 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9bf11778-d854-4c97-acd1-ed4822ee5f47-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-gcngd\" (UID: \"9bf11778-d854-4c97-acd1-ed4822ee5f47\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gcngd" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.293896 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdtv6\" (UniqueName: \"kubernetes.io/projected/0c63c4bc-5c1a-4af0-b255-eb418d8a02cd-kube-api-access-fdtv6\") pod \"nmstate-metrics-9b8c8685d-lnfrw\" (UID: \"0c63c4bc-5c1a-4af0-b255-eb418d8a02cd\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-lnfrw" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.294709 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/33445a2b-7fa8-4198-a60a-09caeb69b8ed-dbus-socket\") pod \"nmstate-handler-mpfnk\" (UID: \"33445a2b-7fa8-4198-a60a-09caeb69b8ed\") " pod="openshift-nmstate/nmstate-handler-mpfnk" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.310088 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9bf11778-d854-4c97-acd1-ed4822ee5f47-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-gcngd\" (UID: \"9bf11778-d854-4c97-acd1-ed4822ee5f47\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gcngd" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.324775 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgc9p\" (UniqueName: \"kubernetes.io/projected/9bf11778-d854-4c97-acd1-ed4822ee5f47-kube-api-access-zgc9p\") pod \"nmstate-webhook-5f558f5558-gcngd\" (UID: \"9bf11778-d854-4c97-acd1-ed4822ee5f47\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gcngd" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.328728 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xctsv\" (UniqueName: \"kubernetes.io/projected/33445a2b-7fa8-4198-a60a-09caeb69b8ed-kube-api-access-xctsv\") pod \"nmstate-handler-mpfnk\" (UID: \"33445a2b-7fa8-4198-a60a-09caeb69b8ed\") " pod="openshift-nmstate/nmstate-handler-mpfnk" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.336484 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdtv6\" (UniqueName: \"kubernetes.io/projected/0c63c4bc-5c1a-4af0-b255-eb418d8a02cd-kube-api-access-fdtv6\") pod \"nmstate-metrics-9b8c8685d-lnfrw\" (UID: \"0c63c4bc-5c1a-4af0-b255-eb418d8a02cd\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-lnfrw" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.354954 4632 util.go:30] "No sandbox for pod can be found. 
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.395647 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1ca5cae6-5549-492a-a257-745bb41d3574-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-kzrvn\" (UID: \"1ca5cae6-5549-492a-a257-745bb41d3574\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.395758 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ca5cae6-5549-492a-a257-745bb41d3574-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-kzrvn\" (UID: \"1ca5cae6-5549-492a-a257-745bb41d3574\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.395811 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54nqs\" (UniqueName: \"kubernetes.io/projected/1ca5cae6-5549-492a-a257-745bb41d3574-kube-api-access-54nqs\") pod \"nmstate-console-plugin-86f58fcf4-kzrvn\" (UID: \"1ca5cae6-5549-492a-a257-745bb41d3574\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn"
Mar 13 10:18:53 crc kubenswrapper[4632]: E0313 10:18:53.396839 4632 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Mar 13 10:18:53 crc kubenswrapper[4632]: E0313 10:18:53.397004 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ca5cae6-5549-492a-a257-745bb41d3574-plugin-serving-cert podName:1ca5cae6-5549-492a-a257-745bb41d3574 nodeName:}" failed. No retries permitted until 2026-03-13 10:18:53.896980878 +0000 UTC m=+907.919511001 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/1ca5cae6-5549-492a-a257-745bb41d3574-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-kzrvn" (UID: "1ca5cae6-5549-492a-a257-745bb41d3574") : secret "plugin-serving-cert" not found
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.397659 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1ca5cae6-5549-492a-a257-745bb41d3574-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-kzrvn\" (UID: \"1ca5cae6-5549-492a-a257-745bb41d3574\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.416011 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54nqs\" (UniqueName: \"kubernetes.io/projected/1ca5cae6-5549-492a-a257-745bb41d3574-kube-api-access-54nqs\") pod \"nmstate-console-plugin-86f58fcf4-kzrvn\" (UID: \"1ca5cae6-5549-492a-a257-745bb41d3574\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.434617 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5678554f8b-n7dcv"]
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.435295 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5678554f8b-n7dcv"
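
The two E-level entries above show a volume mount failing because the secret did not exist yet: nestedpendingoperations records the failure and defers the retry by the printed durationBeforeRetry (500ms); the retried mount succeeds further down in the capture (10:18:53.916576), once the secret has been created. A minimal Python sketch for pulling the retry delay, volume, and pod out of such an error line; the regex, function name, and field names are illustrative assumptions, not kubelet output:

    import re

    # Extract the retry delay, volume, and pod from a kubelet
    # nestedpendingoperations mount-failure line like the one above.
    MOUNT_FAILURE = re.compile(
        r'durationBeforeRetry (?P<delay>\S+?)\)\. Error: '
        r'MountVolume\.SetUp failed for volume "(?P<volume>[^"]+)"'
        r'.* pod "(?P<pod>[^"]+)"'
    )

    def parse_mount_failure(line):
        m = MOUNT_FAILURE.search(line)
        return m.groupdict() if m else None

Run over the entry above this yields {'delay': '500ms', 'volume': 'plugin-serving-cert', 'pod': 'nmstate-console-plugin-86f58fcf4-kzrvn'}.
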
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.499652 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5678554f8b-n7dcv"]
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.598313 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xb8j\" (UniqueName: \"kubernetes.io/projected/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-kube-api-access-4xb8j\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.598382 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-oauth-serving-cert\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.598457 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-console-config\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.598500 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-service-ca\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.598602 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-console-serving-cert\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.598640 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-trusted-ca-bundle\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.598700 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-console-oauth-config\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.606149 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-lnfrw"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.705005 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gcngd"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.705449 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-console-serving-cert\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.705510 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-trusted-ca-bundle\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.705546 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-console-oauth-config\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.705607 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xb8j\" (UniqueName: \"kubernetes.io/projected/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-kube-api-access-4xb8j\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.705630 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-oauth-serving-cert\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.705672 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-console-config\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.705700 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-service-ca\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.706707 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-oauth-serving-cert\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.706879 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-console-config\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv"
\"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.706961 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-service-ca\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.708142 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-trusted-ca-bundle\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.709281 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-console-serving-cert\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.710904 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-console-oauth-config\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.732864 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xb8j\" (UniqueName: \"kubernetes.io/projected/a59bb7d3-da4a-4275-9dcb-b851215a9cd0-kube-api-access-4xb8j\") pod \"console-5678554f8b-n7dcv\" (UID: \"a59bb7d3-da4a-4275-9dcb-b851215a9cd0\") " pod="openshift-console/console-5678554f8b-n7dcv" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.757791 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5678554f8b-n7dcv" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.908689 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ca5cae6-5549-492a-a257-745bb41d3574-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-kzrvn\" (UID: \"1ca5cae6-5549-492a-a257-745bb41d3574\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.916576 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ca5cae6-5549-492a-a257-745bb41d3574-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-kzrvn\" (UID: \"1ca5cae6-5549-492a-a257-745bb41d3574\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn" Mar 13 10:18:53 crc kubenswrapper[4632]: I0313 10:18:53.937731 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mpfnk" event={"ID":"33445a2b-7fa8-4198-a60a-09caeb69b8ed","Type":"ContainerStarted","Data":"dd8932567ba0ab49f5ef6914d96e172cff5fd167d4c6e19ef08608b2d79885af"} Mar 13 10:18:54 crc kubenswrapper[4632]: I0313 10:18:54.117187 4632 util.go:30] "No sandbox for pod can be found. 
Mar 13 10:18:54 crc kubenswrapper[4632]: I0313 10:18:54.388268 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-lnfrw"]
Mar 13 10:18:54 crc kubenswrapper[4632]: W0313 10:18:54.397295 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c63c4bc_5c1a_4af0_b255_eb418d8a02cd.slice/crio-b562142a6db92fede8f830604066a7db2dbced3cab16efb3a78d56359c96acd3 WatchSource:0}: Error finding container b562142a6db92fede8f830604066a7db2dbced3cab16efb3a78d56359c96acd3: Status 404 returned error can't find the container with id b562142a6db92fede8f830604066a7db2dbced3cab16efb3a78d56359c96acd3
Mar 13 10:18:54 crc kubenswrapper[4632]: I0313 10:18:54.407016 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-gcngd"]
Mar 13 10:18:54 crc kubenswrapper[4632]: W0313 10:18:54.409372 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bf11778_d854_4c97_acd1_ed4822ee5f47.slice/crio-475f5e9b33d3a7fbbe9894ef41760606869253d3857fd3c6b9c8facde90b207d WatchSource:0}: Error finding container 475f5e9b33d3a7fbbe9894ef41760606869253d3857fd3c6b9c8facde90b207d: Status 404 returned error can't find the container with id 475f5e9b33d3a7fbbe9894ef41760606869253d3857fd3c6b9c8facde90b207d
Mar 13 10:18:54 crc kubenswrapper[4632]: I0313 10:18:54.415741 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5678554f8b-n7dcv"]
Mar 13 10:18:54 crc kubenswrapper[4632]: I0313 10:18:54.570563 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn"]
Mar 13 10:18:54 crc kubenswrapper[4632]: W0313 10:18:54.577873 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ca5cae6_5549_492a_a257_745bb41d3574.slice/crio-3fc6b0a0a8d409585f69921a0edff900d5c7b4f432cc49fc9f9da240899b4999 WatchSource:0}: Error finding container 3fc6b0a0a8d409585f69921a0edff900d5c7b4f432cc49fc9f9da240899b4999: Status 404 returned error can't find the container with id 3fc6b0a0a8d409585f69921a0edff900d5c7b4f432cc49fc9f9da240899b4999
Mar 13 10:18:54 crc kubenswrapper[4632]: I0313 10:18:54.946670 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5678554f8b-n7dcv" event={"ID":"a59bb7d3-da4a-4275-9dcb-b851215a9cd0","Type":"ContainerStarted","Data":"ea0d7b906ac5950b0053bf184a8dbed0c1554fc87711206c4674d1bbc3408c3d"}
Mar 13 10:18:54 crc kubenswrapper[4632]: I0313 10:18:54.947352 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5678554f8b-n7dcv" event={"ID":"a59bb7d3-da4a-4275-9dcb-b851215a9cd0","Type":"ContainerStarted","Data":"cb23bd32230b0013c61ac1a9dcc579e8f217e4053f1cf49608cf48c3bca30bb7"}
Mar 13 10:18:54 crc kubenswrapper[4632]: I0313 10:18:54.951067 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn" event={"ID":"1ca5cae6-5549-492a-a257-745bb41d3574","Type":"ContainerStarted","Data":"3fc6b0a0a8d409585f69921a0edff900d5c7b4f432cc49fc9f9da240899b4999"}
Mar 13 10:18:54 crc kubenswrapper[4632]: I0313 10:18:54.952978 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gcngd" event={"ID":"9bf11778-d854-4c97-acd1-ed4822ee5f47","Type":"ContainerStarted","Data":"475f5e9b33d3a7fbbe9894ef41760606869253d3857fd3c6b9c8facde90b207d"}
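
The three W-level manager.go:1169 entries above appear to come from the kubelet's embedded cAdvisor noticing a new crio-<id> cgroup before the runtime has finished registering the container, so the lookup returns 404. When the race is harmless, the same ID shows up shortly afterwards in a PLEG ContainerStarted event, as it does here for all three IDs. A small cross-check along those lines; "kubelet.log" is a placeholder path for a saved capture, and the heuristic is an assumption, not kubelet behavior guaranteed by the log:

    import re

    # Every container ID from a "Failed to process watch event" warning should
    # later appear in a PLEG ContainerStarted event if the warning was only a
    # startup race; report any that never do.
    HEX_ID = re.compile(r'\b[0-9a-f]{64}\b')
    warned, started = set(), set()
    with open("kubelet.log") as f:
        for line in f:
            if "Failed to process watch event" in line:
                warned.update(HEX_ID.findall(line))
            elif '"Type":"ContainerStarted"' in line:
                started.update(HEX_ID.findall(line))
    print("warnings never followed by a start:", warned - started)
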
event={"ID":"9bf11778-d854-4c97-acd1-ed4822ee5f47","Type":"ContainerStarted","Data":"475f5e9b33d3a7fbbe9894ef41760606869253d3857fd3c6b9c8facde90b207d"} Mar 13 10:18:54 crc kubenswrapper[4632]: I0313 10:18:54.955121 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-lnfrw" event={"ID":"0c63c4bc-5c1a-4af0-b255-eb418d8a02cd","Type":"ContainerStarted","Data":"b562142a6db92fede8f830604066a7db2dbced3cab16efb3a78d56359c96acd3"} Mar 13 10:18:54 crc kubenswrapper[4632]: I0313 10:18:54.970413 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5678554f8b-n7dcv" podStartSLOduration=1.9703955419999999 podStartE2EDuration="1.970395542s" podCreationTimestamp="2026-03-13 10:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:18:54.969461708 +0000 UTC m=+908.991991841" watchObservedRunningTime="2026-03-13 10:18:54.970395542 +0000 UTC m=+908.992925675" Mar 13 10:18:56 crc kubenswrapper[4632]: I0313 10:18:56.137573 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bww79" Mar 13 10:18:56 crc kubenswrapper[4632]: I0313 10:18:56.213876 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bww79" Mar 13 10:18:56 crc kubenswrapper[4632]: I0313 10:18:56.373220 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bww79"] Mar 13 10:18:56 crc kubenswrapper[4632]: I0313 10:18:56.958801 4632 scope.go:117] "RemoveContainer" containerID="3025e6a57984dbcc7f1272476cb4a6a1339dea799f52af43239e5a72f7479138" Mar 13 10:18:57 crc kubenswrapper[4632]: I0313 10:18:57.976868 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bww79" podUID="adab5e58-1b8e-4170-b244-d45be51beccb" containerName="registry-server" containerID="cri-o://7470f34a42aacbea955a79fad4dd4ed78868cab12e40fc4e1449bd061f3deb93" gracePeriod=2 Mar 13 10:18:58 crc kubenswrapper[4632]: I0313 10:18:58.986913 4632 generic.go:334] "Generic (PLEG): container finished" podID="adab5e58-1b8e-4170-b244-d45be51beccb" containerID="7470f34a42aacbea955a79fad4dd4ed78868cab12e40fc4e1449bd061f3deb93" exitCode=0 Mar 13 10:18:58 crc kubenswrapper[4632]: I0313 10:18:58.987324 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bww79" event={"ID":"adab5e58-1b8e-4170-b244-d45be51beccb","Type":"ContainerDied","Data":"7470f34a42aacbea955a79fad4dd4ed78868cab12e40fc4e1449bd061f3deb93"} Mar 13 10:18:58 crc kubenswrapper[4632]: I0313 10:18:58.991026 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gcngd" event={"ID":"9bf11778-d854-4c97-acd1-ed4822ee5f47","Type":"ContainerStarted","Data":"5716a0e9287e6d22b7af01f41bdef1fc843fe9af997de193cb88b8f7571d5088"} Mar 13 10:18:58 crc kubenswrapper[4632]: I0313 10:18:58.991774 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gcngd" Mar 13 10:18:58 crc kubenswrapper[4632]: I0313 10:18:58.994878 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-lnfrw" 
event={"ID":"0c63c4bc-5c1a-4af0-b255-eb418d8a02cd","Type":"ContainerStarted","Data":"7c1f2f888382cc87649c5b407623d5a89f5e995564511ff3d0581b77b107eaaf"} Mar 13 10:18:58 crc kubenswrapper[4632]: I0313 10:18:58.996264 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mpfnk" event={"ID":"33445a2b-7fa8-4198-a60a-09caeb69b8ed","Type":"ContainerStarted","Data":"01dbb2c512e7a384c4a413468acce6565de99a22432dbec235a8843f26e6f1a4"} Mar 13 10:18:58 crc kubenswrapper[4632]: I0313 10:18:58.996398 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-mpfnk" Mar 13 10:18:59 crc kubenswrapper[4632]: I0313 10:18:59.021306 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gcngd" podStartSLOduration=3.201810704 podStartE2EDuration="7.021287514s" podCreationTimestamp="2026-03-13 10:18:52 +0000 UTC" firstStartedPulling="2026-03-13 10:18:54.411408967 +0000 UTC m=+908.433939100" lastFinishedPulling="2026-03-13 10:18:58.230885767 +0000 UTC m=+912.253415910" observedRunningTime="2026-03-13 10:18:59.009847599 +0000 UTC m=+913.032377752" watchObservedRunningTime="2026-03-13 10:18:59.021287514 +0000 UTC m=+913.043817647" Mar 13 10:18:59 crc kubenswrapper[4632]: I0313 10:18:59.209381 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bww79" Mar 13 10:18:59 crc kubenswrapper[4632]: I0313 10:18:59.232933 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-mpfnk" podStartSLOduration=2.3914720210000002 podStartE2EDuration="7.23291618s" podCreationTimestamp="2026-03-13 10:18:52 +0000 UTC" firstStartedPulling="2026-03-13 10:18:53.393358116 +0000 UTC m=+907.415888249" lastFinishedPulling="2026-03-13 10:18:58.234802275 +0000 UTC m=+912.257332408" observedRunningTime="2026-03-13 10:18:59.030822891 +0000 UTC m=+913.053353044" watchObservedRunningTime="2026-03-13 10:18:59.23291618 +0000 UTC m=+913.255446313" Mar 13 10:18:59 crc kubenswrapper[4632]: I0313 10:18:59.234847 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adab5e58-1b8e-4170-b244-d45be51beccb-catalog-content\") pod \"adab5e58-1b8e-4170-b244-d45be51beccb\" (UID: \"adab5e58-1b8e-4170-b244-d45be51beccb\") " Mar 13 10:18:59 crc kubenswrapper[4632]: I0313 10:18:59.234992 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqhvx\" (UniqueName: \"kubernetes.io/projected/adab5e58-1b8e-4170-b244-d45be51beccb-kube-api-access-rqhvx\") pod \"adab5e58-1b8e-4170-b244-d45be51beccb\" (UID: \"adab5e58-1b8e-4170-b244-d45be51beccb\") " Mar 13 10:18:59 crc kubenswrapper[4632]: I0313 10:18:59.235043 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adab5e58-1b8e-4170-b244-d45be51beccb-utilities\") pod \"adab5e58-1b8e-4170-b244-d45be51beccb\" (UID: \"adab5e58-1b8e-4170-b244-d45be51beccb\") " Mar 13 10:18:59 crc kubenswrapper[4632]: I0313 10:18:59.236328 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adab5e58-1b8e-4170-b244-d45be51beccb-utilities" (OuterVolumeSpecName: "utilities") pod "adab5e58-1b8e-4170-b244-d45be51beccb" (UID: "adab5e58-1b8e-4170-b244-d45be51beccb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:18:59 crc kubenswrapper[4632]: I0313 10:18:59.260116 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adab5e58-1b8e-4170-b244-d45be51beccb-kube-api-access-rqhvx" (OuterVolumeSpecName: "kube-api-access-rqhvx") pod "adab5e58-1b8e-4170-b244-d45be51beccb" (UID: "adab5e58-1b8e-4170-b244-d45be51beccb"). InnerVolumeSpecName "kube-api-access-rqhvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:18:59 crc kubenswrapper[4632]: I0313 10:18:59.336514 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqhvx\" (UniqueName: \"kubernetes.io/projected/adab5e58-1b8e-4170-b244-d45be51beccb-kube-api-access-rqhvx\") on node \"crc\" DevicePath \"\"" Mar 13 10:18:59 crc kubenswrapper[4632]: I0313 10:18:59.336571 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adab5e58-1b8e-4170-b244-d45be51beccb-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:18:59 crc kubenswrapper[4632]: I0313 10:18:59.431816 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adab5e58-1b8e-4170-b244-d45be51beccb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adab5e58-1b8e-4170-b244-d45be51beccb" (UID: "adab5e58-1b8e-4170-b244-d45be51beccb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:18:59 crc kubenswrapper[4632]: I0313 10:18:59.438090 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adab5e58-1b8e-4170-b244-d45be51beccb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:19:00 crc kubenswrapper[4632]: I0313 10:19:00.010057 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn" event={"ID":"1ca5cae6-5549-492a-a257-745bb41d3574","Type":"ContainerStarted","Data":"c03c482f8a1d9b16c2959e5bf7664ff287487bc5fffc3e179aa213dca54a42c4"} Mar 13 10:19:00 crc kubenswrapper[4632]: I0313 10:19:00.012315 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bww79" event={"ID":"adab5e58-1b8e-4170-b244-d45be51beccb","Type":"ContainerDied","Data":"0479633a9d80d98716b8676d2d46358f35e2abf71d04127168560bd5df09cbf0"} Mar 13 10:19:00 crc kubenswrapper[4632]: I0313 10:19:00.012378 4632 scope.go:117] "RemoveContainer" containerID="7470f34a42aacbea955a79fad4dd4ed78868cab12e40fc4e1449bd061f3deb93" Mar 13 10:19:00 crc kubenswrapper[4632]: I0313 10:19:00.012459 4632 util.go:48] "No ready sandbox for pod can be found. 
Mar 13 10:19:00 crc kubenswrapper[4632]: I0313 10:19:00.034429 4632 scope.go:117] "RemoveContainer" containerID="69a89f03ba249c5eb9ce15b60d6967b375feee74919b0e43506625501d0e271b"
Mar 13 10:19:00 crc kubenswrapper[4632]: I0313 10:19:00.037672 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-kzrvn" podStartSLOduration=2.293789154 podStartE2EDuration="7.037652475s" podCreationTimestamp="2026-03-13 10:18:53 +0000 UTC" firstStartedPulling="2026-03-13 10:18:54.580146401 +0000 UTC m=+908.602676534" lastFinishedPulling="2026-03-13 10:18:59.324009722 +0000 UTC m=+913.346539855" observedRunningTime="2026-03-13 10:19:00.03420482 +0000 UTC m=+914.056734973" watchObservedRunningTime="2026-03-13 10:19:00.037652475 +0000 UTC m=+914.060182628"
Mar 13 10:19:00 crc kubenswrapper[4632]: I0313 10:19:00.063511 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bww79"]
Mar 13 10:19:00 crc kubenswrapper[4632]: I0313 10:19:00.067236 4632 scope.go:117] "RemoveContainer" containerID="61d923bcef83757034506bbf7a8f076d3bc9f8e9e2edb45de9372bf6e235a420"
Mar 13 10:19:00 crc kubenswrapper[4632]: I0313 10:19:00.070375 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bww79"]
Mar 13 10:19:02 crc kubenswrapper[4632]: I0313 10:19:02.025784 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-lnfrw" event={"ID":"0c63c4bc-5c1a-4af0-b255-eb418d8a02cd","Type":"ContainerStarted","Data":"4c7d634e9b1bceea1fe3fad2c30e40060141b191fcf67108f5820b1526fa18ac"}
Mar 13 10:19:02 crc kubenswrapper[4632]: I0313 10:19:02.042954 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-lnfrw" podStartSLOduration=3.091006074 podStartE2EDuration="10.042923383s" podCreationTimestamp="2026-03-13 10:18:52 +0000 UTC" firstStartedPulling="2026-03-13 10:18:54.399216806 +0000 UTC m=+908.421746939" lastFinishedPulling="2026-03-13 10:19:01.351134115 +0000 UTC m=+915.373664248" observedRunningTime="2026-03-13 10:19:02.040179455 +0000 UTC m=+916.062709588" watchObservedRunningTime="2026-03-13 10:19:02.042923383 +0000 UTC m=+916.065453516"
Mar 13 10:19:02 crc kubenswrapper[4632]: I0313 10:19:02.054473 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adab5e58-1b8e-4170-b244-d45be51beccb" path="/var/lib/kubelet/pods/adab5e58-1b8e-4170-b244-d45be51beccb/volumes"
Mar 13 10:19:03 crc kubenswrapper[4632]: I0313 10:19:03.376172 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-mpfnk"
Mar 13 10:19:03 crc kubenswrapper[4632]: I0313 10:19:03.758344 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:19:03 crc kubenswrapper[4632]: I0313 10:19:03.758404 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:19:03 crc kubenswrapper[4632]: I0313 10:19:03.763382 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:19:04 crc kubenswrapper[4632]: I0313 10:19:04.051445 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5678554f8b-n7dcv"
Mar 13 10:19:04 crc kubenswrapper[4632]: I0313 10:19:04.111840 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zn7mn"]
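
From this point the capture interleaves the teardown of redhat-operators-bww79 and console-f9d7485db-zn7mn with the startup of a marketplace bundle-extract pod and, later, the metallb operator. When reading interleaved kubelet output like this, grouping lines by the structured pod= field makes each lifecycle readable in isolation. A minimal sketch; "kubelet.log" is a placeholder path for a saved capture:

    import re
    from collections import defaultdict

    # Group kubelet journal lines by their pod="namespace/name" field so one
    # pod's events can be read in order.
    POD = re.compile(r'pod="([^"]+)"')
    by_pod = defaultdict(list)
    with open("kubelet.log") as f:
        for line in f:
            m = POD.search(line)
            if m:
                by_pod[m.group(1)].append(line.rstrip())

    for pod, lines in sorted(by_pod.items()):
        print(f"{pod}: {len(lines)} entries")
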
Mar 13 10:19:13 crc kubenswrapper[4632]: I0313 10:19:13.711557 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gcngd"
Mar 13 10:19:26 crc kubenswrapper[4632]: I0313 10:19:26.989804 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj"]
Mar 13 10:19:26 crc kubenswrapper[4632]: E0313 10:19:26.990682 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adab5e58-1b8e-4170-b244-d45be51beccb" containerName="registry-server"
Mar 13 10:19:26 crc kubenswrapper[4632]: I0313 10:19:26.990702 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="adab5e58-1b8e-4170-b244-d45be51beccb" containerName="registry-server"
Mar 13 10:19:26 crc kubenswrapper[4632]: E0313 10:19:26.990719 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adab5e58-1b8e-4170-b244-d45be51beccb" containerName="extract-content"
Mar 13 10:19:26 crc kubenswrapper[4632]: I0313 10:19:26.990727 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="adab5e58-1b8e-4170-b244-d45be51beccb" containerName="extract-content"
Mar 13 10:19:26 crc kubenswrapper[4632]: E0313 10:19:26.990739 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adab5e58-1b8e-4170-b244-d45be51beccb" containerName="extract-utilities"
Mar 13 10:19:26 crc kubenswrapper[4632]: I0313 10:19:26.990747 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="adab5e58-1b8e-4170-b244-d45be51beccb" containerName="extract-utilities"
Mar 13 10:19:26 crc kubenswrapper[4632]: I0313 10:19:26.990867 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="adab5e58-1b8e-4170-b244-d45be51beccb" containerName="registry-server"
Mar 13 10:19:26 crc kubenswrapper[4632]: I0313 10:19:26.991823 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj"
Mar 13 10:19:26 crc kubenswrapper[4632]: I0313 10:19:26.993654 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Mar 13 10:19:27 crc kubenswrapper[4632]: I0313 10:19:27.002427 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj"]
Mar 13 10:19:27 crc kubenswrapper[4632]: I0313 10:19:27.075362 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh6ml\" (UniqueName: \"kubernetes.io/projected/8c1e4d78-3f38-48b5-b157-a1a076f31b76-kube-api-access-hh6ml\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj\" (UID: \"8c1e4d78-3f38-48b5-b157-a1a076f31b76\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj"
Mar 13 10:19:27 crc kubenswrapper[4632]: I0313 10:19:27.075567 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c1e4d78-3f38-48b5-b157-a1a076f31b76-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj\" (UID: \"8c1e4d78-3f38-48b5-b157-a1a076f31b76\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj"
Mar 13 10:19:27 crc kubenswrapper[4632]: I0313 10:19:27.075597 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c1e4d78-3f38-48b5-b157-a1a076f31b76-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj\" (UID: \"8c1e4d78-3f38-48b5-b157-a1a076f31b76\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj"
Mar 13 10:19:27 crc kubenswrapper[4632]: I0313 10:19:27.176552 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c1e4d78-3f38-48b5-b157-a1a076f31b76-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj\" (UID: \"8c1e4d78-3f38-48b5-b157-a1a076f31b76\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj"
Mar 13 10:19:27 crc kubenswrapper[4632]: I0313 10:19:27.176618 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c1e4d78-3f38-48b5-b157-a1a076f31b76-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj\" (UID: \"8c1e4d78-3f38-48b5-b157-a1a076f31b76\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj"
Mar 13 10:19:27 crc kubenswrapper[4632]: I0313 10:19:27.176676 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh6ml\" (UniqueName: \"kubernetes.io/projected/8c1e4d78-3f38-48b5-b157-a1a076f31b76-kube-api-access-hh6ml\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj\" (UID: \"8c1e4d78-3f38-48b5-b157-a1a076f31b76\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj"
Mar 13 10:19:27 crc kubenswrapper[4632]: I0313 10:19:27.177173 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c1e4d78-3f38-48b5-b157-a1a076f31b76-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj\" (UID: \"8c1e4d78-3f38-48b5-b157-a1a076f31b76\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj"
\"kubernetes.io/empty-dir/8c1e4d78-3f38-48b5-b157-a1a076f31b76-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj\" (UID: \"8c1e4d78-3f38-48b5-b157-a1a076f31b76\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj" Mar 13 10:19:27 crc kubenswrapper[4632]: I0313 10:19:27.177239 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c1e4d78-3f38-48b5-b157-a1a076f31b76-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj\" (UID: \"8c1e4d78-3f38-48b5-b157-a1a076f31b76\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj" Mar 13 10:19:27 crc kubenswrapper[4632]: I0313 10:19:27.210408 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh6ml\" (UniqueName: \"kubernetes.io/projected/8c1e4d78-3f38-48b5-b157-a1a076f31b76-kube-api-access-hh6ml\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj\" (UID: \"8c1e4d78-3f38-48b5-b157-a1a076f31b76\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj" Mar 13 10:19:27 crc kubenswrapper[4632]: I0313 10:19:27.306793 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj" Mar 13 10:19:27 crc kubenswrapper[4632]: I0313 10:19:27.499314 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj"] Mar 13 10:19:28 crc kubenswrapper[4632]: I0313 10:19:28.394362 4632 generic.go:334] "Generic (PLEG): container finished" podID="8c1e4d78-3f38-48b5-b157-a1a076f31b76" containerID="dd017346c528d3551d35eba6b6c7d7562e4c30d260c2251b9178a5bce35ecced" exitCode=0 Mar 13 10:19:28 crc kubenswrapper[4632]: I0313 10:19:28.394439 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj" event={"ID":"8c1e4d78-3f38-48b5-b157-a1a076f31b76","Type":"ContainerDied","Data":"dd017346c528d3551d35eba6b6c7d7562e4c30d260c2251b9178a5bce35ecced"} Mar 13 10:19:28 crc kubenswrapper[4632]: I0313 10:19:28.396305 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj" event={"ID":"8c1e4d78-3f38-48b5-b157-a1a076f31b76","Type":"ContainerStarted","Data":"0e375dc0f6df9d2f12b95e2145547fe9337a959c6c2f008f89398e35282d0f19"} Mar 13 10:19:28 crc kubenswrapper[4632]: I0313 10:19:28.395922 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.182702 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-zn7mn" podUID="f5a50074-5531-442f-a0e9-0578f15634c1" containerName="console" containerID="cri-o://662793b7c27b62a99fd064350b3cd52eb21f393bbf5603bbcbf03a65855922bf" gracePeriod=15 Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.407738 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zn7mn_f5a50074-5531-442f-a0e9-0578f15634c1/console/0.log" Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.408036 4632 generic.go:334] "Generic (PLEG): container finished" podID="f5a50074-5531-442f-a0e9-0578f15634c1" 
containerID="662793b7c27b62a99fd064350b3cd52eb21f393bbf5603bbcbf03a65855922bf" exitCode=2 Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.408078 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zn7mn" event={"ID":"f5a50074-5531-442f-a0e9-0578f15634c1","Type":"ContainerDied","Data":"662793b7c27b62a99fd064350b3cd52eb21f393bbf5603bbcbf03a65855922bf"} Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.540958 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zn7mn_f5a50074-5531-442f-a0e9-0578f15634c1/console/0.log" Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.541078 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zn7mn" Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.613557 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f5a50074-5531-442f-a0e9-0578f15634c1-console-oauth-config\") pod \"f5a50074-5531-442f-a0e9-0578f15634c1\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.613634 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-trusted-ca-bundle\") pod \"f5a50074-5531-442f-a0e9-0578f15634c1\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.613683 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5a50074-5531-442f-a0e9-0578f15634c1-console-serving-cert\") pod \"f5a50074-5531-442f-a0e9-0578f15634c1\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.613769 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-console-config\") pod \"f5a50074-5531-442f-a0e9-0578f15634c1\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.613797 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-service-ca\") pod \"f5a50074-5531-442f-a0e9-0578f15634c1\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.613847 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-oauth-serving-cert\") pod \"f5a50074-5531-442f-a0e9-0578f15634c1\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.613871 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpqvj\" (UniqueName: \"kubernetes.io/projected/f5a50074-5531-442f-a0e9-0578f15634c1-kube-api-access-gpqvj\") pod \"f5a50074-5531-442f-a0e9-0578f15634c1\" (UID: \"f5a50074-5531-442f-a0e9-0578f15634c1\") " Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.615328 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-console-config" 
(OuterVolumeSpecName: "console-config") pod "f5a50074-5531-442f-a0e9-0578f15634c1" (UID: "f5a50074-5531-442f-a0e9-0578f15634c1"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.615370 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f5a50074-5531-442f-a0e9-0578f15634c1" (UID: "f5a50074-5531-442f-a0e9-0578f15634c1"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.615427 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-service-ca" (OuterVolumeSpecName: "service-ca") pod "f5a50074-5531-442f-a0e9-0578f15634c1" (UID: "f5a50074-5531-442f-a0e9-0578f15634c1"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.617521 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f5a50074-5531-442f-a0e9-0578f15634c1" (UID: "f5a50074-5531-442f-a0e9-0578f15634c1"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.626991 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5a50074-5531-442f-a0e9-0578f15634c1-kube-api-access-gpqvj" (OuterVolumeSpecName: "kube-api-access-gpqvj") pod "f5a50074-5531-442f-a0e9-0578f15634c1" (UID: "f5a50074-5531-442f-a0e9-0578f15634c1"). InnerVolumeSpecName "kube-api-access-gpqvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.627304 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5a50074-5531-442f-a0e9-0578f15634c1-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f5a50074-5531-442f-a0e9-0578f15634c1" (UID: "f5a50074-5531-442f-a0e9-0578f15634c1"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.651480 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5a50074-5531-442f-a0e9-0578f15634c1-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f5a50074-5531-442f-a0e9-0578f15634c1" (UID: "f5a50074-5531-442f-a0e9-0578f15634c1"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.715634 4632 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-console-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.715668 4632 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-service-ca\") on node \"crc\" DevicePath \"\"" Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.715704 4632 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.715717 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpqvj\" (UniqueName: \"kubernetes.io/projected/f5a50074-5531-442f-a0e9-0578f15634c1-kube-api-access-gpqvj\") on node \"crc\" DevicePath \"\"" Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.715731 4632 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f5a50074-5531-442f-a0e9-0578f15634c1-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.715744 4632 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5a50074-5531-442f-a0e9-0578f15634c1-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:19:29 crc kubenswrapper[4632]: I0313 10:19:29.715779 4632 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5a50074-5531-442f-a0e9-0578f15634c1-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 10:19:30 crc kubenswrapper[4632]: I0313 10:19:30.416242 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zn7mn_f5a50074-5531-442f-a0e9-0578f15634c1/console/0.log" Mar 13 10:19:30 crc kubenswrapper[4632]: I0313 10:19:30.417160 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zn7mn" event={"ID":"f5a50074-5531-442f-a0e9-0578f15634c1","Type":"ContainerDied","Data":"c0f56571b6b9472de716bb190b1d68fe783e6f7b131b06ae9b0c01071f1d985f"} Mar 13 10:19:30 crc kubenswrapper[4632]: I0313 10:19:30.417200 4632 util.go:48] "No ready sandbox for pod can be found. 
Mar 13 10:19:30 crc kubenswrapper[4632]: I0313 10:19:30.417207 4632 scope.go:117] "RemoveContainer" containerID="662793b7c27b62a99fd064350b3cd52eb21f393bbf5603bbcbf03a65855922bf"
Mar 13 10:19:30 crc kubenswrapper[4632]: I0313 10:19:30.421174 4632 generic.go:334] "Generic (PLEG): container finished" podID="8c1e4d78-3f38-48b5-b157-a1a076f31b76" containerID="f6eab6c1270c708d5635fc346dbf0abff23385e6fdb0b88031343e1eeb347e7e" exitCode=0
Mar 13 10:19:30 crc kubenswrapper[4632]: I0313 10:19:30.421226 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj" event={"ID":"8c1e4d78-3f38-48b5-b157-a1a076f31b76","Type":"ContainerDied","Data":"f6eab6c1270c708d5635fc346dbf0abff23385e6fdb0b88031343e1eeb347e7e"}
Mar 13 10:19:30 crc kubenswrapper[4632]: I0313 10:19:30.438578 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zn7mn"]
Mar 13 10:19:30 crc kubenswrapper[4632]: I0313 10:19:30.443318 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-zn7mn"]
Mar 13 10:19:31 crc kubenswrapper[4632]: I0313 10:19:31.428475 4632 generic.go:334] "Generic (PLEG): container finished" podID="8c1e4d78-3f38-48b5-b157-a1a076f31b76" containerID="35bf12e34ddb56d3a7959a6cf56252b0e8e3b3a801abde467165cc190ac8dec4" exitCode=0
Mar 13 10:19:31 crc kubenswrapper[4632]: I0313 10:19:31.428558 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj" event={"ID":"8c1e4d78-3f38-48b5-b157-a1a076f31b76","Type":"ContainerDied","Data":"35bf12e34ddb56d3a7959a6cf56252b0e8e3b3a801abde467165cc190ac8dec4"}
Mar 13 10:19:32 crc kubenswrapper[4632]: I0313 10:19:32.060359 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5a50074-5531-442f-a0e9-0578f15634c1" path="/var/lib/kubelet/pods/f5a50074-5531-442f-a0e9-0578f15634c1/volumes"
Mar 13 10:19:32 crc kubenswrapper[4632]: I0313 10:19:32.630646 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj"
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj" Mar 13 10:19:32 crc kubenswrapper[4632]: I0313 10:19:32.657894 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c1e4d78-3f38-48b5-b157-a1a076f31b76-util\") pod \"8c1e4d78-3f38-48b5-b157-a1a076f31b76\" (UID: \"8c1e4d78-3f38-48b5-b157-a1a076f31b76\") " Mar 13 10:19:32 crc kubenswrapper[4632]: I0313 10:19:32.658042 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh6ml\" (UniqueName: \"kubernetes.io/projected/8c1e4d78-3f38-48b5-b157-a1a076f31b76-kube-api-access-hh6ml\") pod \"8c1e4d78-3f38-48b5-b157-a1a076f31b76\" (UID: \"8c1e4d78-3f38-48b5-b157-a1a076f31b76\") " Mar 13 10:19:32 crc kubenswrapper[4632]: I0313 10:19:32.658077 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c1e4d78-3f38-48b5-b157-a1a076f31b76-bundle\") pod \"8c1e4d78-3f38-48b5-b157-a1a076f31b76\" (UID: \"8c1e4d78-3f38-48b5-b157-a1a076f31b76\") " Mar 13 10:19:32 crc kubenswrapper[4632]: I0313 10:19:32.659147 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c1e4d78-3f38-48b5-b157-a1a076f31b76-bundle" (OuterVolumeSpecName: "bundle") pod "8c1e4d78-3f38-48b5-b157-a1a076f31b76" (UID: "8c1e4d78-3f38-48b5-b157-a1a076f31b76"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:19:32 crc kubenswrapper[4632]: I0313 10:19:32.664977 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1e4d78-3f38-48b5-b157-a1a076f31b76-kube-api-access-hh6ml" (OuterVolumeSpecName: "kube-api-access-hh6ml") pod "8c1e4d78-3f38-48b5-b157-a1a076f31b76" (UID: "8c1e4d78-3f38-48b5-b157-a1a076f31b76"). InnerVolumeSpecName "kube-api-access-hh6ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:19:32 crc kubenswrapper[4632]: I0313 10:19:32.678428 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c1e4d78-3f38-48b5-b157-a1a076f31b76-util" (OuterVolumeSpecName: "util") pod "8c1e4d78-3f38-48b5-b157-a1a076f31b76" (UID: "8c1e4d78-3f38-48b5-b157-a1a076f31b76"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:19:32 crc kubenswrapper[4632]: I0313 10:19:32.759312 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh6ml\" (UniqueName: \"kubernetes.io/projected/8c1e4d78-3f38-48b5-b157-a1a076f31b76-kube-api-access-hh6ml\") on node \"crc\" DevicePath \"\"" Mar 13 10:19:32 crc kubenswrapper[4632]: I0313 10:19:32.759360 4632 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c1e4d78-3f38-48b5-b157-a1a076f31b76-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:19:32 crc kubenswrapper[4632]: I0313 10:19:32.759369 4632 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c1e4d78-3f38-48b5-b157-a1a076f31b76-util\") on node \"crc\" DevicePath \"\"" Mar 13 10:19:33 crc kubenswrapper[4632]: I0313 10:19:33.443154 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj" event={"ID":"8c1e4d78-3f38-48b5-b157-a1a076f31b76","Type":"ContainerDied","Data":"0e375dc0f6df9d2f12b95e2145547fe9337a959c6c2f008f89398e35282d0f19"} Mar 13 10:19:33 crc kubenswrapper[4632]: I0313 10:19:33.443206 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e375dc0f6df9d2f12b95e2145547fe9337a959c6c2f008f89398e35282d0f19" Mar 13 10:19:33 crc kubenswrapper[4632]: I0313 10:19:33.443208 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.802435 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq"] Mar 13 10:19:42 crc kubenswrapper[4632]: E0313 10:19:42.803001 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1e4d78-3f38-48b5-b157-a1a076f31b76" containerName="extract" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.803014 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1e4d78-3f38-48b5-b157-a1a076f31b76" containerName="extract" Mar 13 10:19:42 crc kubenswrapper[4632]: E0313 10:19:42.803025 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a50074-5531-442f-a0e9-0578f15634c1" containerName="console" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.803031 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a50074-5531-442f-a0e9-0578f15634c1" containerName="console" Mar 13 10:19:42 crc kubenswrapper[4632]: E0313 10:19:42.803047 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1e4d78-3f38-48b5-b157-a1a076f31b76" containerName="util" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.803053 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1e4d78-3f38-48b5-b157-a1a076f31b76" containerName="util" Mar 13 10:19:42 crc kubenswrapper[4632]: E0313 10:19:42.803065 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1e4d78-3f38-48b5-b157-a1a076f31b76" containerName="pull" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.803070 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1e4d78-3f38-48b5-b157-a1a076f31b76" containerName="pull" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.803158 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5a50074-5531-442f-a0e9-0578f15634c1" containerName="console" Mar 13 
10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.803168 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c1e4d78-3f38-48b5-b157-a1a076f31b76" containerName="extract" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.803525 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.806550 4632 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-8ndqr" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.807505 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.807614 4632 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.808302 4632 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.808620 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.819095 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq"] Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.881738 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e62d674f-5b2c-4788-85a3-95b51621dbef-webhook-cert\") pod \"metallb-operator-controller-manager-ffdcc767b-qxvlq\" (UID: \"e62d674f-5b2c-4788-85a3-95b51621dbef\") " pod="metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.881810 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72kgf\" (UniqueName: \"kubernetes.io/projected/e62d674f-5b2c-4788-85a3-95b51621dbef-kube-api-access-72kgf\") pod \"metallb-operator-controller-manager-ffdcc767b-qxvlq\" (UID: \"e62d674f-5b2c-4788-85a3-95b51621dbef\") " pod="metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.881882 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e62d674f-5b2c-4788-85a3-95b51621dbef-apiservice-cert\") pod \"metallb-operator-controller-manager-ffdcc767b-qxvlq\" (UID: \"e62d674f-5b2c-4788-85a3-95b51621dbef\") " pod="metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.983450 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e62d674f-5b2c-4788-85a3-95b51621dbef-apiservice-cert\") pod \"metallb-operator-controller-manager-ffdcc767b-qxvlq\" (UID: \"e62d674f-5b2c-4788-85a3-95b51621dbef\") " pod="metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.983731 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/e62d674f-5b2c-4788-85a3-95b51621dbef-webhook-cert\") pod \"metallb-operator-controller-manager-ffdcc767b-qxvlq\" (UID: \"e62d674f-5b2c-4788-85a3-95b51621dbef\") " pod="metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.983759 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72kgf\" (UniqueName: \"kubernetes.io/projected/e62d674f-5b2c-4788-85a3-95b51621dbef-kube-api-access-72kgf\") pod \"metallb-operator-controller-manager-ffdcc767b-qxvlq\" (UID: \"e62d674f-5b2c-4788-85a3-95b51621dbef\") " pod="metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.990907 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e62d674f-5b2c-4788-85a3-95b51621dbef-apiservice-cert\") pod \"metallb-operator-controller-manager-ffdcc767b-qxvlq\" (UID: \"e62d674f-5b2c-4788-85a3-95b51621dbef\") " pod="metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq" Mar 13 10:19:42 crc kubenswrapper[4632]: I0313 10:19:42.991786 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e62d674f-5b2c-4788-85a3-95b51621dbef-webhook-cert\") pod \"metallb-operator-controller-manager-ffdcc767b-qxvlq\" (UID: \"e62d674f-5b2c-4788-85a3-95b51621dbef\") " pod="metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.002124 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72kgf\" (UniqueName: \"kubernetes.io/projected/e62d674f-5b2c-4788-85a3-95b51621dbef-kube-api-access-72kgf\") pod \"metallb-operator-controller-manager-ffdcc767b-qxvlq\" (UID: \"e62d674f-5b2c-4788-85a3-95b51621dbef\") " pod="metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.065297 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l"] Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.066133 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.068020 4632 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.068972 4632 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.069338 4632 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-w8ltf" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.084348 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/712b2002-4fce-4983-926a-99a4b2dc7a8c-webhook-cert\") pod \"metallb-operator-webhook-server-6c7bf5ddc5-v6t5l\" (UID: \"712b2002-4fce-4983-926a-99a4b2dc7a8c\") " pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.089231 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l"] Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.090558 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/712b2002-4fce-4983-926a-99a4b2dc7a8c-apiservice-cert\") pod \"metallb-operator-webhook-server-6c7bf5ddc5-v6t5l\" (UID: \"712b2002-4fce-4983-926a-99a4b2dc7a8c\") " pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.090770 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zcw9\" (UniqueName: \"kubernetes.io/projected/712b2002-4fce-4983-926a-99a4b2dc7a8c-kube-api-access-7zcw9\") pod \"metallb-operator-webhook-server-6c7bf5ddc5-v6t5l\" (UID: \"712b2002-4fce-4983-926a-99a4b2dc7a8c\") " pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.121227 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.191527 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/712b2002-4fce-4983-926a-99a4b2dc7a8c-apiservice-cert\") pod \"metallb-operator-webhook-server-6c7bf5ddc5-v6t5l\" (UID: \"712b2002-4fce-4983-926a-99a4b2dc7a8c\") " pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.192323 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zcw9\" (UniqueName: \"kubernetes.io/projected/712b2002-4fce-4983-926a-99a4b2dc7a8c-kube-api-access-7zcw9\") pod \"metallb-operator-webhook-server-6c7bf5ddc5-v6t5l\" (UID: \"712b2002-4fce-4983-926a-99a4b2dc7a8c\") " pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.194965 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/712b2002-4fce-4983-926a-99a4b2dc7a8c-webhook-cert\") pod \"metallb-operator-webhook-server-6c7bf5ddc5-v6t5l\" (UID: \"712b2002-4fce-4983-926a-99a4b2dc7a8c\") " pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.194830 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/712b2002-4fce-4983-926a-99a4b2dc7a8c-apiservice-cert\") pod \"metallb-operator-webhook-server-6c7bf5ddc5-v6t5l\" (UID: \"712b2002-4fce-4983-926a-99a4b2dc7a8c\") " pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.201505 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/712b2002-4fce-4983-926a-99a4b2dc7a8c-webhook-cert\") pod \"metallb-operator-webhook-server-6c7bf5ddc5-v6t5l\" (UID: \"712b2002-4fce-4983-926a-99a4b2dc7a8c\") " pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.243687 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zcw9\" (UniqueName: \"kubernetes.io/projected/712b2002-4fce-4983-926a-99a4b2dc7a8c-kube-api-access-7zcw9\") pod \"metallb-operator-webhook-server-6c7bf5ddc5-v6t5l\" (UID: \"712b2002-4fce-4983-926a-99a4b2dc7a8c\") " pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.382347 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.603721 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq"] Mar 13 10:19:43 crc kubenswrapper[4632]: W0313 10:19:43.625385 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode62d674f_5b2c_4788_85a3_95b51621dbef.slice/crio-36539b45fbaabd48077b1e3bf16bc5b42f8aeb664d9ffa7f5fd14bd9c877f6b3 WatchSource:0}: Error finding container 36539b45fbaabd48077b1e3bf16bc5b42f8aeb664d9ffa7f5fd14bd9c877f6b3: Status 404 returned error can't find the container with id 36539b45fbaabd48077b1e3bf16bc5b42f8aeb664d9ffa7f5fd14bd9c877f6b3 Mar 13 10:19:43 crc kubenswrapper[4632]: I0313 10:19:43.843798 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l"] Mar 13 10:19:43 crc kubenswrapper[4632]: W0313 10:19:43.863009 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod712b2002_4fce_4983_926a_99a4b2dc7a8c.slice/crio-a456a30659a7a3870cf65fd46e2de49b22835f180cfc7ed293faa6d6aedac332 WatchSource:0}: Error finding container a456a30659a7a3870cf65fd46e2de49b22835f180cfc7ed293faa6d6aedac332: Status 404 returned error can't find the container with id a456a30659a7a3870cf65fd46e2de49b22835f180cfc7ed293faa6d6aedac332 Mar 13 10:19:44 crc kubenswrapper[4632]: I0313 10:19:44.509773 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq" event={"ID":"e62d674f-5b2c-4788-85a3-95b51621dbef","Type":"ContainerStarted","Data":"36539b45fbaabd48077b1e3bf16bc5b42f8aeb664d9ffa7f5fd14bd9c877f6b3"} Mar 13 10:19:44 crc kubenswrapper[4632]: I0313 10:19:44.511217 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" event={"ID":"712b2002-4fce-4983-926a-99a4b2dc7a8c","Type":"ContainerStarted","Data":"a456a30659a7a3870cf65fd46e2de49b22835f180cfc7ed293faa6d6aedac332"} Mar 13 10:19:52 crc kubenswrapper[4632]: I0313 10:19:52.568025 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq" event={"ID":"e62d674f-5b2c-4788-85a3-95b51621dbef","Type":"ContainerStarted","Data":"d10aebbd8471a57b9c8fb772d9957bcb409f35eba90c3ff18f788ca610c86020"} Mar 13 10:19:52 crc kubenswrapper[4632]: I0313 10:19:52.568565 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq" Mar 13 10:19:52 crc kubenswrapper[4632]: I0313 10:19:52.569858 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" event={"ID":"712b2002-4fce-4983-926a-99a4b2dc7a8c","Type":"ContainerStarted","Data":"4f3954f3545fe16f459ae776b9c8dd134a4be9b3933eebfce6eab0b02f3d82e6"} Mar 13 10:19:52 crc kubenswrapper[4632]: I0313 10:19:52.570033 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" Mar 13 10:19:52 crc kubenswrapper[4632]: I0313 10:19:52.592978 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq" 
Mar 13 10:19:52 crc kubenswrapper[4632]: I0313 10:19:52.605861 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" podStartSLOduration=2.008069632 podStartE2EDuration="9.60584267s" podCreationTimestamp="2026-03-13 10:19:43 +0000 UTC" firstStartedPulling="2026-03-13 10:19:43.866537801 +0000 UTC m=+957.889067934" lastFinishedPulling="2026-03-13 10:19:51.464310829 +0000 UTC m=+965.486840972" observedRunningTime="2026-03-13 10:19:52.603739328 +0000 UTC m=+966.626269481" watchObservedRunningTime="2026-03-13 10:19:52.60584267 +0000 UTC m=+966.628372803"
Mar 13 10:20:00 crc kubenswrapper[4632]: I0313 10:20:00.140755 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556620-42vs6"]
Mar 13 10:20:00 crc kubenswrapper[4632]: I0313 10:20:00.142258 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556620-42vs6"
Mar 13 10:20:00 crc kubenswrapper[4632]: I0313 10:20:00.151915 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556620-42vs6"]
Mar 13 10:20:00 crc kubenswrapper[4632]: I0313 10:20:00.153099 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 10:20:00 crc kubenswrapper[4632]: I0313 10:20:00.153524 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 10:20:00 crc kubenswrapper[4632]: I0313 10:20:00.154103 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 10:20:00 crc kubenswrapper[4632]: I0313 10:20:00.232888 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6c8k\" (UniqueName: \"kubernetes.io/projected/28dbef1d-ca7f-4387-80af-8dffbfe92895-kube-api-access-w6c8k\") pod \"auto-csr-approver-29556620-42vs6\" (UID: \"28dbef1d-ca7f-4387-80af-8dffbfe92895\") " pod="openshift-infra/auto-csr-approver-29556620-42vs6"
Mar 13 10:20:00 crc kubenswrapper[4632]: I0313 10:20:00.334256 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6c8k\" (UniqueName: \"kubernetes.io/projected/28dbef1d-ca7f-4387-80af-8dffbfe92895-kube-api-access-w6c8k\") pod \"auto-csr-approver-29556620-42vs6\" (UID: \"28dbef1d-ca7f-4387-80af-8dffbfe92895\") " pod="openshift-infra/auto-csr-approver-29556620-42vs6"
Mar 13 10:20:00 crc kubenswrapper[4632]: I0313 10:20:00.363195 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6c8k\" (UniqueName: \"kubernetes.io/projected/28dbef1d-ca7f-4387-80af-8dffbfe92895-kube-api-access-w6c8k\") pod \"auto-csr-approver-29556620-42vs6\" (UID: \"28dbef1d-ca7f-4387-80af-8dffbfe92895\") " pod="openshift-infra/auto-csr-approver-29556620-42vs6"
Mar 13 10:20:00 crc kubenswrapper[4632]: I0313 10:20:00.464264 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556620-42vs6" Mar 13 10:20:00 crc kubenswrapper[4632]: I0313 10:20:00.696524 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556620-42vs6"] Mar 13 10:20:01 crc kubenswrapper[4632]: I0313 10:20:01.619932 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556620-42vs6" event={"ID":"28dbef1d-ca7f-4387-80af-8dffbfe92895","Type":"ContainerStarted","Data":"b8eaff64006265fcd0d329ca24eef7b4cfa19bbf794b80db42c9037947401d93"} Mar 13 10:20:02 crc kubenswrapper[4632]: I0313 10:20:02.628246 4632 generic.go:334] "Generic (PLEG): container finished" podID="28dbef1d-ca7f-4387-80af-8dffbfe92895" containerID="746bd1f1584c6b468985171d618d35f15871608c045fd5e9f4070c7ace66e505" exitCode=0 Mar 13 10:20:02 crc kubenswrapper[4632]: I0313 10:20:02.628316 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556620-42vs6" event={"ID":"28dbef1d-ca7f-4387-80af-8dffbfe92895","Type":"ContainerDied","Data":"746bd1f1584c6b468985171d618d35f15871608c045fd5e9f4070c7ace66e505"} Mar 13 10:20:03 crc kubenswrapper[4632]: I0313 10:20:03.390274 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" Mar 13 10:20:03 crc kubenswrapper[4632]: I0313 10:20:03.919605 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556620-42vs6" Mar 13 10:20:03 crc kubenswrapper[4632]: I0313 10:20:03.985844 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6c8k\" (UniqueName: \"kubernetes.io/projected/28dbef1d-ca7f-4387-80af-8dffbfe92895-kube-api-access-w6c8k\") pod \"28dbef1d-ca7f-4387-80af-8dffbfe92895\" (UID: \"28dbef1d-ca7f-4387-80af-8dffbfe92895\") " Mar 13 10:20:03 crc kubenswrapper[4632]: I0313 10:20:03.990798 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28dbef1d-ca7f-4387-80af-8dffbfe92895-kube-api-access-w6c8k" (OuterVolumeSpecName: "kube-api-access-w6c8k") pod "28dbef1d-ca7f-4387-80af-8dffbfe92895" (UID: "28dbef1d-ca7f-4387-80af-8dffbfe92895"). InnerVolumeSpecName "kube-api-access-w6c8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:20:04 crc kubenswrapper[4632]: I0313 10:20:04.087741 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6c8k\" (UniqueName: \"kubernetes.io/projected/28dbef1d-ca7f-4387-80af-8dffbfe92895-kube-api-access-w6c8k\") on node \"crc\" DevicePath \"\"" Mar 13 10:20:04 crc kubenswrapper[4632]: I0313 10:20:04.638788 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556620-42vs6" event={"ID":"28dbef1d-ca7f-4387-80af-8dffbfe92895","Type":"ContainerDied","Data":"b8eaff64006265fcd0d329ca24eef7b4cfa19bbf794b80db42c9037947401d93"} Mar 13 10:20:04 crc kubenswrapper[4632]: I0313 10:20:04.639217 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8eaff64006265fcd0d329ca24eef7b4cfa19bbf794b80db42c9037947401d93" Mar 13 10:20:04 crc kubenswrapper[4632]: I0313 10:20:04.639130 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556620-42vs6" Mar 13 10:20:04 crc kubenswrapper[4632]: I0313 10:20:04.971901 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556614-7pzwt"] Mar 13 10:20:04 crc kubenswrapper[4632]: I0313 10:20:04.975508 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556614-7pzwt"] Mar 13 10:20:06 crc kubenswrapper[4632]: I0313 10:20:06.060873 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f80bfe67-be24-45e3-9e57-b67389f8cc63" path="/var/lib/kubelet/pods/f80bfe67-be24-45e3-9e57-b67389f8cc63/volumes" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.125661 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-ffdcc767b-qxvlq" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.858664 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-lvlxj"] Mar 13 10:20:23 crc kubenswrapper[4632]: E0313 10:20:23.858925 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28dbef1d-ca7f-4387-80af-8dffbfe92895" containerName="oc" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.858978 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="28dbef1d-ca7f-4387-80af-8dffbfe92895" containerName="oc" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.859113 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="28dbef1d-ca7f-4387-80af-8dffbfe92895" containerName="oc" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.861120 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.864347 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8"] Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.865372 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.866763 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.866862 4632 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.867157 4632 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-cmb4l" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.869015 4632 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.887325 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8"] Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.953389 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-tztd9"] Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.954291 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-tztd9" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.957713 4632 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.957927 4632 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.958115 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.958266 4632 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-v97kj" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.966559 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92cl4\" (UniqueName: \"kubernetes.io/projected/b33bccd8-6f28-4ffe-9500-069a52aab5df-kube-api-access-92cl4\") pod \"frr-k8s-webhook-server-bcc4b6f68-9zbh8\" (UID: \"b33bccd8-6f28-4ffe-9500-069a52aab5df\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.966633 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/85b58bb0-63f5-4c85-8759-ce28d2c7db58-frr-conf\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.966666 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxgsd\" (UniqueName: \"kubernetes.io/projected/85b58bb0-63f5-4c85-8759-ce28d2c7db58-kube-api-access-lxgsd\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.966813 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/85b58bb0-63f5-4c85-8759-ce28d2c7db58-frr-startup\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.966892 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/85b58bb0-63f5-4c85-8759-ce28d2c7db58-metrics-certs\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.966976 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b33bccd8-6f28-4ffe-9500-069a52aab5df-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-9zbh8\" (UID: \"b33bccd8-6f28-4ffe-9500-069a52aab5df\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.967063 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/85b58bb0-63f5-4c85-8759-ce28d2c7db58-reloader\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.967087 
4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/85b58bb0-63f5-4c85-8759-ce28d2c7db58-frr-sockets\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.967133 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/85b58bb0-63f5-4c85-8759-ce28d2c7db58-metrics\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.987834 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-62bwr"] Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.988687 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-62bwr" Mar 13 10:20:23 crc kubenswrapper[4632]: I0313 10:20:23.991380 4632 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.002834 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-62bwr"] Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.068682 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxgsd\" (UniqueName: \"kubernetes.io/projected/85b58bb0-63f5-4c85-8759-ce28d2c7db58-kube-api-access-lxgsd\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.068739 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/277ddd7f-fd9c-4b27-9563-c904f1dffd40-metrics-certs\") pod \"controller-7bb4cc7c98-62bwr\" (UID: \"277ddd7f-fd9c-4b27-9563-c904f1dffd40\") " pod="metallb-system/controller-7bb4cc7c98-62bwr" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.068766 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/85b58bb0-63f5-4c85-8759-ce28d2c7db58-frr-startup\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.068793 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-metrics-certs\") pod \"speaker-tztd9\" (UID: \"8f51973a-596d-40dc-9b5b-b2c95a60ea0c\") " pod="metallb-system/speaker-tztd9" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.068815 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/85b58bb0-63f5-4c85-8759-ce28d2c7db58-metrics-certs\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.068842 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b33bccd8-6f28-4ffe-9500-069a52aab5df-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-9zbh8\" (UID: 
\"b33bccd8-6f28-4ffe-9500-069a52aab5df\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.068865 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-metallb-excludel2\") pod \"speaker-tztd9\" (UID: \"8f51973a-596d-40dc-9b5b-b2c95a60ea0c\") " pod="metallb-system/speaker-tztd9" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.068887 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffpbj\" (UniqueName: \"kubernetes.io/projected/277ddd7f-fd9c-4b27-9563-c904f1dffd40-kube-api-access-ffpbj\") pod \"controller-7bb4cc7c98-62bwr\" (UID: \"277ddd7f-fd9c-4b27-9563-c904f1dffd40\") " pod="metallb-system/controller-7bb4cc7c98-62bwr" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.068908 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/277ddd7f-fd9c-4b27-9563-c904f1dffd40-cert\") pod \"controller-7bb4cc7c98-62bwr\" (UID: \"277ddd7f-fd9c-4b27-9563-c904f1dffd40\") " pod="metallb-system/controller-7bb4cc7c98-62bwr" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.068923 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/85b58bb0-63f5-4c85-8759-ce28d2c7db58-reloader\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.068953 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/85b58bb0-63f5-4c85-8759-ce28d2c7db58-frr-sockets\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.068971 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-memberlist\") pod \"speaker-tztd9\" (UID: \"8f51973a-596d-40dc-9b5b-b2c95a60ea0c\") " pod="metallb-system/speaker-tztd9" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.068989 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/85b58bb0-63f5-4c85-8759-ce28d2c7db58-metrics\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.069019 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92cl4\" (UniqueName: \"kubernetes.io/projected/b33bccd8-6f28-4ffe-9500-069a52aab5df-kube-api-access-92cl4\") pod \"frr-k8s-webhook-server-bcc4b6f68-9zbh8\" (UID: \"b33bccd8-6f28-4ffe-9500-069a52aab5df\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.069039 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd9n6\" (UniqueName: \"kubernetes.io/projected/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-kube-api-access-qd9n6\") pod \"speaker-tztd9\" (UID: \"8f51973a-596d-40dc-9b5b-b2c95a60ea0c\") " 
pod="metallb-system/speaker-tztd9" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.069058 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/85b58bb0-63f5-4c85-8759-ce28d2c7db58-frr-conf\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.069620 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/85b58bb0-63f5-4c85-8759-ce28d2c7db58-frr-conf\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.069684 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/85b58bb0-63f5-4c85-8759-ce28d2c7db58-reloader\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.069893 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/85b58bb0-63f5-4c85-8759-ce28d2c7db58-frr-sockets\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.070188 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/85b58bb0-63f5-4c85-8759-ce28d2c7db58-metrics\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:24 crc kubenswrapper[4632]: E0313 10:20:24.070517 4632 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 13 10:20:24 crc kubenswrapper[4632]: E0313 10:20:24.070611 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b33bccd8-6f28-4ffe-9500-069a52aab5df-cert podName:b33bccd8-6f28-4ffe-9500-069a52aab5df nodeName:}" failed. No retries permitted until 2026-03-13 10:20:24.570585087 +0000 UTC m=+998.593115220 (durationBeforeRetry 500ms). 
Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.071203 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/85b58bb0-63f5-4c85-8759-ce28d2c7db58-frr-startup\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj"
Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.079793 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/85b58bb0-63f5-4c85-8759-ce28d2c7db58-metrics-certs\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj"
Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.087517 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92cl4\" (UniqueName: \"kubernetes.io/projected/b33bccd8-6f28-4ffe-9500-069a52aab5df-kube-api-access-92cl4\") pod \"frr-k8s-webhook-server-bcc4b6f68-9zbh8\" (UID: \"b33bccd8-6f28-4ffe-9500-069a52aab5df\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8"
Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.088245 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxgsd\" (UniqueName: \"kubernetes.io/projected/85b58bb0-63f5-4c85-8759-ce28d2c7db58-kube-api-access-lxgsd\") pod \"frr-k8s-lvlxj\" (UID: \"85b58bb0-63f5-4c85-8759-ce28d2c7db58\") " pod="metallb-system/frr-k8s-lvlxj"
Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.170635 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-metrics-certs\") pod \"speaker-tztd9\" (UID: \"8f51973a-596d-40dc-9b5b-b2c95a60ea0c\") " pod="metallb-system/speaker-tztd9"
Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.170752 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-metallb-excludel2\") pod \"speaker-tztd9\" (UID: \"8f51973a-596d-40dc-9b5b-b2c95a60ea0c\") " pod="metallb-system/speaker-tztd9"
Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.170786 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffpbj\" (UniqueName: \"kubernetes.io/projected/277ddd7f-fd9c-4b27-9563-c904f1dffd40-kube-api-access-ffpbj\") pod \"controller-7bb4cc7c98-62bwr\" (UID: \"277ddd7f-fd9c-4b27-9563-c904f1dffd40\") " pod="metallb-system/controller-7bb4cc7c98-62bwr"
Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.170813 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/277ddd7f-fd9c-4b27-9563-c904f1dffd40-cert\") pod \"controller-7bb4cc7c98-62bwr\" (UID: \"277ddd7f-fd9c-4b27-9563-c904f1dffd40\") " pod="metallb-system/controller-7bb4cc7c98-62bwr"
Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.170839 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-memberlist\") pod \"speaker-tztd9\" (UID: 
\"8f51973a-596d-40dc-9b5b-b2c95a60ea0c\") " pod="metallb-system/speaker-tztd9" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.170882 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd9n6\" (UniqueName: \"kubernetes.io/projected/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-kube-api-access-qd9n6\") pod \"speaker-tztd9\" (UID: \"8f51973a-596d-40dc-9b5b-b2c95a60ea0c\") " pod="metallb-system/speaker-tztd9" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.170919 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/277ddd7f-fd9c-4b27-9563-c904f1dffd40-metrics-certs\") pod \"controller-7bb4cc7c98-62bwr\" (UID: \"277ddd7f-fd9c-4b27-9563-c904f1dffd40\") " pod="metallb-system/controller-7bb4cc7c98-62bwr" Mar 13 10:20:24 crc kubenswrapper[4632]: E0313 10:20:24.171788 4632 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 13 10:20:24 crc kubenswrapper[4632]: E0313 10:20:24.171859 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-memberlist podName:8f51973a-596d-40dc-9b5b-b2c95a60ea0c nodeName:}" failed. No retries permitted until 2026-03-13 10:20:24.671838072 +0000 UTC m=+998.694368205 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-memberlist") pod "speaker-tztd9" (UID: "8f51973a-596d-40dc-9b5b-b2c95a60ea0c") : secret "metallb-memberlist" not found Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.172470 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-metallb-excludel2\") pod \"speaker-tztd9\" (UID: \"8f51973a-596d-40dc-9b5b-b2c95a60ea0c\") " pod="metallb-system/speaker-tztd9" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.178281 4632 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.178692 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/277ddd7f-fd9c-4b27-9563-c904f1dffd40-metrics-certs\") pod \"controller-7bb4cc7c98-62bwr\" (UID: \"277ddd7f-fd9c-4b27-9563-c904f1dffd40\") " pod="metallb-system/controller-7bb4cc7c98-62bwr" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.179376 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.186022 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/277ddd7f-fd9c-4b27-9563-c904f1dffd40-cert\") pod \"controller-7bb4cc7c98-62bwr\" (UID: \"277ddd7f-fd9c-4b27-9563-c904f1dffd40\") " pod="metallb-system/controller-7bb4cc7c98-62bwr" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.199190 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffpbj\" (UniqueName: \"kubernetes.io/projected/277ddd7f-fd9c-4b27-9563-c904f1dffd40-kube-api-access-ffpbj\") pod \"controller-7bb4cc7c98-62bwr\" (UID: \"277ddd7f-fd9c-4b27-9563-c904f1dffd40\") " pod="metallb-system/controller-7bb4cc7c98-62bwr" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.201769 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd9n6\" (UniqueName: \"kubernetes.io/projected/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-kube-api-access-qd9n6\") pod \"speaker-tztd9\" (UID: \"8f51973a-596d-40dc-9b5b-b2c95a60ea0c\") " pod="metallb-system/speaker-tztd9" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.206586 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-metrics-certs\") pod \"speaker-tztd9\" (UID: \"8f51973a-596d-40dc-9b5b-b2c95a60ea0c\") " pod="metallb-system/speaker-tztd9" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.312280 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-62bwr" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.576618 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b33bccd8-6f28-4ffe-9500-069a52aab5df-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-9zbh8\" (UID: \"b33bccd8-6f28-4ffe-9500-069a52aab5df\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.582272 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b33bccd8-6f28-4ffe-9500-069a52aab5df-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-9zbh8\" (UID: \"b33bccd8-6f28-4ffe-9500-069a52aab5df\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.677975 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-memberlist\") pod \"speaker-tztd9\" (UID: \"8f51973a-596d-40dc-9b5b-b2c95a60ea0c\") " pod="metallb-system/speaker-tztd9" Mar 13 10:20:24 crc kubenswrapper[4632]: E0313 10:20:24.678152 4632 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 13 10:20:24 crc kubenswrapper[4632]: E0313 10:20:24.678249 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-memberlist podName:8f51973a-596d-40dc-9b5b-b2c95a60ea0c nodeName:}" failed. No retries permitted until 2026-03-13 10:20:25.678228918 +0000 UTC m=+999.700759051 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-memberlist") pod "speaker-tztd9" (UID: "8f51973a-596d-40dc-9b5b-b2c95a60ea0c") : secret "metallb-memberlist" not found Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.746344 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvlxj" event={"ID":"85b58bb0-63f5-4c85-8759-ce28d2c7db58","Type":"ContainerStarted","Data":"78c4f38e92514e8e35bc3a4d59f7da89119cb64f0e178cf6c63c5d138a8a7177"} Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.760906 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-62bwr"] Mar 13 10:20:24 crc kubenswrapper[4632]: W0313 10:20:24.766808 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod277ddd7f_fd9c_4b27_9563_c904f1dffd40.slice/crio-6f91d639a73c89c4bef35530e3553cae46d82fd04f4d5159a503dd42c5f41663 WatchSource:0}: Error finding container 6f91d639a73c89c4bef35530e3553cae46d82fd04f4d5159a503dd42c5f41663: Status 404 returned error can't find the container with id 6f91d639a73c89c4bef35530e3553cae46d82fd04f4d5159a503dd42c5f41663 Mar 13 10:20:24 crc kubenswrapper[4632]: I0313 10:20:24.789522 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" Mar 13 10:20:25 crc kubenswrapper[4632]: I0313 10:20:25.084051 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8"] Mar 13 10:20:25 crc kubenswrapper[4632]: I0313 10:20:25.690783 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-memberlist\") pod \"speaker-tztd9\" (UID: \"8f51973a-596d-40dc-9b5b-b2c95a60ea0c\") " pod="metallb-system/speaker-tztd9" Mar 13 10:20:25 crc kubenswrapper[4632]: I0313 10:20:25.697624 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f51973a-596d-40dc-9b5b-b2c95a60ea0c-memberlist\") pod \"speaker-tztd9\" (UID: \"8f51973a-596d-40dc-9b5b-b2c95a60ea0c\") " pod="metallb-system/speaker-tztd9" Mar 13 10:20:25 crc kubenswrapper[4632]: I0313 10:20:25.754805 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-62bwr" event={"ID":"277ddd7f-fd9c-4b27-9563-c904f1dffd40","Type":"ContainerStarted","Data":"6d322cabee18aef00b71e74c7d9c0afec843e21a91f4426253de3249590b9941"} Mar 13 10:20:25 crc kubenswrapper[4632]: I0313 10:20:25.755097 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-62bwr" event={"ID":"277ddd7f-fd9c-4b27-9563-c904f1dffd40","Type":"ContainerStarted","Data":"b70c01aadbba4c08d00cd500d541a93419f2965f917c721c2a529c0f228795b5"} Mar 13 10:20:25 crc kubenswrapper[4632]: I0313 10:20:25.755198 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-62bwr" event={"ID":"277ddd7f-fd9c-4b27-9563-c904f1dffd40","Type":"ContainerStarted","Data":"6f91d639a73c89c4bef35530e3553cae46d82fd04f4d5159a503dd42c5f41663"} Mar 13 10:20:25 crc kubenswrapper[4632]: I0313 10:20:25.755303 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-62bwr" Mar 13 10:20:25 crc kubenswrapper[4632]: I0313 10:20:25.755578 4632 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" event={"ID":"b33bccd8-6f28-4ffe-9500-069a52aab5df","Type":"ContainerStarted","Data":"6ccfd3b438da53aa3a9cc24cd575fef5e9f4147fe4893d65dafa89b9bddf7863"} Mar 13 10:20:25 crc kubenswrapper[4632]: I0313 10:20:25.777081 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-tztd9" Mar 13 10:20:25 crc kubenswrapper[4632]: I0313 10:20:25.782708 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-62bwr" podStartSLOduration=2.782685855 podStartE2EDuration="2.782685855s" podCreationTimestamp="2026-03-13 10:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:20:25.773510307 +0000 UTC m=+999.796040450" watchObservedRunningTime="2026-03-13 10:20:25.782685855 +0000 UTC m=+999.805216008" Mar 13 10:20:26 crc kubenswrapper[4632]: I0313 10:20:26.792610 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tztd9" event={"ID":"8f51973a-596d-40dc-9b5b-b2c95a60ea0c","Type":"ContainerStarted","Data":"2ea54c19d6a4ac011be67fd99761d87872512667c4d68bb1f20f2ba64f27c6b9"} Mar 13 10:20:26 crc kubenswrapper[4632]: I0313 10:20:26.792908 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tztd9" event={"ID":"8f51973a-596d-40dc-9b5b-b2c95a60ea0c","Type":"ContainerStarted","Data":"7369028ab3380b8162926288f2a66e0780eba331066b6d04106bd606debba692"} Mar 13 10:20:26 crc kubenswrapper[4632]: I0313 10:20:26.792923 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tztd9" event={"ID":"8f51973a-596d-40dc-9b5b-b2c95a60ea0c","Type":"ContainerStarted","Data":"21d4836cf410138123a02393e40bd0af724c4c3a101092ac3a79a03eed782a1d"} Mar 13 10:20:26 crc kubenswrapper[4632]: I0313 10:20:26.793523 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-tztd9" Mar 13 10:20:26 crc kubenswrapper[4632]: I0313 10:20:26.841205 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-tztd9" podStartSLOduration=3.841183327 podStartE2EDuration="3.841183327s" podCreationTimestamp="2026-03-13 10:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:20:26.840297755 +0000 UTC m=+1000.862827888" watchObservedRunningTime="2026-03-13 10:20:26.841183327 +0000 UTC m=+1000.863713470" Mar 13 10:20:34 crc kubenswrapper[4632]: I0313 10:20:34.320404 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-62bwr" Mar 13 10:20:36 crc kubenswrapper[4632]: I0313 10:20:36.049394 4632 generic.go:334] "Generic (PLEG): container finished" podID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerID="5d35a7bcb38219d00926d9e32f8bdc06f3388e1497f4e4293b861382b0d06c02" exitCode=0 Mar 13 10:20:36 crc kubenswrapper[4632]: I0313 10:20:36.057247 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" Mar 13 10:20:36 crc kubenswrapper[4632]: I0313 10:20:36.057277 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvlxj" 
event={"ID":"85b58bb0-63f5-4c85-8759-ce28d2c7db58","Type":"ContainerDied","Data":"5d35a7bcb38219d00926d9e32f8bdc06f3388e1497f4e4293b861382b0d06c02"} Mar 13 10:20:36 crc kubenswrapper[4632]: I0313 10:20:36.057297 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" event={"ID":"b33bccd8-6f28-4ffe-9500-069a52aab5df","Type":"ContainerStarted","Data":"6b277b3621566e90d2ea8a306394444270adbf026557398f5520284a63c356df"} Mar 13 10:20:36 crc kubenswrapper[4632]: I0313 10:20:36.140381 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" podStartSLOduration=2.474485731 podStartE2EDuration="13.140357057s" podCreationTimestamp="2026-03-13 10:20:23 +0000 UTC" firstStartedPulling="2026-03-13 10:20:25.090932867 +0000 UTC m=+999.113463000" lastFinishedPulling="2026-03-13 10:20:35.756804193 +0000 UTC m=+1009.779334326" observedRunningTime="2026-03-13 10:20:36.118626555 +0000 UTC m=+1010.141156698" watchObservedRunningTime="2026-03-13 10:20:36.140357057 +0000 UTC m=+1010.162887190" Mar 13 10:20:37 crc kubenswrapper[4632]: I0313 10:20:37.064178 4632 generic.go:334] "Generic (PLEG): container finished" podID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerID="7f411b298d721dfc2f203bd0d068c8eb0744299f10d8bf319ffb996f2c67fa11" exitCode=0 Mar 13 10:20:37 crc kubenswrapper[4632]: I0313 10:20:37.064242 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvlxj" event={"ID":"85b58bb0-63f5-4c85-8759-ce28d2c7db58","Type":"ContainerDied","Data":"7f411b298d721dfc2f203bd0d068c8eb0744299f10d8bf319ffb996f2c67fa11"} Mar 13 10:20:38 crc kubenswrapper[4632]: I0313 10:20:38.071392 4632 generic.go:334] "Generic (PLEG): container finished" podID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerID="eca84017b08d4a3daf0b100d49fcc9129ce1ea0f23816d19bcc8797ce9117ebd" exitCode=0 Mar 13 10:20:38 crc kubenswrapper[4632]: I0313 10:20:38.071438 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvlxj" event={"ID":"85b58bb0-63f5-4c85-8759-ce28d2c7db58","Type":"ContainerDied","Data":"eca84017b08d4a3daf0b100d49fcc9129ce1ea0f23816d19bcc8797ce9117ebd"} Mar 13 10:20:39 crc kubenswrapper[4632]: I0313 10:20:39.080106 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvlxj" event={"ID":"85b58bb0-63f5-4c85-8759-ce28d2c7db58","Type":"ContainerStarted","Data":"ac86591594d77be34bb7d30cd47d6e21c76bb21f414a9eee2975a4ab00905070"} Mar 13 10:20:39 crc kubenswrapper[4632]: I0313 10:20:39.080143 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvlxj" event={"ID":"85b58bb0-63f5-4c85-8759-ce28d2c7db58","Type":"ContainerStarted","Data":"f1bb9b44e61c382d8baef6eea87e50c583aaf39ef98e6660ba4c14110edbbd50"} Mar 13 10:20:39 crc kubenswrapper[4632]: I0313 10:20:39.080154 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvlxj" event={"ID":"85b58bb0-63f5-4c85-8759-ce28d2c7db58","Type":"ContainerStarted","Data":"7460db0e48987b1f59a273cebeb41489b6ba6de15b26f36b4087497d7870a4f2"} Mar 13 10:20:39 crc kubenswrapper[4632]: I0313 10:20:39.080163 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvlxj" event={"ID":"85b58bb0-63f5-4c85-8759-ce28d2c7db58","Type":"ContainerStarted","Data":"856e616e36ae078de77d4b4ce66fe7580c0b8b24fdb8aaa99573dc768bc62e8a"} Mar 13 10:20:39 crc kubenswrapper[4632]: I0313 10:20:39.080172 4632 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-lvlxj" event={"ID":"85b58bb0-63f5-4c85-8759-ce28d2c7db58","Type":"ContainerStarted","Data":"59d96a138cf7adeb4d273db270ca9998a9b75447d7d6c92e875e751afba3f9b8"} Mar 13 10:20:39 crc kubenswrapper[4632]: I0313 10:20:39.080180 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvlxj" event={"ID":"85b58bb0-63f5-4c85-8759-ce28d2c7db58","Type":"ContainerStarted","Data":"bfa960455f207db762d901f5af9c2b35ade8cd1c5f43d1bc1d4a40a5bfd8199d"} Mar 13 10:20:39 crc kubenswrapper[4632]: I0313 10:20:39.081301 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:39 crc kubenswrapper[4632]: I0313 10:20:39.109780 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-lvlxj" podStartSLOduration=4.699453806 podStartE2EDuration="16.109758674s" podCreationTimestamp="2026-03-13 10:20:23 +0000 UTC" firstStartedPulling="2026-03-13 10:20:24.326331513 +0000 UTC m=+998.348861656" lastFinishedPulling="2026-03-13 10:20:35.736636391 +0000 UTC m=+1009.759166524" observedRunningTime="2026-03-13 10:20:39.107710682 +0000 UTC m=+1013.130240835" watchObservedRunningTime="2026-03-13 10:20:39.109758674 +0000 UTC m=+1013.132288807" Mar 13 10:20:39 crc kubenswrapper[4632]: I0313 10:20:39.180523 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:39 crc kubenswrapper[4632]: I0313 10:20:39.222777 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:40 crc kubenswrapper[4632]: I0313 10:20:40.461352 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:20:40 crc kubenswrapper[4632]: I0313 10:20:40.461654 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:20:45 crc kubenswrapper[4632]: I0313 10:20:45.783411 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-tztd9" Mar 13 10:20:48 crc kubenswrapper[4632]: I0313 10:20:48.483033 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2jqnk"] Mar 13 10:20:48 crc kubenswrapper[4632]: I0313 10:20:48.484087 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2jqnk" Mar 13 10:20:48 crc kubenswrapper[4632]: I0313 10:20:48.487546 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 13 10:20:48 crc kubenswrapper[4632]: I0313 10:20:48.488332 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 13 10:20:48 crc kubenswrapper[4632]: I0313 10:20:48.488608 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-zrjwg" Mar 13 10:20:48 crc kubenswrapper[4632]: I0313 10:20:48.515704 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2jqnk"] Mar 13 10:20:48 crc kubenswrapper[4632]: I0313 10:20:48.606024 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h96l4\" (UniqueName: \"kubernetes.io/projected/7de02b7f-4e1c-4ba1-9659-c864e9080092-kube-api-access-h96l4\") pod \"openstack-operator-index-2jqnk\" (UID: \"7de02b7f-4e1c-4ba1-9659-c864e9080092\") " pod="openstack-operators/openstack-operator-index-2jqnk" Mar 13 10:20:48 crc kubenswrapper[4632]: I0313 10:20:48.706813 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h96l4\" (UniqueName: \"kubernetes.io/projected/7de02b7f-4e1c-4ba1-9659-c864e9080092-kube-api-access-h96l4\") pod \"openstack-operator-index-2jqnk\" (UID: \"7de02b7f-4e1c-4ba1-9659-c864e9080092\") " pod="openstack-operators/openstack-operator-index-2jqnk" Mar 13 10:20:48 crc kubenswrapper[4632]: I0313 10:20:48.728480 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h96l4\" (UniqueName: \"kubernetes.io/projected/7de02b7f-4e1c-4ba1-9659-c864e9080092-kube-api-access-h96l4\") pod \"openstack-operator-index-2jqnk\" (UID: \"7de02b7f-4e1c-4ba1-9659-c864e9080092\") " pod="openstack-operators/openstack-operator-index-2jqnk" Mar 13 10:20:48 crc kubenswrapper[4632]: I0313 10:20:48.811877 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2jqnk" Mar 13 10:20:49 crc kubenswrapper[4632]: I0313 10:20:49.044323 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2jqnk"] Mar 13 10:20:49 crc kubenswrapper[4632]: I0313 10:20:49.139985 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2jqnk" event={"ID":"7de02b7f-4e1c-4ba1-9659-c864e9080092","Type":"ContainerStarted","Data":"04de8e81d8d4902a0ce7d3552a9cd405d1dd0a14387d654e4100f584583eff01"} Mar 13 10:20:53 crc kubenswrapper[4632]: I0313 10:20:53.189576 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2jqnk" event={"ID":"7de02b7f-4e1c-4ba1-9659-c864e9080092","Type":"ContainerStarted","Data":"956a137089f814594898450d78be9fb64aa26d046a610f28da0c9756520f90c8"} Mar 13 10:20:53 crc kubenswrapper[4632]: I0313 10:20:53.664260 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2jqnk" podStartSLOduration=2.481847726 podStartE2EDuration="5.664235343s" podCreationTimestamp="2026-03-13 10:20:48 +0000 UTC" firstStartedPulling="2026-03-13 10:20:49.073227875 +0000 UTC m=+1023.095758028" lastFinishedPulling="2026-03-13 10:20:52.255615512 +0000 UTC m=+1026.278145645" observedRunningTime="2026-03-13 10:20:53.21167938 +0000 UTC m=+1027.234209513" watchObservedRunningTime="2026-03-13 10:20:53.664235343 +0000 UTC m=+1027.686765476" Mar 13 10:20:53 crc kubenswrapper[4632]: I0313 10:20:53.665615 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j2w2m"] Mar 13 10:20:53 crc kubenswrapper[4632]: I0313 10:20:53.667049 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:20:53 crc kubenswrapper[4632]: I0313 10:20:53.685542 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j2w2m"] Mar 13 10:20:53 crc kubenswrapper[4632]: I0313 10:20:53.770981 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e13797-40e9-4942-a7b5-6174fa448654-catalog-content\") pod \"certified-operators-j2w2m\" (UID: \"95e13797-40e9-4942-a7b5-6174fa448654\") " pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:20:53 crc kubenswrapper[4632]: I0313 10:20:53.771038 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e13797-40e9-4942-a7b5-6174fa448654-utilities\") pod \"certified-operators-j2w2m\" (UID: \"95e13797-40e9-4942-a7b5-6174fa448654\") " pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:20:53 crc kubenswrapper[4632]: I0313 10:20:53.771217 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48kk8\" (UniqueName: \"kubernetes.io/projected/95e13797-40e9-4942-a7b5-6174fa448654-kube-api-access-48kk8\") pod \"certified-operators-j2w2m\" (UID: \"95e13797-40e9-4942-a7b5-6174fa448654\") " pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:20:53 crc kubenswrapper[4632]: I0313 10:20:53.873778 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48kk8\" (UniqueName: \"kubernetes.io/projected/95e13797-40e9-4942-a7b5-6174fa448654-kube-api-access-48kk8\") pod \"certified-operators-j2w2m\" (UID: \"95e13797-40e9-4942-a7b5-6174fa448654\") " pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:20:53 crc kubenswrapper[4632]: I0313 10:20:53.873860 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e13797-40e9-4942-a7b5-6174fa448654-catalog-content\") pod \"certified-operators-j2w2m\" (UID: \"95e13797-40e9-4942-a7b5-6174fa448654\") " pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:20:53 crc kubenswrapper[4632]: I0313 10:20:53.873883 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e13797-40e9-4942-a7b5-6174fa448654-utilities\") pod \"certified-operators-j2w2m\" (UID: \"95e13797-40e9-4942-a7b5-6174fa448654\") " pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:20:53 crc kubenswrapper[4632]: I0313 10:20:53.874465 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e13797-40e9-4942-a7b5-6174fa448654-utilities\") pod \"certified-operators-j2w2m\" (UID: \"95e13797-40e9-4942-a7b5-6174fa448654\") " pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:20:53 crc kubenswrapper[4632]: I0313 10:20:53.874527 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e13797-40e9-4942-a7b5-6174fa448654-catalog-content\") pod \"certified-operators-j2w2m\" (UID: \"95e13797-40e9-4942-a7b5-6174fa448654\") " pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:20:53 crc kubenswrapper[4632]: I0313 10:20:53.911524 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-48kk8\" (UniqueName: \"kubernetes.io/projected/95e13797-40e9-4942-a7b5-6174fa448654-kube-api-access-48kk8\") pod \"certified-operators-j2w2m\" (UID: \"95e13797-40e9-4942-a7b5-6174fa448654\") " pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:20:53 crc kubenswrapper[4632]: I0313 10:20:53.984457 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:20:54 crc kubenswrapper[4632]: I0313 10:20:54.185047 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-lvlxj" Mar 13 10:20:54 crc kubenswrapper[4632]: I0313 10:20:54.565449 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j2w2m"] Mar 13 10:20:54 crc kubenswrapper[4632]: I0313 10:20:54.810570 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" Mar 13 10:20:55 crc kubenswrapper[4632]: I0313 10:20:55.203417 4632 generic.go:334] "Generic (PLEG): container finished" podID="95e13797-40e9-4942-a7b5-6174fa448654" containerID="6e108feffd870d15032d03dad27338b300eaf2cbdd55be7000c4f4a141b8f3ad" exitCode=0 Mar 13 10:20:55 crc kubenswrapper[4632]: I0313 10:20:55.203452 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j2w2m" event={"ID":"95e13797-40e9-4942-a7b5-6174fa448654","Type":"ContainerDied","Data":"6e108feffd870d15032d03dad27338b300eaf2cbdd55be7000c4f4a141b8f3ad"} Mar 13 10:20:55 crc kubenswrapper[4632]: I0313 10:20:55.203476 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j2w2m" event={"ID":"95e13797-40e9-4942-a7b5-6174fa448654","Type":"ContainerStarted","Data":"a5d6997e64c1bba55b267a1de22e67ed4e90ac2795eea19781b2283baf2f9305"} Mar 13 10:20:56 crc kubenswrapper[4632]: I0313 10:20:56.211438 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j2w2m" event={"ID":"95e13797-40e9-4942-a7b5-6174fa448654","Type":"ContainerStarted","Data":"eff3d4047aa48452ec24bfeecbd51ff4236d75ad1f2c6051aceaf13600d9b721"} Mar 13 10:20:57 crc kubenswrapper[4632]: I0313 10:20:57.218536 4632 generic.go:334] "Generic (PLEG): container finished" podID="95e13797-40e9-4942-a7b5-6174fa448654" containerID="eff3d4047aa48452ec24bfeecbd51ff4236d75ad1f2c6051aceaf13600d9b721" exitCode=0 Mar 13 10:20:57 crc kubenswrapper[4632]: I0313 10:20:57.218585 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j2w2m" event={"ID":"95e13797-40e9-4942-a7b5-6174fa448654","Type":"ContainerDied","Data":"eff3d4047aa48452ec24bfeecbd51ff4236d75ad1f2c6051aceaf13600d9b721"} Mar 13 10:20:57 crc kubenswrapper[4632]: I0313 10:20:57.667901 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fdtl7"] Mar 13 10:20:57 crc kubenswrapper[4632]: I0313 10:20:57.669887 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fdtl7" Mar 13 10:20:57 crc kubenswrapper[4632]: I0313 10:20:57.679822 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fdtl7"] Mar 13 10:20:57 crc kubenswrapper[4632]: I0313 10:20:57.729231 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-catalog-content\") pod \"community-operators-fdtl7\" (UID: \"a01bcaf0-e2c1-495b-bc6d-a57978c7817b\") " pod="openshift-marketplace/community-operators-fdtl7" Mar 13 10:20:57 crc kubenswrapper[4632]: I0313 10:20:57.729327 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-utilities\") pod \"community-operators-fdtl7\" (UID: \"a01bcaf0-e2c1-495b-bc6d-a57978c7817b\") " pod="openshift-marketplace/community-operators-fdtl7" Mar 13 10:20:57 crc kubenswrapper[4632]: I0313 10:20:57.729408 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgb8q\" (UniqueName: \"kubernetes.io/projected/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-kube-api-access-sgb8q\") pod \"community-operators-fdtl7\" (UID: \"a01bcaf0-e2c1-495b-bc6d-a57978c7817b\") " pod="openshift-marketplace/community-operators-fdtl7" Mar 13 10:20:57 crc kubenswrapper[4632]: I0313 10:20:57.830177 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-catalog-content\") pod \"community-operators-fdtl7\" (UID: \"a01bcaf0-e2c1-495b-bc6d-a57978c7817b\") " pod="openshift-marketplace/community-operators-fdtl7" Mar 13 10:20:57 crc kubenswrapper[4632]: I0313 10:20:57.832107 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-utilities\") pod \"community-operators-fdtl7\" (UID: \"a01bcaf0-e2c1-495b-bc6d-a57978c7817b\") " pod="openshift-marketplace/community-operators-fdtl7" Mar 13 10:20:57 crc kubenswrapper[4632]: I0313 10:20:57.832157 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgb8q\" (UniqueName: \"kubernetes.io/projected/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-kube-api-access-sgb8q\") pod \"community-operators-fdtl7\" (UID: \"a01bcaf0-e2c1-495b-bc6d-a57978c7817b\") " pod="openshift-marketplace/community-operators-fdtl7" Mar 13 10:20:57 crc kubenswrapper[4632]: I0313 10:20:57.831146 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-catalog-content\") pod \"community-operators-fdtl7\" (UID: \"a01bcaf0-e2c1-495b-bc6d-a57978c7817b\") " pod="openshift-marketplace/community-operators-fdtl7" Mar 13 10:20:57 crc kubenswrapper[4632]: I0313 10:20:57.832784 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-utilities\") pod \"community-operators-fdtl7\" (UID: \"a01bcaf0-e2c1-495b-bc6d-a57978c7817b\") " pod="openshift-marketplace/community-operators-fdtl7" Mar 13 10:20:57 crc kubenswrapper[4632]: I0313 10:20:57.860740 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sgb8q\" (UniqueName: \"kubernetes.io/projected/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-kube-api-access-sgb8q\") pod \"community-operators-fdtl7\" (UID: \"a01bcaf0-e2c1-495b-bc6d-a57978c7817b\") " pod="openshift-marketplace/community-operators-fdtl7" Mar 13 10:20:57 crc kubenswrapper[4632]: I0313 10:20:57.966686 4632 scope.go:117] "RemoveContainer" containerID="fcf5d9f69f7435b287086bfcb908c42e9330ebc2ef407226d11b60f145efd8de" Mar 13 10:20:57 crc kubenswrapper[4632]: I0313 10:20:57.987716 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fdtl7" Mar 13 10:20:58 crc kubenswrapper[4632]: I0313 10:20:58.254281 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j2w2m" event={"ID":"95e13797-40e9-4942-a7b5-6174fa448654","Type":"ContainerStarted","Data":"f3b5913884cb0cd302a4b84c7cdf0903689fa6580713d76d4c3ad0f4d72eb034"} Mar 13 10:20:58 crc kubenswrapper[4632]: I0313 10:20:58.293068 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j2w2m" podStartSLOduration=2.758044938 podStartE2EDuration="5.293049504s" podCreationTimestamp="2026-03-13 10:20:53 +0000 UTC" firstStartedPulling="2026-03-13 10:20:55.205126562 +0000 UTC m=+1029.227656705" lastFinishedPulling="2026-03-13 10:20:57.740131138 +0000 UTC m=+1031.762661271" observedRunningTime="2026-03-13 10:20:58.292338696 +0000 UTC m=+1032.314868829" watchObservedRunningTime="2026-03-13 10:20:58.293049504 +0000 UTC m=+1032.315579637" Mar 13 10:20:58 crc kubenswrapper[4632]: I0313 10:20:58.721788 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fdtl7"] Mar 13 10:20:58 crc kubenswrapper[4632]: I0313 10:20:58.812921 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-2jqnk" Mar 13 10:20:58 crc kubenswrapper[4632]: I0313 10:20:58.812983 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-2jqnk" Mar 13 10:20:58 crc kubenswrapper[4632]: I0313 10:20:58.850853 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-2jqnk" Mar 13 10:20:59 crc kubenswrapper[4632]: I0313 10:20:59.261499 4632 generic.go:334] "Generic (PLEG): container finished" podID="a01bcaf0-e2c1-495b-bc6d-a57978c7817b" containerID="e0b81f7099d2b6e974290907876a0f00faa065b0c428d84017f9ee0db229bc73" exitCode=0 Mar 13 10:20:59 crc kubenswrapper[4632]: I0313 10:20:59.261564 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdtl7" event={"ID":"a01bcaf0-e2c1-495b-bc6d-a57978c7817b","Type":"ContainerDied","Data":"e0b81f7099d2b6e974290907876a0f00faa065b0c428d84017f9ee0db229bc73"} Mar 13 10:20:59 crc kubenswrapper[4632]: I0313 10:20:59.261605 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdtl7" event={"ID":"a01bcaf0-e2c1-495b-bc6d-a57978c7817b","Type":"ContainerStarted","Data":"b009a6765de2ecd804f8d033b53e49abb32cd27ba05ed7eacca8b430a75a2575"} Mar 13 10:20:59 crc kubenswrapper[4632]: I0313 10:20:59.295881 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-2jqnk" Mar 13 10:21:01 crc kubenswrapper[4632]: I0313 10:21:01.463434 4632 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x5lbm"] Mar 13 10:21:01 crc kubenswrapper[4632]: I0313 10:21:01.465251 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x5lbm" Mar 13 10:21:01 crc kubenswrapper[4632]: I0313 10:21:01.477691 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x5lbm"] Mar 13 10:21:01 crc kubenswrapper[4632]: I0313 10:21:01.587739 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d96ff75-88fd-4637-9199-806314e5276d-utilities\") pod \"redhat-marketplace-x5lbm\" (UID: \"2d96ff75-88fd-4637-9199-806314e5276d\") " pod="openshift-marketplace/redhat-marketplace-x5lbm" Mar 13 10:21:01 crc kubenswrapper[4632]: I0313 10:21:01.587823 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7vjg\" (UniqueName: \"kubernetes.io/projected/2d96ff75-88fd-4637-9199-806314e5276d-kube-api-access-c7vjg\") pod \"redhat-marketplace-x5lbm\" (UID: \"2d96ff75-88fd-4637-9199-806314e5276d\") " pod="openshift-marketplace/redhat-marketplace-x5lbm" Mar 13 10:21:01 crc kubenswrapper[4632]: I0313 10:21:01.587872 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d96ff75-88fd-4637-9199-806314e5276d-catalog-content\") pod \"redhat-marketplace-x5lbm\" (UID: \"2d96ff75-88fd-4637-9199-806314e5276d\") " pod="openshift-marketplace/redhat-marketplace-x5lbm" Mar 13 10:21:01 crc kubenswrapper[4632]: I0313 10:21:01.689369 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7vjg\" (UniqueName: \"kubernetes.io/projected/2d96ff75-88fd-4637-9199-806314e5276d-kube-api-access-c7vjg\") pod \"redhat-marketplace-x5lbm\" (UID: \"2d96ff75-88fd-4637-9199-806314e5276d\") " pod="openshift-marketplace/redhat-marketplace-x5lbm" Mar 13 10:21:01 crc kubenswrapper[4632]: I0313 10:21:01.689466 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d96ff75-88fd-4637-9199-806314e5276d-catalog-content\") pod \"redhat-marketplace-x5lbm\" (UID: \"2d96ff75-88fd-4637-9199-806314e5276d\") " pod="openshift-marketplace/redhat-marketplace-x5lbm" Mar 13 10:21:01 crc kubenswrapper[4632]: I0313 10:21:01.689519 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d96ff75-88fd-4637-9199-806314e5276d-utilities\") pod \"redhat-marketplace-x5lbm\" (UID: \"2d96ff75-88fd-4637-9199-806314e5276d\") " pod="openshift-marketplace/redhat-marketplace-x5lbm" Mar 13 10:21:01 crc kubenswrapper[4632]: I0313 10:21:01.690226 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d96ff75-88fd-4637-9199-806314e5276d-catalog-content\") pod \"redhat-marketplace-x5lbm\" (UID: \"2d96ff75-88fd-4637-9199-806314e5276d\") " pod="openshift-marketplace/redhat-marketplace-x5lbm" Mar 13 10:21:01 crc kubenswrapper[4632]: I0313 10:21:01.690249 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d96ff75-88fd-4637-9199-806314e5276d-utilities\") pod \"redhat-marketplace-x5lbm\" 
(UID: \"2d96ff75-88fd-4637-9199-806314e5276d\") " pod="openshift-marketplace/redhat-marketplace-x5lbm" Mar 13 10:21:01 crc kubenswrapper[4632]: I0313 10:21:01.711654 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7vjg\" (UniqueName: \"kubernetes.io/projected/2d96ff75-88fd-4637-9199-806314e5276d-kube-api-access-c7vjg\") pod \"redhat-marketplace-x5lbm\" (UID: \"2d96ff75-88fd-4637-9199-806314e5276d\") " pod="openshift-marketplace/redhat-marketplace-x5lbm" Mar 13 10:21:01 crc kubenswrapper[4632]: I0313 10:21:01.791213 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x5lbm" Mar 13 10:21:03 crc kubenswrapper[4632]: I0313 10:21:03.985641 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:21:03 crc kubenswrapper[4632]: I0313 10:21:03.986021 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:21:05 crc kubenswrapper[4632]: I0313 10:21:05.032930 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-j2w2m" podUID="95e13797-40e9-4942-a7b5-6174fa448654" containerName="registry-server" probeResult="failure" output=< Mar 13 10:21:05 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:21:05 crc kubenswrapper[4632]: > Mar 13 10:21:05 crc kubenswrapper[4632]: I0313 10:21:05.290816 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x5lbm"] Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.308353 4632 generic.go:334] "Generic (PLEG): container finished" podID="2d96ff75-88fd-4637-9199-806314e5276d" containerID="9d09e614abc972c9bc41b522cbf719729037565923e2784a67b05584e2614a0c" exitCode=0 Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.308451 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5lbm" event={"ID":"2d96ff75-88fd-4637-9199-806314e5276d","Type":"ContainerDied","Data":"9d09e614abc972c9bc41b522cbf719729037565923e2784a67b05584e2614a0c"} Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.308750 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5lbm" event={"ID":"2d96ff75-88fd-4637-9199-806314e5276d","Type":"ContainerStarted","Data":"21e3226d1d96fc021bc557ff5418b6dec4ea17fe511f69b4fe8609410d428008"} Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.311925 4632 generic.go:334] "Generic (PLEG): container finished" podID="a01bcaf0-e2c1-495b-bc6d-a57978c7817b" containerID="90d2b05d55e78b9f9829c2ee4bf7bbc01510b17dfbff9b47dda76cd10b610010" exitCode=0 Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.311978 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdtl7" event={"ID":"a01bcaf0-e2c1-495b-bc6d-a57978c7817b","Type":"ContainerDied","Data":"90d2b05d55e78b9f9829c2ee4bf7bbc01510b17dfbff9b47dda76cd10b610010"} Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.385865 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m"] Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.387018 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.394862 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wbvdr" Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.410984 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m"] Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.476125 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrkd5\" (UniqueName: \"kubernetes.io/projected/13abf84a-b499-4439-ab4e-1c34bcf07308-kube-api-access-wrkd5\") pod \"cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m\" (UID: \"13abf84a-b499-4439-ab4e-1c34bcf07308\") " pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.476232 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13abf84a-b499-4439-ab4e-1c34bcf07308-util\") pod \"cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m\" (UID: \"13abf84a-b499-4439-ab4e-1c34bcf07308\") " pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.476293 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13abf84a-b499-4439-ab4e-1c34bcf07308-bundle\") pod \"cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m\" (UID: \"13abf84a-b499-4439-ab4e-1c34bcf07308\") " pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.577132 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrkd5\" (UniqueName: \"kubernetes.io/projected/13abf84a-b499-4439-ab4e-1c34bcf07308-kube-api-access-wrkd5\") pod \"cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m\" (UID: \"13abf84a-b499-4439-ab4e-1c34bcf07308\") " pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.577238 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13abf84a-b499-4439-ab4e-1c34bcf07308-util\") pod \"cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m\" (UID: \"13abf84a-b499-4439-ab4e-1c34bcf07308\") " pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.577295 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13abf84a-b499-4439-ab4e-1c34bcf07308-bundle\") pod \"cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m\" (UID: \"13abf84a-b499-4439-ab4e-1c34bcf07308\") " pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.577994 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/13abf84a-b499-4439-ab4e-1c34bcf07308-bundle\") pod \"cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m\" (UID: \"13abf84a-b499-4439-ab4e-1c34bcf07308\") " pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.578591 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13abf84a-b499-4439-ab4e-1c34bcf07308-util\") pod \"cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m\" (UID: \"13abf84a-b499-4439-ab4e-1c34bcf07308\") " pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.604197 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrkd5\" (UniqueName: \"kubernetes.io/projected/13abf84a-b499-4439-ab4e-1c34bcf07308-kube-api-access-wrkd5\") pod \"cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m\" (UID: \"13abf84a-b499-4439-ab4e-1c34bcf07308\") " pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.701304 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" Mar 13 10:21:06 crc kubenswrapper[4632]: I0313 10:21:06.997758 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m"] Mar 13 10:21:07 crc kubenswrapper[4632]: W0313 10:21:07.026954 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13abf84a_b499_4439_ab4e_1c34bcf07308.slice/crio-c2b7069b8c86dedbcbc82d2f9138beb8a86a6fd02f2f8774925b78243b5b613e WatchSource:0}: Error finding container c2b7069b8c86dedbcbc82d2f9138beb8a86a6fd02f2f8774925b78243b5b613e: Status 404 returned error can't find the container with id c2b7069b8c86dedbcbc82d2f9138beb8a86a6fd02f2f8774925b78243b5b613e Mar 13 10:21:07 crc kubenswrapper[4632]: I0313 10:21:07.321604 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5lbm" event={"ID":"2d96ff75-88fd-4637-9199-806314e5276d","Type":"ContainerStarted","Data":"559890583b5bdac795810d916d2ba129170ca28c94f2aae83a30fbc62b754214"} Mar 13 10:21:07 crc kubenswrapper[4632]: I0313 10:21:07.324977 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdtl7" event={"ID":"a01bcaf0-e2c1-495b-bc6d-a57978c7817b","Type":"ContainerStarted","Data":"e2fda5f5d13b5663527978c3fcfd3cccd016f3f222280313a4a0c5c88b5212d7"} Mar 13 10:21:07 crc kubenswrapper[4632]: I0313 10:21:07.326779 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" event={"ID":"13abf84a-b499-4439-ab4e-1c34bcf07308","Type":"ContainerStarted","Data":"7318b74c846ee470b6f66c429df22fea7568a67857fa4aca7afcbc6ac2e7ef17"} Mar 13 10:21:07 crc kubenswrapper[4632]: I0313 10:21:07.326901 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" event={"ID":"13abf84a-b499-4439-ab4e-1c34bcf07308","Type":"ContainerStarted","Data":"c2b7069b8c86dedbcbc82d2f9138beb8a86a6fd02f2f8774925b78243b5b613e"} Mar 13 
10:21:07 crc kubenswrapper[4632]: I0313 10:21:07.388426 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fdtl7" podStartSLOduration=2.7759761210000002 podStartE2EDuration="10.388405803s" podCreationTimestamp="2026-03-13 10:20:57 +0000 UTC" firstStartedPulling="2026-03-13 10:20:59.263106421 +0000 UTC m=+1033.285636554" lastFinishedPulling="2026-03-13 10:21:06.875536103 +0000 UTC m=+1040.898066236" observedRunningTime="2026-03-13 10:21:07.383778523 +0000 UTC m=+1041.406308666" watchObservedRunningTime="2026-03-13 10:21:07.388405803 +0000 UTC m=+1041.410935936" Mar 13 10:21:07 crc kubenswrapper[4632]: I0313 10:21:07.988421 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fdtl7" Mar 13 10:21:07 crc kubenswrapper[4632]: I0313 10:21:07.988589 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fdtl7" Mar 13 10:21:08 crc kubenswrapper[4632]: I0313 10:21:08.335126 4632 generic.go:334] "Generic (PLEG): container finished" podID="2d96ff75-88fd-4637-9199-806314e5276d" containerID="559890583b5bdac795810d916d2ba129170ca28c94f2aae83a30fbc62b754214" exitCode=0 Mar 13 10:21:08 crc kubenswrapper[4632]: I0313 10:21:08.335203 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5lbm" event={"ID":"2d96ff75-88fd-4637-9199-806314e5276d","Type":"ContainerDied","Data":"559890583b5bdac795810d916d2ba129170ca28c94f2aae83a30fbc62b754214"} Mar 13 10:21:09 crc kubenswrapper[4632]: I0313 10:21:09.039818 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fdtl7" podUID="a01bcaf0-e2c1-495b-bc6d-a57978c7817b" containerName="registry-server" probeResult="failure" output=< Mar 13 10:21:09 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:21:09 crc kubenswrapper[4632]: > Mar 13 10:21:09 crc kubenswrapper[4632]: I0313 10:21:09.345208 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5lbm" event={"ID":"2d96ff75-88fd-4637-9199-806314e5276d","Type":"ContainerStarted","Data":"db96b68191a6b558bd0777d22d1fa122614ae61dec4e51a4b871ee53f8b057b3"} Mar 13 10:21:09 crc kubenswrapper[4632]: I0313 10:21:09.347108 4632 generic.go:334] "Generic (PLEG): container finished" podID="13abf84a-b499-4439-ab4e-1c34bcf07308" containerID="7318b74c846ee470b6f66c429df22fea7568a67857fa4aca7afcbc6ac2e7ef17" exitCode=0 Mar 13 10:21:09 crc kubenswrapper[4632]: I0313 10:21:09.347601 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" event={"ID":"13abf84a-b499-4439-ab4e-1c34bcf07308","Type":"ContainerDied","Data":"7318b74c846ee470b6f66c429df22fea7568a67857fa4aca7afcbc6ac2e7ef17"} Mar 13 10:21:09 crc kubenswrapper[4632]: I0313 10:21:09.370714 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x5lbm" podStartSLOduration=5.626338865 podStartE2EDuration="8.370698194s" podCreationTimestamp="2026-03-13 10:21:01 +0000 UTC" firstStartedPulling="2026-03-13 10:21:06.311073387 +0000 UTC m=+1040.333603530" lastFinishedPulling="2026-03-13 10:21:09.055432726 +0000 UTC m=+1043.077962859" observedRunningTime="2026-03-13 10:21:09.367309422 +0000 UTC m=+1043.389839555" watchObservedRunningTime="2026-03-13 10:21:09.370698194 
+0000 UTC m=+1043.393228327" Mar 13 10:21:10 crc kubenswrapper[4632]: I0313 10:21:10.356063 4632 generic.go:334] "Generic (PLEG): container finished" podID="13abf84a-b499-4439-ab4e-1c34bcf07308" containerID="45880f5326c1f4d4af7e9ddd4f6f22fd9085395311e7514541dd5ff4a8ffb09d" exitCode=0 Mar 13 10:21:10 crc kubenswrapper[4632]: I0313 10:21:10.356134 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" event={"ID":"13abf84a-b499-4439-ab4e-1c34bcf07308","Type":"ContainerDied","Data":"45880f5326c1f4d4af7e9ddd4f6f22fd9085395311e7514541dd5ff4a8ffb09d"} Mar 13 10:21:10 crc kubenswrapper[4632]: I0313 10:21:10.460754 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:21:10 crc kubenswrapper[4632]: I0313 10:21:10.460853 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:21:11 crc kubenswrapper[4632]: I0313 10:21:11.364654 4632 generic.go:334] "Generic (PLEG): container finished" podID="13abf84a-b499-4439-ab4e-1c34bcf07308" containerID="6116cafc17e787a40f5b468c06cdc737d5e7d0fe51a208dd653b4773ccf5ac26" exitCode=0 Mar 13 10:21:11 crc kubenswrapper[4632]: I0313 10:21:11.364743 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" event={"ID":"13abf84a-b499-4439-ab4e-1c34bcf07308","Type":"ContainerDied","Data":"6116cafc17e787a40f5b468c06cdc737d5e7d0fe51a208dd653b4773ccf5ac26"} Mar 13 10:21:11 crc kubenswrapper[4632]: I0313 10:21:11.792185 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x5lbm" Mar 13 10:21:11 crc kubenswrapper[4632]: I0313 10:21:11.794006 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x5lbm" Mar 13 10:21:11 crc kubenswrapper[4632]: I0313 10:21:11.834257 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x5lbm" Mar 13 10:21:12 crc kubenswrapper[4632]: I0313 10:21:12.639723 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" Mar 13 10:21:12 crc kubenswrapper[4632]: I0313 10:21:12.782102 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13abf84a-b499-4439-ab4e-1c34bcf07308-bundle\") pod \"13abf84a-b499-4439-ab4e-1c34bcf07308\" (UID: \"13abf84a-b499-4439-ab4e-1c34bcf07308\") " Mar 13 10:21:12 crc kubenswrapper[4632]: I0313 10:21:12.782255 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrkd5\" (UniqueName: \"kubernetes.io/projected/13abf84a-b499-4439-ab4e-1c34bcf07308-kube-api-access-wrkd5\") pod \"13abf84a-b499-4439-ab4e-1c34bcf07308\" (UID: \"13abf84a-b499-4439-ab4e-1c34bcf07308\") " Mar 13 10:21:12 crc kubenswrapper[4632]: I0313 10:21:12.782288 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13abf84a-b499-4439-ab4e-1c34bcf07308-util\") pod \"13abf84a-b499-4439-ab4e-1c34bcf07308\" (UID: \"13abf84a-b499-4439-ab4e-1c34bcf07308\") " Mar 13 10:21:12 crc kubenswrapper[4632]: I0313 10:21:12.782923 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13abf84a-b499-4439-ab4e-1c34bcf07308-bundle" (OuterVolumeSpecName: "bundle") pod "13abf84a-b499-4439-ab4e-1c34bcf07308" (UID: "13abf84a-b499-4439-ab4e-1c34bcf07308"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:21:12 crc kubenswrapper[4632]: I0313 10:21:12.794084 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13abf84a-b499-4439-ab4e-1c34bcf07308-util" (OuterVolumeSpecName: "util") pod "13abf84a-b499-4439-ab4e-1c34bcf07308" (UID: "13abf84a-b499-4439-ab4e-1c34bcf07308"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:21:12 crc kubenswrapper[4632]: I0313 10:21:12.795279 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13abf84a-b499-4439-ab4e-1c34bcf07308-kube-api-access-wrkd5" (OuterVolumeSpecName: "kube-api-access-wrkd5") pod "13abf84a-b499-4439-ab4e-1c34bcf07308" (UID: "13abf84a-b499-4439-ab4e-1c34bcf07308"). InnerVolumeSpecName "kube-api-access-wrkd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:21:12 crc kubenswrapper[4632]: I0313 10:21:12.884736 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrkd5\" (UniqueName: \"kubernetes.io/projected/13abf84a-b499-4439-ab4e-1c34bcf07308-kube-api-access-wrkd5\") on node \"crc\" DevicePath \"\"" Mar 13 10:21:12 crc kubenswrapper[4632]: I0313 10:21:12.884776 4632 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13abf84a-b499-4439-ab4e-1c34bcf07308-util\") on node \"crc\" DevicePath \"\"" Mar 13 10:21:12 crc kubenswrapper[4632]: I0313 10:21:12.884787 4632 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13abf84a-b499-4439-ab4e-1c34bcf07308-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:21:13 crc kubenswrapper[4632]: I0313 10:21:13.382170 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" event={"ID":"13abf84a-b499-4439-ab4e-1c34bcf07308","Type":"ContainerDied","Data":"c2b7069b8c86dedbcbc82d2f9138beb8a86a6fd02f2f8774925b78243b5b613e"} Mar 13 10:21:13 crc kubenswrapper[4632]: I0313 10:21:13.382239 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2b7069b8c86dedbcbc82d2f9138beb8a86a6fd02f2f8774925b78243b5b613e" Mar 13 10:21:13 crc kubenswrapper[4632]: I0313 10:21:13.382203 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m" Mar 13 10:21:14 crc kubenswrapper[4632]: I0313 10:21:14.023772 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:21:14 crc kubenswrapper[4632]: I0313 10:21:14.064640 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:21:17 crc kubenswrapper[4632]: I0313 10:21:17.502777 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-865685cd99-ls9jq"] Mar 13 10:21:17 crc kubenswrapper[4632]: E0313 10:21:17.503120 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13abf84a-b499-4439-ab4e-1c34bcf07308" containerName="pull" Mar 13 10:21:17 crc kubenswrapper[4632]: I0313 10:21:17.503149 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="13abf84a-b499-4439-ab4e-1c34bcf07308" containerName="pull" Mar 13 10:21:17 crc kubenswrapper[4632]: E0313 10:21:17.503172 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13abf84a-b499-4439-ab4e-1c34bcf07308" containerName="extract" Mar 13 10:21:17 crc kubenswrapper[4632]: I0313 10:21:17.503182 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="13abf84a-b499-4439-ab4e-1c34bcf07308" containerName="extract" Mar 13 10:21:17 crc kubenswrapper[4632]: E0313 10:21:17.503194 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13abf84a-b499-4439-ab4e-1c34bcf07308" containerName="util" Mar 13 10:21:17 crc kubenswrapper[4632]: I0313 10:21:17.503203 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="13abf84a-b499-4439-ab4e-1c34bcf07308" containerName="util" Mar 13 10:21:17 crc kubenswrapper[4632]: I0313 10:21:17.503334 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="13abf84a-b499-4439-ab4e-1c34bcf07308" containerName="extract" Mar 
13 10:21:17 crc kubenswrapper[4632]: I0313 10:21:17.503827 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-865685cd99-ls9jq" Mar 13 10:21:17 crc kubenswrapper[4632]: I0313 10:21:17.508499 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-7gsqz" Mar 13 10:21:17 crc kubenswrapper[4632]: I0313 10:21:17.547096 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2npzm\" (UniqueName: \"kubernetes.io/projected/82fe7ef6-50a5-41d4-9419-787812e16bd6-kube-api-access-2npzm\") pod \"openstack-operator-controller-init-865685cd99-ls9jq\" (UID: \"82fe7ef6-50a5-41d4-9419-787812e16bd6\") " pod="openstack-operators/openstack-operator-controller-init-865685cd99-ls9jq" Mar 13 10:21:17 crc kubenswrapper[4632]: I0313 10:21:17.567191 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-865685cd99-ls9jq"] Mar 13 10:21:17 crc kubenswrapper[4632]: I0313 10:21:17.648225 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2npzm\" (UniqueName: \"kubernetes.io/projected/82fe7ef6-50a5-41d4-9419-787812e16bd6-kube-api-access-2npzm\") pod \"openstack-operator-controller-init-865685cd99-ls9jq\" (UID: \"82fe7ef6-50a5-41d4-9419-787812e16bd6\") " pod="openstack-operators/openstack-operator-controller-init-865685cd99-ls9jq" Mar 13 10:21:17 crc kubenswrapper[4632]: I0313 10:21:17.671201 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2npzm\" (UniqueName: \"kubernetes.io/projected/82fe7ef6-50a5-41d4-9419-787812e16bd6-kube-api-access-2npzm\") pod \"openstack-operator-controller-init-865685cd99-ls9jq\" (UID: \"82fe7ef6-50a5-41d4-9419-787812e16bd6\") " pod="openstack-operators/openstack-operator-controller-init-865685cd99-ls9jq" Mar 13 10:21:17 crc kubenswrapper[4632]: I0313 10:21:17.857314 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-865685cd99-ls9jq" Mar 13 10:21:17 crc kubenswrapper[4632]: I0313 10:21:17.857894 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j2w2m"] Mar 13 10:21:17 crc kubenswrapper[4632]: I0313 10:21:17.858203 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j2w2m" podUID="95e13797-40e9-4942-a7b5-6174fa448654" containerName="registry-server" containerID="cri-o://f3b5913884cb0cd302a4b84c7cdf0903689fa6580713d76d4c3ad0f4d72eb034" gracePeriod=2 Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.073714 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fdtl7" Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.161802 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fdtl7" Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.246823 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-865685cd99-ls9jq"] Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.365448 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.419665 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-865685cd99-ls9jq" event={"ID":"82fe7ef6-50a5-41d4-9419-787812e16bd6","Type":"ContainerStarted","Data":"84d12dff21de88c28f1e03859a2f3517c9de7ca42f5ece138cc13f2a26293e57"} Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.424853 4632 generic.go:334] "Generic (PLEG): container finished" podID="95e13797-40e9-4942-a7b5-6174fa448654" containerID="f3b5913884cb0cd302a4b84c7cdf0903689fa6580713d76d4c3ad0f4d72eb034" exitCode=0 Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.425680 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j2w2m" Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.425972 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j2w2m" event={"ID":"95e13797-40e9-4942-a7b5-6174fa448654","Type":"ContainerDied","Data":"f3b5913884cb0cd302a4b84c7cdf0903689fa6580713d76d4c3ad0f4d72eb034"} Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.426001 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j2w2m" event={"ID":"95e13797-40e9-4942-a7b5-6174fa448654","Type":"ContainerDied","Data":"a5d6997e64c1bba55b267a1de22e67ed4e90ac2795eea19781b2283baf2f9305"} Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.426017 4632 scope.go:117] "RemoveContainer" containerID="f3b5913884cb0cd302a4b84c7cdf0903689fa6580713d76d4c3ad0f4d72eb034" Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.453971 4632 scope.go:117] "RemoveContainer" containerID="eff3d4047aa48452ec24bfeecbd51ff4236d75ad1f2c6051aceaf13600d9b721" Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.461438 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48kk8\" (UniqueName: \"kubernetes.io/projected/95e13797-40e9-4942-a7b5-6174fa448654-kube-api-access-48kk8\") pod \"95e13797-40e9-4942-a7b5-6174fa448654\" (UID: \"95e13797-40e9-4942-a7b5-6174fa448654\") " Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.467059 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95e13797-40e9-4942-a7b5-6174fa448654-kube-api-access-48kk8" (OuterVolumeSpecName: "kube-api-access-48kk8") pod "95e13797-40e9-4942-a7b5-6174fa448654" (UID: "95e13797-40e9-4942-a7b5-6174fa448654"). InnerVolumeSpecName "kube-api-access-48kk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.478109 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e13797-40e9-4942-a7b5-6174fa448654-catalog-content\") pod \"95e13797-40e9-4942-a7b5-6174fa448654\" (UID: \"95e13797-40e9-4942-a7b5-6174fa448654\") " Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.478222 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e13797-40e9-4942-a7b5-6174fa448654-utilities\") pod \"95e13797-40e9-4942-a7b5-6174fa448654\" (UID: \"95e13797-40e9-4942-a7b5-6174fa448654\") " Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.478826 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48kk8\" (UniqueName: \"kubernetes.io/projected/95e13797-40e9-4942-a7b5-6174fa448654-kube-api-access-48kk8\") on node \"crc\" DevicePath \"\"" Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.479242 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95e13797-40e9-4942-a7b5-6174fa448654-utilities" (OuterVolumeSpecName: "utilities") pod "95e13797-40e9-4942-a7b5-6174fa448654" (UID: "95e13797-40e9-4942-a7b5-6174fa448654"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.492987 4632 scope.go:117] "RemoveContainer" containerID="6e108feffd870d15032d03dad27338b300eaf2cbdd55be7000c4f4a141b8f3ad" Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.515082 4632 scope.go:117] "RemoveContainer" containerID="f3b5913884cb0cd302a4b84c7cdf0903689fa6580713d76d4c3ad0f4d72eb034" Mar 13 10:21:18 crc kubenswrapper[4632]: E0313 10:21:18.515611 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3b5913884cb0cd302a4b84c7cdf0903689fa6580713d76d4c3ad0f4d72eb034\": container with ID starting with f3b5913884cb0cd302a4b84c7cdf0903689fa6580713d76d4c3ad0f4d72eb034 not found: ID does not exist" containerID="f3b5913884cb0cd302a4b84c7cdf0903689fa6580713d76d4c3ad0f4d72eb034" Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.515658 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3b5913884cb0cd302a4b84c7cdf0903689fa6580713d76d4c3ad0f4d72eb034"} err="failed to get container status \"f3b5913884cb0cd302a4b84c7cdf0903689fa6580713d76d4c3ad0f4d72eb034\": rpc error: code = NotFound desc = could not find container \"f3b5913884cb0cd302a4b84c7cdf0903689fa6580713d76d4c3ad0f4d72eb034\": container with ID starting with f3b5913884cb0cd302a4b84c7cdf0903689fa6580713d76d4c3ad0f4d72eb034 not found: ID does not exist" Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.515690 4632 scope.go:117] "RemoveContainer" containerID="eff3d4047aa48452ec24bfeecbd51ff4236d75ad1f2c6051aceaf13600d9b721" Mar 13 10:21:18 crc kubenswrapper[4632]: E0313 10:21:18.516166 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eff3d4047aa48452ec24bfeecbd51ff4236d75ad1f2c6051aceaf13600d9b721\": container with ID starting with eff3d4047aa48452ec24bfeecbd51ff4236d75ad1f2c6051aceaf13600d9b721 not found: ID does not exist" containerID="eff3d4047aa48452ec24bfeecbd51ff4236d75ad1f2c6051aceaf13600d9b721" Mar 13 10:21:18 crc 
Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.516189 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eff3d4047aa48452ec24bfeecbd51ff4236d75ad1f2c6051aceaf13600d9b721"} err="failed to get container status \"eff3d4047aa48452ec24bfeecbd51ff4236d75ad1f2c6051aceaf13600d9b721\": rpc error: code = NotFound desc = could not find container \"eff3d4047aa48452ec24bfeecbd51ff4236d75ad1f2c6051aceaf13600d9b721\": container with ID starting with eff3d4047aa48452ec24bfeecbd51ff4236d75ad1f2c6051aceaf13600d9b721 not found: ID does not exist"
Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.516255 4632 scope.go:117] "RemoveContainer" containerID="6e108feffd870d15032d03dad27338b300eaf2cbdd55be7000c4f4a141b8f3ad"
Mar 13 10:21:18 crc kubenswrapper[4632]: E0313 10:21:18.516466 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e108feffd870d15032d03dad27338b300eaf2cbdd55be7000c4f4a141b8f3ad\": container with ID starting with 6e108feffd870d15032d03dad27338b300eaf2cbdd55be7000c4f4a141b8f3ad not found: ID does not exist" containerID="6e108feffd870d15032d03dad27338b300eaf2cbdd55be7000c4f4a141b8f3ad"
Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.516488 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e108feffd870d15032d03dad27338b300eaf2cbdd55be7000c4f4a141b8f3ad"} err="failed to get container status \"6e108feffd870d15032d03dad27338b300eaf2cbdd55be7000c4f4a141b8f3ad\": rpc error: code = NotFound desc = could not find container \"6e108feffd870d15032d03dad27338b300eaf2cbdd55be7000c4f4a141b8f3ad\": container with ID starting with 6e108feffd870d15032d03dad27338b300eaf2cbdd55be7000c4f4a141b8f3ad not found: ID does not exist"
Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.538974 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95e13797-40e9-4942-a7b5-6174fa448654-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95e13797-40e9-4942-a7b5-6174fa448654" (UID: "95e13797-40e9-4942-a7b5-6174fa448654"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.583655 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e13797-40e9-4942-a7b5-6174fa448654-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.583694 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e13797-40e9-4942-a7b5-6174fa448654-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.774284 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j2w2m"] Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.785354 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j2w2m"] Mar 13 10:21:18 crc kubenswrapper[4632]: I0313 10:21:18.902764 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fdtl7"] Mar 13 10:21:19 crc kubenswrapper[4632]: I0313 10:21:19.257411 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lgwff"] Mar 13 10:21:19 crc kubenswrapper[4632]: I0313 10:21:19.257690 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lgwff" podUID="0d0fc567-0682-4bbc-981b-b4d1df62aa4e" containerName="registry-server" containerID="cri-o://6fa56a0ef2065ba4287ddc46b227aad0c8d55e685aeeb7889682c05acb775492" gracePeriod=2 Mar 13 10:21:19 crc kubenswrapper[4632]: I0313 10:21:19.450033 4632 generic.go:334] "Generic (PLEG): container finished" podID="0d0fc567-0682-4bbc-981b-b4d1df62aa4e" containerID="6fa56a0ef2065ba4287ddc46b227aad0c8d55e685aeeb7889682c05acb775492" exitCode=0 Mar 13 10:21:19 crc kubenswrapper[4632]: I0313 10:21:19.450093 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgwff" event={"ID":"0d0fc567-0682-4bbc-981b-b4d1df62aa4e","Type":"ContainerDied","Data":"6fa56a0ef2065ba4287ddc46b227aad0c8d55e685aeeb7889682c05acb775492"} Mar 13 10:21:19 crc kubenswrapper[4632]: I0313 10:21:19.942892 4632 util.go:48] "No ready sandbox for pod can be found. 
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.002792 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw5gz\" (UniqueName: \"kubernetes.io/projected/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-kube-api-access-sw5gz\") pod \"0d0fc567-0682-4bbc-981b-b4d1df62aa4e\" (UID: \"0d0fc567-0682-4bbc-981b-b4d1df62aa4e\") "
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.002921 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-catalog-content\") pod \"0d0fc567-0682-4bbc-981b-b4d1df62aa4e\" (UID: \"0d0fc567-0682-4bbc-981b-b4d1df62aa4e\") "
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.003023 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-utilities\") pod \"0d0fc567-0682-4bbc-981b-b4d1df62aa4e\" (UID: \"0d0fc567-0682-4bbc-981b-b4d1df62aa4e\") "
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.003464 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-utilities" (OuterVolumeSpecName: "utilities") pod "0d0fc567-0682-4bbc-981b-b4d1df62aa4e" (UID: "0d0fc567-0682-4bbc-981b-b4d1df62aa4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.006320 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-kube-api-access-sw5gz" (OuterVolumeSpecName: "kube-api-access-sw5gz") pod "0d0fc567-0682-4bbc-981b-b4d1df62aa4e" (UID: "0d0fc567-0682-4bbc-981b-b4d1df62aa4e"). InnerVolumeSpecName "kube-api-access-sw5gz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.069049 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95e13797-40e9-4942-a7b5-6174fa448654" path="/var/lib/kubelet/pods/95e13797-40e9-4942-a7b5-6174fa448654/volumes"
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.108564 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw5gz\" (UniqueName: \"kubernetes.io/projected/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-kube-api-access-sw5gz\") on node \"crc\" DevicePath \"\""
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.108600 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.109392 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d0fc567-0682-4bbc-981b-b4d1df62aa4e" (UID: "0d0fc567-0682-4bbc-981b-b4d1df62aa4e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.209714 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0fc567-0682-4bbc-981b-b4d1df62aa4e-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.472257 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgwff" event={"ID":"0d0fc567-0682-4bbc-981b-b4d1df62aa4e","Type":"ContainerDied","Data":"4266d075e4d04ea92ccfdc02ec4b3551e54779fe4f2f2c386ac2a209fda18404"}
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.472319 4632 scope.go:117] "RemoveContainer" containerID="6fa56a0ef2065ba4287ddc46b227aad0c8d55e685aeeb7889682c05acb775492"
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.472448 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lgwff"
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.587100 4632 scope.go:117] "RemoveContainer" containerID="6ed6c1b1b2793ab4b788dc1723932bf9c4121a7bf0945a697809d4c945eec749"
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.592400 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lgwff"]
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.623667 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lgwff"]
Mar 13 10:21:20 crc kubenswrapper[4632]: I0313 10:21:20.638086 4632 scope.go:117] "RemoveContainer" containerID="647f33468b0e454866917d9beec3f31ad6bc8dca469daccdfcf7e8df5de24312"
Mar 13 10:21:21 crc kubenswrapper[4632]: I0313 10:21:21.865720 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x5lbm"
Mar 13 10:21:22 crc kubenswrapper[4632]: I0313 10:21:22.055146 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d0fc567-0682-4bbc-981b-b4d1df62aa4e" path="/var/lib/kubelet/pods/0d0fc567-0682-4bbc-981b-b4d1df62aa4e/volumes"
Mar 13 10:21:25 crc kubenswrapper[4632]: I0313 10:21:25.509363 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-865685cd99-ls9jq" event={"ID":"82fe7ef6-50a5-41d4-9419-787812e16bd6","Type":"ContainerStarted","Data":"5b061cfc2623671cbdb73e62714370004bd271740be45f371ce11c87da85cf57"}
Mar 13 10:21:25 crc kubenswrapper[4632]: I0313 10:21:25.509673 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-865685cd99-ls9jq"
Mar 13 10:21:25 crc kubenswrapper[4632]: I0313 10:21:25.539300 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-865685cd99-ls9jq" podStartSLOduration=1.716355558 podStartE2EDuration="8.539280562s" podCreationTimestamp="2026-03-13 10:21:17 +0000 UTC" firstStartedPulling="2026-03-13 10:21:18.262296863 +0000 UTC m=+1052.284826996" lastFinishedPulling="2026-03-13 10:21:25.085221867 +0000 UTC m=+1059.107752000" observedRunningTime="2026-03-13 10:21:25.534837206 +0000 UTC m=+1059.557367349" watchObservedRunningTime="2026-03-13 10:21:25.539280562 +0000 UTC m=+1059.561810695"
Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.065370 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x5lbm"]
pods=["openshift-marketplace/redhat-marketplace-x5lbm"] Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.065704 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x5lbm" podUID="2d96ff75-88fd-4637-9199-806314e5276d" containerName="registry-server" containerID="cri-o://db96b68191a6b558bd0777d22d1fa122614ae61dec4e51a4b871ee53f8b057b3" gracePeriod=2 Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.462868 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x5lbm" Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.516258 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d96ff75-88fd-4637-9199-806314e5276d-catalog-content\") pod \"2d96ff75-88fd-4637-9199-806314e5276d\" (UID: \"2d96ff75-88fd-4637-9199-806314e5276d\") " Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.516312 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d96ff75-88fd-4637-9199-806314e5276d-utilities\") pod \"2d96ff75-88fd-4637-9199-806314e5276d\" (UID: \"2d96ff75-88fd-4637-9199-806314e5276d\") " Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.516340 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7vjg\" (UniqueName: \"kubernetes.io/projected/2d96ff75-88fd-4637-9199-806314e5276d-kube-api-access-c7vjg\") pod \"2d96ff75-88fd-4637-9199-806314e5276d\" (UID: \"2d96ff75-88fd-4637-9199-806314e5276d\") " Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.517699 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d96ff75-88fd-4637-9199-806314e5276d-utilities" (OuterVolumeSpecName: "utilities") pod "2d96ff75-88fd-4637-9199-806314e5276d" (UID: "2d96ff75-88fd-4637-9199-806314e5276d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.519635 4632 generic.go:334] "Generic (PLEG): container finished" podID="2d96ff75-88fd-4637-9199-806314e5276d" containerID="db96b68191a6b558bd0777d22d1fa122614ae61dec4e51a4b871ee53f8b057b3" exitCode=0 Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.520422 4632 util.go:48] "No ready sandbox for pod can be found. 
Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.520904 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5lbm" event={"ID":"2d96ff75-88fd-4637-9199-806314e5276d","Type":"ContainerDied","Data":"db96b68191a6b558bd0777d22d1fa122614ae61dec4e51a4b871ee53f8b057b3"}
Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.520950 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5lbm" event={"ID":"2d96ff75-88fd-4637-9199-806314e5276d","Type":"ContainerDied","Data":"21e3226d1d96fc021bc557ff5418b6dec4ea17fe511f69b4fe8609410d428008"}
Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.520970 4632 scope.go:117] "RemoveContainer" containerID="db96b68191a6b558bd0777d22d1fa122614ae61dec4e51a4b871ee53f8b057b3"
Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.544571 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d96ff75-88fd-4637-9199-806314e5276d-kube-api-access-c7vjg" (OuterVolumeSpecName: "kube-api-access-c7vjg") pod "2d96ff75-88fd-4637-9199-806314e5276d" (UID: "2d96ff75-88fd-4637-9199-806314e5276d"). InnerVolumeSpecName "kube-api-access-c7vjg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.560664 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d96ff75-88fd-4637-9199-806314e5276d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d96ff75-88fd-4637-9199-806314e5276d" (UID: "2d96ff75-88fd-4637-9199-806314e5276d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.561383 4632 scope.go:117] "RemoveContainer" containerID="559890583b5bdac795810d916d2ba129170ca28c94f2aae83a30fbc62b754214"
Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.586986 4632 scope.go:117] "RemoveContainer" containerID="9d09e614abc972c9bc41b522cbf719729037565923e2784a67b05584e2614a0c"
Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.605144 4632 scope.go:117] "RemoveContainer" containerID="db96b68191a6b558bd0777d22d1fa122614ae61dec4e51a4b871ee53f8b057b3"
Mar 13 10:21:26 crc kubenswrapper[4632]: E0313 10:21:26.605706 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db96b68191a6b558bd0777d22d1fa122614ae61dec4e51a4b871ee53f8b057b3\": container with ID starting with db96b68191a6b558bd0777d22d1fa122614ae61dec4e51a4b871ee53f8b057b3 not found: ID does not exist" containerID="db96b68191a6b558bd0777d22d1fa122614ae61dec4e51a4b871ee53f8b057b3"
Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.605807 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db96b68191a6b558bd0777d22d1fa122614ae61dec4e51a4b871ee53f8b057b3"} err="failed to get container status \"db96b68191a6b558bd0777d22d1fa122614ae61dec4e51a4b871ee53f8b057b3\": rpc error: code = NotFound desc = could not find container \"db96b68191a6b558bd0777d22d1fa122614ae61dec4e51a4b871ee53f8b057b3\": container with ID starting with db96b68191a6b558bd0777d22d1fa122614ae61dec4e51a4b871ee53f8b057b3 not found: ID does not exist"
Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.605896 4632 scope.go:117] "RemoveContainer" containerID="559890583b5bdac795810d916d2ba129170ca28c94f2aae83a30fbc62b754214"
containerID="559890583b5bdac795810d916d2ba129170ca28c94f2aae83a30fbc62b754214" Mar 13 10:21:26 crc kubenswrapper[4632]: E0313 10:21:26.606687 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"559890583b5bdac795810d916d2ba129170ca28c94f2aae83a30fbc62b754214\": container with ID starting with 559890583b5bdac795810d916d2ba129170ca28c94f2aae83a30fbc62b754214 not found: ID does not exist" containerID="559890583b5bdac795810d916d2ba129170ca28c94f2aae83a30fbc62b754214" Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.606737 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"559890583b5bdac795810d916d2ba129170ca28c94f2aae83a30fbc62b754214"} err="failed to get container status \"559890583b5bdac795810d916d2ba129170ca28c94f2aae83a30fbc62b754214\": rpc error: code = NotFound desc = could not find container \"559890583b5bdac795810d916d2ba129170ca28c94f2aae83a30fbc62b754214\": container with ID starting with 559890583b5bdac795810d916d2ba129170ca28c94f2aae83a30fbc62b754214 not found: ID does not exist" Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.606776 4632 scope.go:117] "RemoveContainer" containerID="9d09e614abc972c9bc41b522cbf719729037565923e2784a67b05584e2614a0c" Mar 13 10:21:26 crc kubenswrapper[4632]: E0313 10:21:26.607229 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d09e614abc972c9bc41b522cbf719729037565923e2784a67b05584e2614a0c\": container with ID starting with 9d09e614abc972c9bc41b522cbf719729037565923e2784a67b05584e2614a0c not found: ID does not exist" containerID="9d09e614abc972c9bc41b522cbf719729037565923e2784a67b05584e2614a0c" Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.607308 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d09e614abc972c9bc41b522cbf719729037565923e2784a67b05584e2614a0c"} err="failed to get container status \"9d09e614abc972c9bc41b522cbf719729037565923e2784a67b05584e2614a0c\": rpc error: code = NotFound desc = could not find container \"9d09e614abc972c9bc41b522cbf719729037565923e2784a67b05584e2614a0c\": container with ID starting with 9d09e614abc972c9bc41b522cbf719729037565923e2784a67b05584e2614a0c not found: ID does not exist" Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.618128 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d96ff75-88fd-4637-9199-806314e5276d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.618176 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d96ff75-88fd-4637-9199-806314e5276d-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.618186 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7vjg\" (UniqueName: \"kubernetes.io/projected/2d96ff75-88fd-4637-9199-806314e5276d-kube-api-access-c7vjg\") on node \"crc\" DevicePath \"\"" Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.849218 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x5lbm"] Mar 13 10:21:26 crc kubenswrapper[4632]: I0313 10:21:26.860493 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x5lbm"] Mar 13 10:21:28 crc 
Mar 13 10:21:28 crc kubenswrapper[4632]: I0313 10:21:28.052512 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d96ff75-88fd-4637-9199-806314e5276d" path="/var/lib/kubelet/pods/2d96ff75-88fd-4637-9199-806314e5276d/volumes"
Mar 13 10:21:37 crc kubenswrapper[4632]: I0313 10:21:37.860985 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-865685cd99-ls9jq"
Mar 13 10:21:40 crc kubenswrapper[4632]: I0313 10:21:40.460448 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 10:21:40 crc kubenswrapper[4632]: I0313 10:21:40.460517 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 10:21:40 crc kubenswrapper[4632]: I0313 10:21:40.460565 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb"
Mar 13 10:21:40 crc kubenswrapper[4632]: I0313 10:21:40.461218 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"624a339b1e1f8b218223c2e3440b7f9925bb18567bb6def4fcf3bfc022198658"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 13 10:21:40 crc kubenswrapper[4632]: I0313 10:21:40.461273 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://624a339b1e1f8b218223c2e3440b7f9925bb18567bb6def4fcf3bfc022198658" gracePeriod=600
Mar 13 10:21:40 crc kubenswrapper[4632]: I0313 10:21:40.646715 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="624a339b1e1f8b218223c2e3440b7f9925bb18567bb6def4fcf3bfc022198658" exitCode=0
Mar 13 10:21:40 crc kubenswrapper[4632]: I0313 10:21:40.646784 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"624a339b1e1f8b218223c2e3440b7f9925bb18567bb6def4fcf3bfc022198658"}
Mar 13 10:21:40 crc kubenswrapper[4632]: I0313 10:21:40.647048 4632 scope.go:117] "RemoveContainer" containerID="7fcd863f1a2b3af4768aa1d32979163bc846d3d472acea1e8c27ffcf3dfe0ffc"
Mar 13 10:21:41 crc kubenswrapper[4632]: I0313 10:21:41.656188 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"e9a22f93dffae95945f5e47a3d15b0ebe11dc6b72712dcbe34fa0191ff687b27"}
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.278572 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs"]
Mar 13 10:21:57 crc kubenswrapper[4632]: E0313 10:21:57.279265 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0fc567-0682-4bbc-981b-b4d1df62aa4e" containerName="extract-content"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.279279 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0fc567-0682-4bbc-981b-b4d1df62aa4e" containerName="extract-content"
Mar 13 10:21:57 crc kubenswrapper[4632]: E0313 10:21:57.279294 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0fc567-0682-4bbc-981b-b4d1df62aa4e" containerName="extract-utilities"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.279300 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0fc567-0682-4bbc-981b-b4d1df62aa4e" containerName="extract-utilities"
Mar 13 10:21:57 crc kubenswrapper[4632]: E0313 10:21:57.279309 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d96ff75-88fd-4637-9199-806314e5276d" containerName="extract-utilities"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.279317 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d96ff75-88fd-4637-9199-806314e5276d" containerName="extract-utilities"
Mar 13 10:21:57 crc kubenswrapper[4632]: E0313 10:21:57.279328 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e13797-40e9-4942-a7b5-6174fa448654" containerName="extract-content"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.279334 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e13797-40e9-4942-a7b5-6174fa448654" containerName="extract-content"
Mar 13 10:21:57 crc kubenswrapper[4632]: E0313 10:21:57.279344 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e13797-40e9-4942-a7b5-6174fa448654" containerName="registry-server"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.279349 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e13797-40e9-4942-a7b5-6174fa448654" containerName="registry-server"
Mar 13 10:21:57 crc kubenswrapper[4632]: E0313 10:21:57.279361 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e13797-40e9-4942-a7b5-6174fa448654" containerName="extract-utilities"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.279366 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e13797-40e9-4942-a7b5-6174fa448654" containerName="extract-utilities"
Mar 13 10:21:57 crc kubenswrapper[4632]: E0313 10:21:57.279373 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0fc567-0682-4bbc-981b-b4d1df62aa4e" containerName="registry-server"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.279379 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0fc567-0682-4bbc-981b-b4d1df62aa4e" containerName="registry-server"
Mar 13 10:21:57 crc kubenswrapper[4632]: E0313 10:21:57.279397 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d96ff75-88fd-4637-9199-806314e5276d" containerName="extract-content"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.279407 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d96ff75-88fd-4637-9199-806314e5276d" containerName="extract-content"
Mar 13 10:21:57 crc kubenswrapper[4632]: E0313 10:21:57.279417 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d96ff75-88fd-4637-9199-806314e5276d" containerName="registry-server"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.279423 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d96ff75-88fd-4637-9199-806314e5276d" containerName="registry-server"
containerName="registry-server" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.279528 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d96ff75-88fd-4637-9199-806314e5276d" containerName="registry-server" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.279542 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="95e13797-40e9-4942-a7b5-6174fa448654" containerName="registry-server" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.279552 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d0fc567-0682-4bbc-981b-b4d1df62aa4e" containerName="registry-server" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.279913 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.283563 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-8pjg4" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.291308 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs"] Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.295078 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-f6c87"] Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.295761 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-f6c87" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.297580 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-5z6b8" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.320587 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-cfcgn"] Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.321572 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-cfcgn" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.323503 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-xb4n2" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.360173 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-qg79l"] Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.361494 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qg79l" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.367439 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-d8hjt" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.377411 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c"] Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.378198 4632 util.go:30] "No sandbox for pod can be found. 
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.383824 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-7sbvl"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.391471 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp8hr\" (UniqueName: \"kubernetes.io/projected/20f92131-aca4-41ea-9144-a23bd9216f49-kube-api-access-mp8hr\") pod \"glance-operator-controller-manager-5964f64c48-qg79l\" (UID: \"20f92131-aca4-41ea-9144-a23bd9216f49\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qg79l"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.391584 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wpcd\" (UniqueName: \"kubernetes.io/projected/3f3a462e-4d89-45b3-8611-181aca5f8558-kube-api-access-7wpcd\") pod \"cinder-operator-controller-manager-984cd4dcf-f6c87\" (UID: \"3f3a462e-4d89-45b3-8611-181aca5f8558\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-f6c87"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.391653 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96kl7\" (UniqueName: \"kubernetes.io/projected/68c5eb80-4214-42c5-a08d-de6012969621-kube-api-access-96kl7\") pod \"barbican-operator-controller-manager-677bd678f7-wj9qs\" (UID: \"68c5eb80-4214-42c5-a08d-de6012969621\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.391731 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99gkt\" (UniqueName: \"kubernetes.io/projected/75d652c7-8521-4039-913a-fa625f89b094-kube-api-access-99gkt\") pod \"designate-operator-controller-manager-66d56f6ff4-cfcgn\" (UID: \"75d652c7-8521-4039-913a-fa625f89b094\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-cfcgn"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.391799 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9mvv\" (UniqueName: \"kubernetes.io/projected/ff6d4dcb-9eb8-44fc-951e-f2aecd77a639-kube-api-access-q9mvv\") pod \"heat-operator-controller-manager-77b6666d85-cgh6c\" (UID: \"ff6d4dcb-9eb8-44fc-951e-f2aecd77a639\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.396219 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-f6c87"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.404230 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-qg79l"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.415218 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.426162 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.431595 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-fpwd2"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.439254 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.484172 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-cfcgn"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.493208 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzvbs\" (UniqueName: \"kubernetes.io/projected/9a963f9c-ac58-4e21-abfa-fca1279a192d-kube-api-access-pzvbs\") pod \"horizon-operator-controller-manager-6d9d6b584d-2rv7s\" (UID: \"9a963f9c-ac58-4e21-abfa-fca1279a192d\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.493528 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp8hr\" (UniqueName: \"kubernetes.io/projected/20f92131-aca4-41ea-9144-a23bd9216f49-kube-api-access-mp8hr\") pod \"glance-operator-controller-manager-5964f64c48-qg79l\" (UID: \"20f92131-aca4-41ea-9144-a23bd9216f49\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qg79l"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.493674 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wpcd\" (UniqueName: \"kubernetes.io/projected/3f3a462e-4d89-45b3-8611-181aca5f8558-kube-api-access-7wpcd\") pod \"cinder-operator-controller-manager-984cd4dcf-f6c87\" (UID: \"3f3a462e-4d89-45b3-8611-181aca5f8558\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-f6c87"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.493807 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96kl7\" (UniqueName: \"kubernetes.io/projected/68c5eb80-4214-42c5-a08d-de6012969621-kube-api-access-96kl7\") pod \"barbican-operator-controller-manager-677bd678f7-wj9qs\" (UID: \"68c5eb80-4214-42c5-a08d-de6012969621\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.493930 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99gkt\" (UniqueName: \"kubernetes.io/projected/75d652c7-8521-4039-913a-fa625f89b094-kube-api-access-99gkt\") pod \"designate-operator-controller-manager-66d56f6ff4-cfcgn\" (UID: \"75d652c7-8521-4039-913a-fa625f89b094\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-cfcgn"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.494144 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9mvv\" (UniqueName: \"kubernetes.io/projected/ff6d4dcb-9eb8-44fc-951e-f2aecd77a639-kube-api-access-q9mvv\") pod \"heat-operator-controller-manager-77b6666d85-cgh6c\" (UID: \"ff6d4dcb-9eb8-44fc-951e-f2aecd77a639\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.507063 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.526201 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.527034 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.528505 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99gkt\" (UniqueName: \"kubernetes.io/projected/75d652c7-8521-4039-913a-fa625f89b094-kube-api-access-99gkt\") pod \"designate-operator-controller-manager-66d56f6ff4-cfcgn\" (UID: \"75d652c7-8521-4039-913a-fa625f89b094\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-cfcgn"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.528517 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9mvv\" (UniqueName: \"kubernetes.io/projected/ff6d4dcb-9eb8-44fc-951e-f2aecd77a639-kube-api-access-q9mvv\") pod \"heat-operator-controller-manager-77b6666d85-cgh6c\" (UID: \"ff6d4dcb-9eb8-44fc-951e-f2aecd77a639\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.538209 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp8hr\" (UniqueName: \"kubernetes.io/projected/20f92131-aca4-41ea-9144-a23bd9216f49-kube-api-access-mp8hr\") pod \"glance-operator-controller-manager-5964f64c48-qg79l\" (UID: \"20f92131-aca4-41ea-9144-a23bd9216f49\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qg79l"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.538583 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qf888"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.542077 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wpcd\" (UniqueName: \"kubernetes.io/projected/3f3a462e-4d89-45b3-8611-181aca5f8558-kube-api-access-7wpcd\") pod \"cinder-operator-controller-manager-984cd4dcf-f6c87\" (UID: \"3f3a462e-4d89-45b3-8611-181aca5f8558\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-f6c87"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.549307 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.582059 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96kl7\" (UniqueName: \"kubernetes.io/projected/68c5eb80-4214-42c5-a08d-de6012969621-kube-api-access-96kl7\") pod \"barbican-operator-controller-manager-677bd678f7-wj9qs\" (UID: \"68c5eb80-4214-42c5-a08d-de6012969621\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.586452 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.587464 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.590855 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-cxrjj"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.595419 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert\") pod \"infra-operator-controller-manager-5995f4446f-flfxh\" (UID: \"1542a9c8-92f6-4bc9-8231-829f649b0b8f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.595504 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnh7r\" (UniqueName: \"kubernetes.io/projected/1542a9c8-92f6-4bc9-8231-829f649b0b8f-kube-api-access-pnh7r\") pod \"infra-operator-controller-manager-5995f4446f-flfxh\" (UID: \"1542a9c8-92f6-4bc9-8231-829f649b0b8f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.595533 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psrhj\" (UniqueName: \"kubernetes.io/projected/c8fc6f03-c43b-4ade-92a8-acc5537a4eeb-kube-api-access-psrhj\") pod \"ironic-operator-controller-manager-6bbb499bbc-wtzrw\" (UID: \"c8fc6f03-c43b-4ade-92a8-acc5537a4eeb\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.595571 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzvbs\" (UniqueName: \"kubernetes.io/projected/9a963f9c-ac58-4e21-abfa-fca1279a192d-kube-api-access-pzvbs\") pod \"horizon-operator-controller-manager-6d9d6b584d-2rv7s\" (UID: \"9a963f9c-ac58-4e21-abfa-fca1279a192d\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.601531 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.620029 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-f6c87"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.620473 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.634809 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.635600 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.646523 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-cfcgn"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.650001 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-wdj9j"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.665008 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.670250 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.675727 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzvbs\" (UniqueName: \"kubernetes.io/projected/9a963f9c-ac58-4e21-abfa-fca1279a192d-kube-api-access-pzvbs\") pod \"horizon-operator-controller-manager-6d9d6b584d-2rv7s\" (UID: \"9a963f9c-ac58-4e21-abfa-fca1279a192d\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.676047 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qg79l"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.700445 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert\") pod \"infra-operator-controller-manager-5995f4446f-flfxh\" (UID: \"1542a9c8-92f6-4bc9-8231-829f649b0b8f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.700526 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnh7r\" (UniqueName: \"kubernetes.io/projected/1542a9c8-92f6-4bc9-8231-829f649b0b8f-kube-api-access-pnh7r\") pod \"infra-operator-controller-manager-5995f4446f-flfxh\" (UID: \"1542a9c8-92f6-4bc9-8231-829f649b0b8f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.700548 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psrhj\" (UniqueName: \"kubernetes.io/projected/c8fc6f03-c43b-4ade-92a8-acc5537a4eeb-kube-api-access-psrhj\") pod \"ironic-operator-controller-manager-6bbb499bbc-wtzrw\" (UID: \"c8fc6f03-c43b-4ade-92a8-acc5537a4eeb\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw"
Mar 13 10:21:57 crc kubenswrapper[4632]: E0313 10:21:57.701317 4632 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 13 10:21:57 crc kubenswrapper[4632]: E0313 10:21:57.701367 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert podName:1542a9c8-92f6-4bc9-8231-829f649b0b8f nodeName:}" failed. No retries permitted until 2026-03-13 10:21:58.201348944 +0000 UTC m=+1092.223879077 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert") pod "infra-operator-controller-manager-5995f4446f-flfxh" (UID: "1542a9c8-92f6-4bc9-8231-829f649b0b8f") : secret "infra-operator-webhook-server-cert" not found
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.701775 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.713608 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-sxw8d"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.714423 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-sxw8d"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.719430 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-qnd2l"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.733814 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnh7r\" (UniqueName: \"kubernetes.io/projected/1542a9c8-92f6-4bc9-8231-829f649b0b8f-kube-api-access-pnh7r\") pod \"infra-operator-controller-manager-5995f4446f-flfxh\" (UID: \"1542a9c8-92f6-4bc9-8231-829f649b0b8f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.743761 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.744618 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.749920 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-77vkb"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.751073 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psrhj\" (UniqueName: \"kubernetes.io/projected/c8fc6f03-c43b-4ade-92a8-acc5537a4eeb-kube-api-access-psrhj\") pod \"ironic-operator-controller-manager-6bbb499bbc-wtzrw\" (UID: \"c8fc6f03-c43b-4ade-92a8-acc5537a4eeb\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.767239 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-sxw8d"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.767348 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.784470 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.802142 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-bkmbn"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.803116 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-bkmbn"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.803418 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzwsh\" (UniqueName: \"kubernetes.io/projected/9040a0e0-2a56-4331-ba50-b19ff05ef0c0-kube-api-access-bzwsh\") pod \"mariadb-operator-controller-manager-658d4cdd5-szd7c\" (UID: \"9040a0e0-2a56-4331-ba50-b19ff05ef0c0\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.803507 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xct9m\" (UniqueName: \"kubernetes.io/projected/7b491335-6a73-46de-8098-f27ff4c6f795-kube-api-access-xct9m\") pod \"manila-operator-controller-manager-68f45f9d9f-sxw8d\" (UID: \"7b491335-6a73-46de-8098-f27ff4c6f795\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-sxw8d"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.803573 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pd2q\" (UniqueName: \"kubernetes.io/projected/f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda-kube-api-access-8pd2q\") pod \"keystone-operator-controller-manager-684f77d66d-6nb82\" (UID: \"f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.807001 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-bnhzq"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.817709 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.827867 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-628ss"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.828713 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.840633 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-sn5sp"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.863010 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.863821 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.875109 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-q9f6p"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.882729 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-bkmbn"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.913023 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pd2q\" (UniqueName: \"kubernetes.io/projected/f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda-kube-api-access-8pd2q\") pod \"keystone-operator-controller-manager-684f77d66d-6nb82\" (UID: \"f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.913089 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzwsh\" (UniqueName: \"kubernetes.io/projected/9040a0e0-2a56-4331-ba50-b19ff05ef0c0-kube-api-access-bzwsh\") pod \"mariadb-operator-controller-manager-658d4cdd5-szd7c\" (UID: \"9040a0e0-2a56-4331-ba50-b19ff05ef0c0\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.913147 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xct9m\" (UniqueName: \"kubernetes.io/projected/7b491335-6a73-46de-8098-f27ff4c6f795-kube-api-access-xct9m\") pod \"manila-operator-controller-manager-68f45f9d9f-sxw8d\" (UID: \"7b491335-6a73-46de-8098-f27ff4c6f795\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-sxw8d"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.932777 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.941645 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xct9m\" (UniqueName: \"kubernetes.io/projected/7b491335-6a73-46de-8098-f27ff4c6f795-kube-api-access-xct9m\") pod \"manila-operator-controller-manager-68f45f9d9f-sxw8d\" (UID: \"7b491335-6a73-46de-8098-f27ff4c6f795\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-sxw8d"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.946720 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzwsh\" (UniqueName: \"kubernetes.io/projected/9040a0e0-2a56-4331-ba50-b19ff05ef0c0-kube-api-access-bzwsh\") pod \"mariadb-operator-controller-manager-658d4cdd5-szd7c\" (UID: \"9040a0e0-2a56-4331-ba50-b19ff05ef0c0\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.954543 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j"]
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.955681 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j"
Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.965345 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c"
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.957590 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pd2q\" (UniqueName: \"kubernetes.io/projected/f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda-kube-api-access-8pd2q\") pod \"keystone-operator-controller-manager-684f77d66d-6nb82\" (UID: \"f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.969912 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-rcfbr" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.970420 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.979331 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf"] Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.980108 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.983869 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-x42p7" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.989870 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n"] Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.992015 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n" Mar 13 10:21:57 crc kubenswrapper[4632]: I0313 10:21:57.994839 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-jg474" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.001103 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j"] Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.016092 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf"] Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.016795 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl4tq\" (UniqueName: \"kubernetes.io/projected/c33d0da9-5a04-42d6-80d3-2f558b4a90b0-kube-api-access-dl4tq\") pod \"neutron-operator-controller-manager-776c5696bf-bkmbn\" (UID: \"c33d0da9-5a04-42d6-80d3-2f558b4a90b0\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-bkmbn" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.016841 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsp6d\" (UniqueName: \"kubernetes.io/projected/9e1d6ac6-c4ee-4381-86b8-c337f8c2d6a5-kube-api-access-zsp6d\") pod \"octavia-operator-controller-manager-5f4f55cb5c-62gpm\" (UID: \"9e1d6ac6-c4ee-4381-86b8-c337f8c2d6a5\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.016899 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp7zl\" (UniqueName: \"kubernetes.io/projected/d04e9aa6-f234-4ffa-81e2-1a2407addb77-kube-api-access-qp7zl\") pod \"nova-operator-controller-manager-569cc54c5-628ss\" (UID: \"d04e9aa6-f234-4ffa-81e2-1a2407addb77\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.034036 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-628ss"] Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.169522 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt4gr\" (UniqueName: \"kubernetes.io/projected/e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5-kube-api-access-nt4gr\") pod \"placement-operator-controller-manager-574d45c66c-qkr9n\" (UID: \"e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.177160 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsqv6\" (UniqueName: \"kubernetes.io/projected/0a9d48f4-d68b-4ef9-826e-ed619c761405-kube-api-access-zsqv6\") pod \"ovn-operator-controller-manager-bbc5b68f9-4m8kf\" (UID: \"0a9d48f4-d68b-4ef9-826e-ed619c761405\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.177456 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl4tq\" (UniqueName: 
\"kubernetes.io/projected/c33d0da9-5a04-42d6-80d3-2f558b4a90b0-kube-api-access-dl4tq\") pod \"neutron-operator-controller-manager-776c5696bf-bkmbn\" (UID: \"c33d0da9-5a04-42d6-80d3-2f558b4a90b0\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-bkmbn" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.177641 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q22k\" (UniqueName: \"kubernetes.io/projected/2d221857-ee77-4165-a351-ecd5fc424970-kube-api-access-2q22k\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v927j\" (UID: \"2d221857-ee77-4165-a351-ecd5fc424970\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.177816 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsp6d\" (UniqueName: \"kubernetes.io/projected/9e1d6ac6-c4ee-4381-86b8-c337f8c2d6a5-kube-api-access-zsp6d\") pod \"octavia-operator-controller-manager-5f4f55cb5c-62gpm\" (UID: \"9e1d6ac6-c4ee-4381-86b8-c337f8c2d6a5\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.181306 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v927j\" (UID: \"2d221857-ee77-4165-a351-ecd5fc424970\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.181423 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp7zl\" (UniqueName: \"kubernetes.io/projected/d04e9aa6-f234-4ffa-81e2-1a2407addb77-kube-api-access-qp7zl\") pod \"nova-operator-controller-manager-569cc54c5-628ss\" (UID: \"d04e9aa6-f234-4ffa-81e2-1a2407addb77\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.179800 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.223804 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp7zl\" (UniqueName: \"kubernetes.io/projected/d04e9aa6-f234-4ffa-81e2-1a2407addb77-kube-api-access-qp7zl\") pod \"nova-operator-controller-manager-569cc54c5-628ss\" (UID: \"d04e9aa6-f234-4ffa-81e2-1a2407addb77\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.231265 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl4tq\" (UniqueName: \"kubernetes.io/projected/c33d0da9-5a04-42d6-80d3-2f558b4a90b0-kube-api-access-dl4tq\") pod \"neutron-operator-controller-manager-776c5696bf-bkmbn\" (UID: \"c33d0da9-5a04-42d6-80d3-2f558b4a90b0\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-bkmbn" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.231831 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-sxw8d" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.263721 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n"] Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.283596 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-bkmbn" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.288335 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert\") pod \"infra-operator-controller-manager-5995f4446f-flfxh\" (UID: \"1542a9c8-92f6-4bc9-8231-829f649b0b8f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.288384 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt4gr\" (UniqueName: \"kubernetes.io/projected/e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5-kube-api-access-nt4gr\") pod \"placement-operator-controller-manager-574d45c66c-qkr9n\" (UID: \"e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.288409 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsqv6\" (UniqueName: \"kubernetes.io/projected/0a9d48f4-d68b-4ef9-826e-ed619c761405-kube-api-access-zsqv6\") pod \"ovn-operator-controller-manager-bbc5b68f9-4m8kf\" (UID: \"0a9d48f4-d68b-4ef9-826e-ed619c761405\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.288449 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q22k\" (UniqueName: \"kubernetes.io/projected/2d221857-ee77-4165-a351-ecd5fc424970-kube-api-access-2q22k\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v927j\" (UID: \"2d221857-ee77-4165-a351-ecd5fc424970\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.288501 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v927j\" (UID: \"2d221857-ee77-4165-a351-ecd5fc424970\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.288657 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsp6d\" (UniqueName: \"kubernetes.io/projected/9e1d6ac6-c4ee-4381-86b8-c337f8c2d6a5-kube-api-access-zsp6d\") pod \"octavia-operator-controller-manager-5f4f55cb5c-62gpm\" (UID: \"9e1d6ac6-c4ee-4381-86b8-c337f8c2d6a5\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm" Mar 13 10:21:58 crc kubenswrapper[4632]: E0313 10:21:58.289097 4632 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 10:21:58 crc kubenswrapper[4632]: E0313 10:21:58.289144 4632 
secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 13 10:21:58 crc kubenswrapper[4632]: E0313 10:21:58.289147 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert podName:2d221857-ee77-4165-a351-ecd5fc424970 nodeName:}" failed. No retries permitted until 2026-03-13 10:21:58.789133206 +0000 UTC m=+1092.811663339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7v927j" (UID: "2d221857-ee77-4165-a351-ecd5fc424970") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 10:21:58 crc kubenswrapper[4632]: E0313 10:21:58.289377 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert podName:1542a9c8-92f6-4bc9-8231-829f649b0b8f nodeName:}" failed. No retries permitted until 2026-03-13 10:21:59.289348271 +0000 UTC m=+1093.311878404 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert") pod "infra-operator-controller-manager-5995f4446f-flfxh" (UID: "1542a9c8-92f6-4bc9-8231-829f649b0b8f") : secret "infra-operator-webhook-server-cert" not found Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.316083 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-qbfg2"] Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.318846 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-qbfg2" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.326113 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-wr8sw" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.327121 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.383600 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt4gr\" (UniqueName: \"kubernetes.io/projected/e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5-kube-api-access-nt4gr\") pod \"placement-operator-controller-manager-574d45c66c-qkr9n\" (UID: \"e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.385331 4632 util.go:30] "No sandbox for pod can be found. 
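The nestedpendingoperations.go:348 errors show the kubelet's per-operation exponential backoff: the cert volume for openstack-baremetal-operator is first retried after 500ms, then 1s, and later 2s, doubling on each consecutive failure. A sketch of that schedule (the cap below is an assumption for illustration; the log itself only shows the doubling):

```go
package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry reproduces the doubling visible in the log
// (500ms, then 1s, then 2s for the same volume).
func durationBeforeRetry(consecutiveFailures int) time.Duration {
	const (
		base     = 500 * time.Millisecond
		maxDelay = 2 * time.Minute // assumed cap, for illustration only
	)
	d := base << uint(consecutiveFailures) // double on every consecutive failure
	if d <= 0 || d > maxDelay {
		d = maxDelay
	}
	return d
}

func main() {
	for i := 0; i < 4; i++ {
		fmt.Printf("failure %d: no retries permitted for %v\n", i+1, durationBeforeRetry(i))
	}
}
```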
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.390892 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p7rc\" (UniqueName: \"kubernetes.io/projected/2d8a9f3a-6631-4c1e-8381-3bc313837ca0-kube-api-access-8p7rc\") pod \"swift-operator-controller-manager-677c674df7-qbfg2\" (UID: \"2d8a9f3a-6631-4c1e-8381-3bc313837ca0\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-qbfg2" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.399761 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q22k\" (UniqueName: \"kubernetes.io/projected/2d221857-ee77-4165-a351-ecd5fc424970-kube-api-access-2q22k\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v927j\" (UID: \"2d221857-ee77-4165-a351-ecd5fc424970\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.401090 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsqv6\" (UniqueName: \"kubernetes.io/projected/0a9d48f4-d68b-4ef9-826e-ed619c761405-kube-api-access-zsqv6\") pod \"ovn-operator-controller-manager-bbc5b68f9-4m8kf\" (UID: \"0a9d48f4-d68b-4ef9-826e-ed619c761405\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.425738 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-qbfg2"] Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.449836 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np"] Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.450835 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.471114 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ds7sc" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.494698 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p7rc\" (UniqueName: \"kubernetes.io/projected/2d8a9f3a-6631-4c1e-8381-3bc313837ca0-kube-api-access-8p7rc\") pod \"swift-operator-controller-manager-677c674df7-qbfg2\" (UID: \"2d8a9f3a-6631-4c1e-8381-3bc313837ca0\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-qbfg2" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.495032 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmz6h\" (UniqueName: \"kubernetes.io/projected/ee081327-4c3f-4c0a-9085-71085c6487b5-kube-api-access-fmz6h\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-nt7np\" (UID: \"ee081327-4c3f-4c0a-9085-71085c6487b5\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.495303 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.541999 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np"] Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.565068 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq"] Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.565964 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.579478 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-8vng2" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.580063 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.609625 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmz6h\" (UniqueName: \"kubernetes.io/projected/ee081327-4c3f-4c0a-9085-71085c6487b5-kube-api-access-fmz6h\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-nt7np\" (UID: \"ee081327-4c3f-4c0a-9085-71085c6487b5\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.609772 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4vf2\" (UniqueName: \"kubernetes.io/projected/7bab78c8-7dac-48dc-a426-ccd4ae00a428-kube-api-access-j4vf2\") pod \"test-operator-controller-manager-5c5cb9c4d7-jwrgq\" (UID: \"7bab78c8-7dac-48dc-a426-ccd4ae00a428\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.620696 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2"] Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.621519 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.654141 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-z8lfp" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.669995 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq"] Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.695479 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p7rc\" (UniqueName: \"kubernetes.io/projected/2d8a9f3a-6631-4c1e-8381-3bc313837ca0-kube-api-access-8p7rc\") pod \"swift-operator-controller-manager-677c674df7-qbfg2\" (UID: \"2d8a9f3a-6631-4c1e-8381-3bc313837ca0\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-qbfg2" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.722527 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmz6h\" (UniqueName: \"kubernetes.io/projected/ee081327-4c3f-4c0a-9085-71085c6487b5-kube-api-access-fmz6h\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-nt7np\" (UID: \"ee081327-4c3f-4c0a-9085-71085c6487b5\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.754398 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-qbfg2" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.789090 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2"] Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.822965 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v927j\" (UID: \"2d221857-ee77-4165-a351-ecd5fc424970\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.823146 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4vf2\" (UniqueName: \"kubernetes.io/projected/7bab78c8-7dac-48dc-a426-ccd4ae00a428-kube-api-access-j4vf2\") pod \"test-operator-controller-manager-5c5cb9c4d7-jwrgq\" (UID: \"7bab78c8-7dac-48dc-a426-ccd4ae00a428\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.823292 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk5b8\" (UniqueName: \"kubernetes.io/projected/e0d1d349-d63d-498b-ae15-3121f9ae73f8-kube-api-access-rk5b8\") pod \"watcher-operator-controller-manager-6dd88c6f67-kv8b2\" (UID: \"e0d1d349-d63d-498b-ae15-3121f9ae73f8\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2" Mar 13 10:21:58 crc kubenswrapper[4632]: E0313 10:21:58.823491 4632 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 10:21:58 crc kubenswrapper[4632]: E0313 10:21:58.823785 4632 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert podName:2d221857-ee77-4165-a351-ecd5fc424970 nodeName:}" failed. No retries permitted until 2026-03-13 10:21:59.823768638 +0000 UTC m=+1093.846298771 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7v927j" (UID: "2d221857-ee77-4165-a351-ecd5fc424970") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.855273 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.870741 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4vf2\" (UniqueName: \"kubernetes.io/projected/7bab78c8-7dac-48dc-a426-ccd4ae00a428-kube-api-access-j4vf2\") pod \"test-operator-controller-manager-5c5cb9c4d7-jwrgq\" (UID: \"7bab78c8-7dac-48dc-a426-ccd4ae00a428\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.918061 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4"] Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.918994 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.923674 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.923799 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.923909 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-thpwj" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.926639 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltgld\" (UniqueName: \"kubernetes.io/projected/3fdb377f-5a78-4687-82e1-50718514290d-kube-api-access-ltgld\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.926792 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.926819 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " 
pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.926904 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk5b8\" (UniqueName: \"kubernetes.io/projected/e0d1d349-d63d-498b-ae15-3121f9ae73f8-kube-api-access-rk5b8\") pod \"watcher-operator-controller-manager-6dd88c6f67-kv8b2\" (UID: \"e0d1d349-d63d-498b-ae15-3121f9ae73f8\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.936016 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4"] Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.945741 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lzt8"] Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.946816 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lzt8" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.951053 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-78jxg" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.956702 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk5b8\" (UniqueName: \"kubernetes.io/projected/e0d1d349-d63d-498b-ae15-3121f9ae73f8-kube-api-access-rk5b8\") pod \"watcher-operator-controller-manager-6dd88c6f67-kv8b2\" (UID: \"e0d1d349-d63d-498b-ae15-3121f9ae73f8\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2" Mar 13 10:21:58 crc kubenswrapper[4632]: I0313 10:21:58.971734 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lzt8"] Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.031644 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltgld\" (UniqueName: \"kubernetes.io/projected/3fdb377f-5a78-4687-82e1-50718514290d-kube-api-access-ltgld\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.031985 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp7js\" (UniqueName: \"kubernetes.io/projected/daba1153-3b28-4234-8dd0-ec20160abbfe-kube-api-access-dp7js\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2lzt8\" (UID: \"daba1153-3b28-4234-8dd0-ec20160abbfe\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lzt8" Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.032009 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.032026 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:21:59 crc kubenswrapper[4632]: E0313 10:21:59.032515 4632 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 10:21:59 crc kubenswrapper[4632]: E0313 10:21:59.032574 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs podName:3fdb377f-5a78-4687-82e1-50718514290d nodeName:}" failed. No retries permitted until 2026-03-13 10:21:59.532558299 +0000 UTC m=+1093.555088432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs") pod "openstack-operator-controller-manager-85c677895b-thbc4" (UID: "3fdb377f-5a78-4687-82e1-50718514290d") : secret "webhook-server-cert" not found Mar 13 10:21:59 crc kubenswrapper[4632]: E0313 10:21:59.033727 4632 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 10:21:59 crc kubenswrapper[4632]: E0313 10:21:59.033756 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs podName:3fdb377f-5a78-4687-82e1-50718514290d nodeName:}" failed. No retries permitted until 2026-03-13 10:21:59.533747888 +0000 UTC m=+1093.556278011 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs") pod "openstack-operator-controller-manager-85c677895b-thbc4" (UID: "3fdb377f-5a78-4687-82e1-50718514290d") : secret "metrics-server-cert" not found Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.060791 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.092756 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltgld\" (UniqueName: \"kubernetes.io/projected/3fdb377f-5a78-4687-82e1-50718514290d-kube-api-access-ltgld\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.106257 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-f6c87"] Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.107741 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2" Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.132908 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp7js\" (UniqueName: \"kubernetes.io/projected/daba1153-3b28-4234-8dd0-ec20160abbfe-kube-api-access-dp7js\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2lzt8\" (UID: \"daba1153-3b28-4234-8dd0-ec20160abbfe\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lzt8" Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.197932 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs"] Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.204453 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp7js\" (UniqueName: \"kubernetes.io/projected/daba1153-3b28-4234-8dd0-ec20160abbfe-kube-api-access-dp7js\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2lzt8\" (UID: \"daba1153-3b28-4234-8dd0-ec20160abbfe\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lzt8" Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.212879 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s"] Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.340231 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert\") pod \"infra-operator-controller-manager-5995f4446f-flfxh\" (UID: \"1542a9c8-92f6-4bc9-8231-829f649b0b8f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh" Mar 13 10:21:59 crc kubenswrapper[4632]: E0313 10:21:59.341095 4632 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 13 10:21:59 crc kubenswrapper[4632]: E0313 10:21:59.341656 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert podName:1542a9c8-92f6-4bc9-8231-829f649b0b8f nodeName:}" failed. No retries permitted until 2026-03-13 10:22:01.341633679 +0000 UTC m=+1095.364163822 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert") pod "infra-operator-controller-manager-5995f4446f-flfxh" (UID: "1542a9c8-92f6-4bc9-8231-829f649b0b8f") : secret "infra-operator-webhook-server-cert" not found Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.354115 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lzt8" Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.553128 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.553169 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:21:59 crc kubenswrapper[4632]: E0313 10:21:59.555330 4632 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 10:21:59 crc kubenswrapper[4632]: E0313 10:21:59.555408 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs podName:3fdb377f-5a78-4687-82e1-50718514290d nodeName:}" failed. No retries permitted until 2026-03-13 10:22:00.555387908 +0000 UTC m=+1094.577918041 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs") pod "openstack-operator-controller-manager-85c677895b-thbc4" (UID: "3fdb377f-5a78-4687-82e1-50718514290d") : secret "metrics-server-cert" not found Mar 13 10:21:59 crc kubenswrapper[4632]: E0313 10:21:59.555922 4632 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 10:21:59 crc kubenswrapper[4632]: E0313 10:21:59.556014 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs podName:3fdb377f-5a78-4687-82e1-50718514290d nodeName:}" failed. No retries permitted until 2026-03-13 10:22:00.556001523 +0000 UTC m=+1094.578531656 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs") pod "openstack-operator-controller-manager-85c677895b-thbc4" (UID: "3fdb377f-5a78-4687-82e1-50718514290d") : secret "webhook-server-cert" not found Mar 13 10:21:59 crc kubenswrapper[4632]: W0313 10:21:59.741161 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8fc6f03_c43b_4ade_92a8_acc5537a4eeb.slice/crio-8b14b6b02d647f7be0233d6ae3c76aafb06cdac55c64087f8d1a18d6d170164d WatchSource:0}: Error finding container 8b14b6b02d647f7be0233d6ae3c76aafb06cdac55c64087f8d1a18d6d170164d: Status 404 returned error can't find the container with id 8b14b6b02d647f7be0233d6ae3c76aafb06cdac55c64087f8d1a18d6d170164d Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.801158 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw"] Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.822416 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c"] Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.832601 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-qg79l"] Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.844071 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-cfcgn"] Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.876231 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qg79l" event={"ID":"20f92131-aca4-41ea-9144-a23bd9216f49","Type":"ContainerStarted","Data":"1b12cd6affbbba27a91255f8a7d15ae6836df88d5d0c65e64972b933b667f736"} Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.879788 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm"] Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.886552 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v927j\" (UID: \"2d221857-ee77-4165-a351-ecd5fc424970\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" Mar 13 10:21:59 crc kubenswrapper[4632]: E0313 10:21:59.886799 4632 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 10:21:59 crc kubenswrapper[4632]: E0313 10:21:59.888278 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert podName:2d221857-ee77-4165-a351-ecd5fc424970 nodeName:}" failed. No retries permitted until 2026-03-13 10:22:01.888255306 +0000 UTC m=+1095.910785439 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7v927j" (UID: "2d221857-ee77-4165-a351-ecd5fc424970") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.890347 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s" event={"ID":"9a963f9c-ac58-4e21-abfa-fca1279a192d","Type":"ContainerStarted","Data":"b828a39b1e00247b03288b6f0c7d7291f0cebe51d87ba54cc05d5c9092c5939a"} Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.903428 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c" event={"ID":"9040a0e0-2a56-4331-ba50-b19ff05ef0c0","Type":"ContainerStarted","Data":"ecdb7d21968cab437082542a10cb34e09bc2b54ea5965a8ce16bde9dfc8089f3"} Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.911789 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82"] Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.916427 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-cfcgn" event={"ID":"75d652c7-8521-4039-913a-fa625f89b094","Type":"ContainerStarted","Data":"c2dfe6e0745050c82493b5f6751e24433eca74094dba7adfe32b436c8ed15f81"} Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.922667 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c"] Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.933214 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-bkmbn"] Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.933515 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" event={"ID":"68c5eb80-4214-42c5-a08d-de6012969621","Type":"ContainerStarted","Data":"afddeda00568af10f04e19906cded4b4cc285f511804b2870e8288b8e948d902"} Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.940478 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw" event={"ID":"c8fc6f03-c43b-4ade-92a8-acc5537a4eeb","Type":"ContainerStarted","Data":"8b14b6b02d647f7be0233d6ae3c76aafb06cdac55c64087f8d1a18d6d170164d"} Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.944633 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-f6c87" event={"ID":"3f3a462e-4d89-45b3-8611-181aca5f8558","Type":"ContainerStarted","Data":"068122cb5d809545fff29ef97ce88c07259a8c6b589d361afe704f82996ee9df"} Mar 13 10:21:59 crc kubenswrapper[4632]: I0313 10:21:59.975375 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-sxw8d"] Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.134535 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556622-7428t"] Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.135770 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556622-7428t" Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.139471 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.139807 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.146235 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556622-7428t"] Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.148707 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.173151 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lzt8"] Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.196714 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb5hx\" (UniqueName: \"kubernetes.io/projected/bedc1d17-f5c4-4a62-ab0c-f20a002e859b-kube-api-access-mb5hx\") pod \"auto-csr-approver-29556622-7428t\" (UID: \"bedc1d17-f5c4-4a62-ab0c-f20a002e859b\") " pod="openshift-infra/auto-csr-approver-29556622-7428t" Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.216992 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf"] Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.251000 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2"] Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.255039 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n"] Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.266910 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-qbfg2"] Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.279608 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-628ss"] Mar 13 10:22:00 crc kubenswrapper[4632]: W0313 10:22:00.279738 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d8a9f3a_6631_4c1e_8381_3bc313837ca0.slice/crio-318f9130b56f9833106599e46f4be878dc926c055c7a426d7e877917e09016bd WatchSource:0}: Error finding container 318f9130b56f9833106599e46f4be878dc926c055c7a426d7e877917e09016bd: Status 404 returned error can't find the container with id 318f9130b56f9833106599e46f4be878dc926c055c7a426d7e877917e09016bd Mar 13 10:22:00 crc kubenswrapper[4632]: W0313 10:22:00.284294 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd04e9aa6_f234_4ffa_81e2_1a2407addb77.slice/crio-ec0c150d251d0c1342ab20ef9e2f5047cf5ccd875f9bcb8b322ae58b99a25992 WatchSource:0}: Error finding container ec0c150d251d0c1342ab20ef9e2f5047cf5ccd875f9bcb8b322ae58b99a25992: Status 404 returned error can't find the container with id ec0c150d251d0c1342ab20ef9e2f5047cf5ccd875f9bcb8b322ae58b99a25992 Mar 13 10:22:00 crc kubenswrapper[4632]: E0313 10:22:00.297383 4632 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qp7zl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-569cc54c5-628ss_openstack-operators(d04e9aa6-f234-4ffa-81e2-1a2407addb77): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 13 10:22:00 crc kubenswrapper[4632]: E0313 10:22:00.298581 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" podUID="d04e9aa6-f234-4ffa-81e2-1a2407addb77" Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.300621 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb5hx\" (UniqueName: \"kubernetes.io/projected/bedc1d17-f5c4-4a62-ab0c-f20a002e859b-kube-api-access-mb5hx\") pod \"auto-csr-approver-29556622-7428t\" (UID: \"bedc1d17-f5c4-4a62-ab0c-f20a002e859b\") " pod="openshift-infra/auto-csr-approver-29556622-7428t" Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.310187 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np"] Mar 13 10:22:00 crc kubenswrapper[4632]: E0313 10:22:00.323307 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rk5b8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6dd88c6f67-kv8b2_openstack-operators(e0d1d349-d63d-498b-ae15-3121f9ae73f8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 13 10:22:00 crc kubenswrapper[4632]: E0313 10:22:00.323336 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nt4gr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-574d45c66c-qkr9n_openstack-operators(e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 13 10:22:00 crc kubenswrapper[4632]: E0313 10:22:00.327091 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n" podUID="e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5" Mar 13 10:22:00 crc kubenswrapper[4632]: E0313 10:22:00.327174 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2" podUID="e0d1d349-d63d-498b-ae15-3121f9ae73f8" Mar 13 10:22:00 crc kubenswrapper[4632]: E0313 10:22:00.331137 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:27c84b712abc2df6108e22636075eec25fea0229800f38594a492fd41b02c49d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fmz6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6cd66dbd4b-nt7np_openstack-operators(ee081327-4c3f-4c0a-9085-71085c6487b5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 13 10:22:00 crc kubenswrapper[4632]: E0313 10:22:00.333479 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" podUID="ee081327-4c3f-4c0a-9085-71085c6487b5" Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.335172 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq"] Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.347299 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb5hx\" (UniqueName: \"kubernetes.io/projected/bedc1d17-f5c4-4a62-ab0c-f20a002e859b-kube-api-access-mb5hx\") pod \"auto-csr-approver-29556622-7428t\" (UID: \"bedc1d17-f5c4-4a62-ab0c-f20a002e859b\") " pod="openshift-infra/auto-csr-approver-29556622-7428t" Mar 13 10:22:00 crc kubenswrapper[4632]: E0313 10:22:00.373335 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j4vf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5c5cb9c4d7-jwrgq_openstack-operators(7bab78c8-7dac-48dc-a426-ccd4ae00a428): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 13 10:22:00 crc kubenswrapper[4632]: E0313 10:22:00.374473 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" podUID="7bab78c8-7dac-48dc-a426-ccd4ae00a428" Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.460802 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556622-7428t" Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.610840 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.611225 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:22:00 crc kubenswrapper[4632]: E0313 10:22:00.611437 4632 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 10:22:00 crc kubenswrapper[4632]: E0313 10:22:00.611493 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs podName:3fdb377f-5a78-4687-82e1-50718514290d nodeName:}" failed. 
No retries permitted until 2026-03-13 10:22:02.611475477 +0000 UTC m=+1096.634005610 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs") pod "openstack-operator-controller-manager-85c677895b-thbc4" (UID: "3fdb377f-5a78-4687-82e1-50718514290d") : secret "webhook-server-cert" not found Mar 13 10:22:00 crc kubenswrapper[4632]: E0313 10:22:00.611858 4632 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 10:22:00 crc kubenswrapper[4632]: E0313 10:22:00.611899 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs podName:3fdb377f-5a78-4687-82e1-50718514290d nodeName:}" failed. No retries permitted until 2026-03-13 10:22:02.611889296 +0000 UTC m=+1096.634419429 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs") pod "openstack-operator-controller-manager-85c677895b-thbc4" (UID: "3fdb377f-5a78-4687-82e1-50718514290d") : secret "metrics-server-cert" not found Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.927158 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556622-7428t"] Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.970680 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" event={"ID":"7bab78c8-7dac-48dc-a426-ccd4ae00a428","Type":"ContainerStarted","Data":"0cc9f256adf45e067289c3c09cf5a3b887718e0ed4aebb00f8bdea99acfc8399"} Mar 13 10:22:00 crc kubenswrapper[4632]: E0313 10:22:00.972052 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" podUID="7bab78c8-7dac-48dc-a426-ccd4ae00a428" Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.974111 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" event={"ID":"f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda","Type":"ContainerStarted","Data":"238d41f4e6a11f56c98923b450487bbb345510f86b4677377ab990be7a3b5c6a"} Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.983070 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c" event={"ID":"ff6d4dcb-9eb8-44fc-951e-f2aecd77a639","Type":"ContainerStarted","Data":"86b1d28feab0bd93d02ef0640d701cb5efd494a7316382881d4f7b85938d06cb"} Mar 13 10:22:00 crc kubenswrapper[4632]: I0313 10:22:00.986477 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lzt8" event={"ID":"daba1153-3b28-4234-8dd0-ec20160abbfe","Type":"ContainerStarted","Data":"48107bf864a23956d2ba690d946ba69cb98e569466221aab236b2174d3456fce"} Mar 13 10:22:01 crc kubenswrapper[4632]: I0313 10:22:01.002836 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" 
event={"ID":"ee081327-4c3f-4c0a-9085-71085c6487b5","Type":"ContainerStarted","Data":"5252ec9dd71bcfd5c302107c0607594a44f45cfd9ef669b450dbea52c1fe870c"} Mar 13 10:22:01 crc kubenswrapper[4632]: I0313 10:22:01.005056 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2" event={"ID":"e0d1d349-d63d-498b-ae15-3121f9ae73f8","Type":"ContainerStarted","Data":"8cab672d4a6a2d22b35fd58f618ec6c32836e4e3a60fbd42506cd93f4230fcbb"} Mar 13 10:22:01 crc kubenswrapper[4632]: E0313 10:22:01.010499 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:27c84b712abc2df6108e22636075eec25fea0229800f38594a492fd41b02c49d\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" podUID="ee081327-4c3f-4c0a-9085-71085c6487b5" Mar 13 10:22:01 crc kubenswrapper[4632]: I0313 10:22:01.011487 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm" event={"ID":"9e1d6ac6-c4ee-4381-86b8-c337f8c2d6a5","Type":"ContainerStarted","Data":"523cef2b6f8900fac046ee029faf34f4d6f4fadca57bf80c65f3eab1f3b235fd"} Mar 13 10:22:01 crc kubenswrapper[4632]: E0313 10:22:01.011538 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2" podUID="e0d1d349-d63d-498b-ae15-3121f9ae73f8" Mar 13 10:22:01 crc kubenswrapper[4632]: I0313 10:22:01.017955 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" event={"ID":"0a9d48f4-d68b-4ef9-826e-ed619c761405","Type":"ContainerStarted","Data":"fad94d640a989a658da91c9c469a80976afb4335402e17bf8610cd318e33e922"} Mar 13 10:22:01 crc kubenswrapper[4632]: I0313 10:22:01.029343 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" event={"ID":"d04e9aa6-f234-4ffa-81e2-1a2407addb77","Type":"ContainerStarted","Data":"ec0c150d251d0c1342ab20ef9e2f5047cf5ccd875f9bcb8b322ae58b99a25992"} Mar 13 10:22:01 crc kubenswrapper[4632]: E0313 10:22:01.035363 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922\\\"\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" podUID="d04e9aa6-f234-4ffa-81e2-1a2407addb77" Mar 13 10:22:01 crc kubenswrapper[4632]: I0313 10:22:01.040017 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-bkmbn" event={"ID":"c33d0da9-5a04-42d6-80d3-2f558b4a90b0","Type":"ContainerStarted","Data":"cd5d441f5b93e630c3961ba5f4b1df277192e39becf2051dc9847a995c777521"} Mar 13 10:22:01 crc kubenswrapper[4632]: I0313 10:22:01.055233 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-sxw8d" 
event={"ID":"7b491335-6a73-46de-8098-f27ff4c6f795","Type":"ContainerStarted","Data":"a1959e840f4a0b201ad8fbaf6f64fc480857a670b1237564c4fb9ad0250ca5ea"} Mar 13 10:22:01 crc kubenswrapper[4632]: I0313 10:22:01.067529 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n" event={"ID":"e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5","Type":"ContainerStarted","Data":"8960cdc79b8729e94514f9d4f70168401027047a85c476a873705f12f51c1873"} Mar 13 10:22:01 crc kubenswrapper[4632]: E0313 10:22:01.070136 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978\\\"\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n" podUID="e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5" Mar 13 10:22:01 crc kubenswrapper[4632]: I0313 10:22:01.074811 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-qbfg2" event={"ID":"2d8a9f3a-6631-4c1e-8381-3bc313837ca0","Type":"ContainerStarted","Data":"318f9130b56f9833106599e46f4be878dc926c055c7a426d7e877917e09016bd"} Mar 13 10:22:01 crc kubenswrapper[4632]: I0313 10:22:01.429634 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert\") pod \"infra-operator-controller-manager-5995f4446f-flfxh\" (UID: \"1542a9c8-92f6-4bc9-8231-829f649b0b8f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh" Mar 13 10:22:01 crc kubenswrapper[4632]: E0313 10:22:01.429817 4632 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 13 10:22:01 crc kubenswrapper[4632]: E0313 10:22:01.429865 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert podName:1542a9c8-92f6-4bc9-8231-829f649b0b8f nodeName:}" failed. No retries permitted until 2026-03-13 10:22:05.429849961 +0000 UTC m=+1099.452380094 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert") pod "infra-operator-controller-manager-5995f4446f-flfxh" (UID: "1542a9c8-92f6-4bc9-8231-829f649b0b8f") : secret "infra-operator-webhook-server-cert" not found Mar 13 10:22:01 crc kubenswrapper[4632]: I0313 10:22:01.957452 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v927j\" (UID: \"2d221857-ee77-4165-a351-ecd5fc424970\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" Mar 13 10:22:01 crc kubenswrapper[4632]: E0313 10:22:01.959754 4632 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 10:22:01 crc kubenswrapper[4632]: E0313 10:22:01.959829 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert podName:2d221857-ee77-4165-a351-ecd5fc424970 nodeName:}" failed. No retries permitted until 2026-03-13 10:22:05.959809491 +0000 UTC m=+1099.982339624 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7v927j" (UID: "2d221857-ee77-4165-a351-ecd5fc424970") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 10:22:02 crc kubenswrapper[4632]: I0313 10:22:02.125656 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556622-7428t" event={"ID":"bedc1d17-f5c4-4a62-ab0c-f20a002e859b","Type":"ContainerStarted","Data":"235384aa6e3de087f78098e5ca2543f682f052a84f878bf4d06b8cb3d539f270"} Mar 13 10:22:02 crc kubenswrapper[4632]: E0313 10:22:02.138691 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978\\\"\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n" podUID="e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5" Mar 13 10:22:02 crc kubenswrapper[4632]: E0313 10:22:02.139490 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" podUID="7bab78c8-7dac-48dc-a426-ccd4ae00a428" Mar 13 10:22:02 crc kubenswrapper[4632]: E0313 10:22:02.139574 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922\\\"\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" podUID="d04e9aa6-f234-4ffa-81e2-1a2407addb77" Mar 13 10:22:02 crc kubenswrapper[4632]: E0313 10:22:02.139715 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2" podUID="e0d1d349-d63d-498b-ae15-3121f9ae73f8" Mar 13 10:22:02 crc kubenswrapper[4632]: E0313 10:22:02.141980 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:27c84b712abc2df6108e22636075eec25fea0229800f38594a492fd41b02c49d\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" podUID="ee081327-4c3f-4c0a-9085-71085c6487b5" Mar 13 10:22:02 crc kubenswrapper[4632]: I0313 10:22:02.691628 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:22:02 crc kubenswrapper[4632]: I0313 10:22:02.691688 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:22:02 crc kubenswrapper[4632]: E0313 10:22:02.691778 4632 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 10:22:02 crc kubenswrapper[4632]: E0313 10:22:02.691837 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs podName:3fdb377f-5a78-4687-82e1-50718514290d nodeName:}" failed. No retries permitted until 2026-03-13 10:22:06.69181931 +0000 UTC m=+1100.714349443 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs") pod "openstack-operator-controller-manager-85c677895b-thbc4" (UID: "3fdb377f-5a78-4687-82e1-50718514290d") : secret "metrics-server-cert" not found Mar 13 10:22:02 crc kubenswrapper[4632]: E0313 10:22:02.691870 4632 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 10:22:02 crc kubenswrapper[4632]: E0313 10:22:02.691907 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs podName:3fdb377f-5a78-4687-82e1-50718514290d nodeName:}" failed. No retries permitted until 2026-03-13 10:22:06.691896342 +0000 UTC m=+1100.714426475 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs") pod "openstack-operator-controller-manager-85c677895b-thbc4" (UID: "3fdb377f-5a78-4687-82e1-50718514290d") : secret "webhook-server-cert" not found Mar 13 10:22:04 crc kubenswrapper[4632]: I0313 10:22:04.154933 4632 generic.go:334] "Generic (PLEG): container finished" podID="bedc1d17-f5c4-4a62-ab0c-f20a002e859b" containerID="0e23e3344de45eadba8d2e2f7dead6b7591126ab6ec56a759524e9fc0c54694e" exitCode=0 Mar 13 10:22:04 crc kubenswrapper[4632]: I0313 10:22:04.155003 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556622-7428t" event={"ID":"bedc1d17-f5c4-4a62-ab0c-f20a002e859b","Type":"ContainerDied","Data":"0e23e3344de45eadba8d2e2f7dead6b7591126ab6ec56a759524e9fc0c54694e"} Mar 13 10:22:05 crc kubenswrapper[4632]: I0313 10:22:05.460285 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert\") pod \"infra-operator-controller-manager-5995f4446f-flfxh\" (UID: \"1542a9c8-92f6-4bc9-8231-829f649b0b8f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh" Mar 13 10:22:05 crc kubenswrapper[4632]: E0313 10:22:05.460497 4632 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 13 10:22:05 crc kubenswrapper[4632]: E0313 10:22:05.460763 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert podName:1542a9c8-92f6-4bc9-8231-829f649b0b8f nodeName:}" failed. No retries permitted until 2026-03-13 10:22:13.460738026 +0000 UTC m=+1107.483268249 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert") pod "infra-operator-controller-manager-5995f4446f-flfxh" (UID: "1542a9c8-92f6-4bc9-8231-829f649b0b8f") : secret "infra-operator-webhook-server-cert" not found Mar 13 10:22:05 crc kubenswrapper[4632]: I0313 10:22:05.966928 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v927j\" (UID: \"2d221857-ee77-4165-a351-ecd5fc424970\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" Mar 13 10:22:05 crc kubenswrapper[4632]: E0313 10:22:05.967133 4632 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 10:22:05 crc kubenswrapper[4632]: E0313 10:22:05.967236 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert podName:2d221857-ee77-4165-a351-ecd5fc424970 nodeName:}" failed. No retries permitted until 2026-03-13 10:22:13.967218485 +0000 UTC m=+1107.989748618 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7v927j" (UID: "2d221857-ee77-4165-a351-ecd5fc424970") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 10:22:06 crc kubenswrapper[4632]: I0313 10:22:06.674856 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556622-7428t" Mar 13 10:22:06 crc kubenswrapper[4632]: I0313 10:22:06.783414 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb5hx\" (UniqueName: \"kubernetes.io/projected/bedc1d17-f5c4-4a62-ab0c-f20a002e859b-kube-api-access-mb5hx\") pod \"bedc1d17-f5c4-4a62-ab0c-f20a002e859b\" (UID: \"bedc1d17-f5c4-4a62-ab0c-f20a002e859b\") " Mar 13 10:22:06 crc kubenswrapper[4632]: I0313 10:22:06.783849 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:22:06 crc kubenswrapper[4632]: I0313 10:22:06.783886 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:22:06 crc kubenswrapper[4632]: E0313 10:22:06.784019 4632 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 10:22:06 crc kubenswrapper[4632]: E0313 10:22:06.784076 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs podName:3fdb377f-5a78-4687-82e1-50718514290d nodeName:}" failed. No retries permitted until 2026-03-13 10:22:14.784059232 +0000 UTC m=+1108.806589365 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs") pod "openstack-operator-controller-manager-85c677895b-thbc4" (UID: "3fdb377f-5a78-4687-82e1-50718514290d") : secret "webhook-server-cert" not found Mar 13 10:22:06 crc kubenswrapper[4632]: E0313 10:22:06.784286 4632 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 10:22:06 crc kubenswrapper[4632]: E0313 10:22:06.784332 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs podName:3fdb377f-5a78-4687-82e1-50718514290d nodeName:}" failed. No retries permitted until 2026-03-13 10:22:14.784319668 +0000 UTC m=+1108.806849801 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs") pod "openstack-operator-controller-manager-85c677895b-thbc4" (UID: "3fdb377f-5a78-4687-82e1-50718514290d") : secret "metrics-server-cert" not found Mar 13 10:22:06 crc kubenswrapper[4632]: I0313 10:22:06.808298 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bedc1d17-f5c4-4a62-ab0c-f20a002e859b-kube-api-access-mb5hx" (OuterVolumeSpecName: "kube-api-access-mb5hx") pod "bedc1d17-f5c4-4a62-ab0c-f20a002e859b" (UID: "bedc1d17-f5c4-4a62-ab0c-f20a002e859b"). InnerVolumeSpecName "kube-api-access-mb5hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:22:06 crc kubenswrapper[4632]: I0313 10:22:06.885522 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb5hx\" (UniqueName: \"kubernetes.io/projected/bedc1d17-f5c4-4a62-ab0c-f20a002e859b-kube-api-access-mb5hx\") on node \"crc\" DevicePath \"\"" Mar 13 10:22:07 crc kubenswrapper[4632]: I0313 10:22:07.175927 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556622-7428t" event={"ID":"bedc1d17-f5c4-4a62-ab0c-f20a002e859b","Type":"ContainerDied","Data":"235384aa6e3de087f78098e5ca2543f682f052a84f878bf4d06b8cb3d539f270"} Mar 13 10:22:07 crc kubenswrapper[4632]: I0313 10:22:07.175984 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="235384aa6e3de087f78098e5ca2543f682f052a84f878bf4d06b8cb3d539f270" Mar 13 10:22:07 crc kubenswrapper[4632]: I0313 10:22:07.176056 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556622-7428t" Mar 13 10:22:07 crc kubenswrapper[4632]: I0313 10:22:07.734315 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556616-8xbbs"] Mar 13 10:22:07 crc kubenswrapper[4632]: I0313 10:22:07.738901 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556616-8xbbs"] Mar 13 10:22:08 crc kubenswrapper[4632]: I0313 10:22:08.081667 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21d462b-89d1-4844-9bfc-3f0cdf7727e9" path="/var/lib/kubelet/pods/c21d462b-89d1-4844-9bfc-3f0cdf7727e9/volumes" Mar 13 10:22:13 crc kubenswrapper[4632]: I0313 10:22:13.476445 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert\") pod \"infra-operator-controller-manager-5995f4446f-flfxh\" (UID: \"1542a9c8-92f6-4bc9-8231-829f649b0b8f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh" Mar 13 10:22:13 crc kubenswrapper[4632]: I0313 10:22:13.481166 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1542a9c8-92f6-4bc9-8231-829f649b0b8f-cert\") pod \"infra-operator-controller-manager-5995f4446f-flfxh\" (UID: \"1542a9c8-92f6-4bc9-8231-829f649b0b8f\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh" Mar 13 10:22:13 crc kubenswrapper[4632]: I0313 10:22:13.669839 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh" Mar 13 10:22:13 crc kubenswrapper[4632]: I0313 10:22:13.982432 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v927j\" (UID: \"2d221857-ee77-4165-a351-ecd5fc424970\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" Mar 13 10:22:13 crc kubenswrapper[4632]: I0313 10:22:13.985962 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d221857-ee77-4165-a351-ecd5fc424970-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v927j\" (UID: \"2d221857-ee77-4165-a351-ecd5fc424970\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" Mar 13 10:22:14 crc kubenswrapper[4632]: I0313 10:22:14.277240 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" Mar 13 10:22:14 crc kubenswrapper[4632]: E0313 10:22:14.795413 4632 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 10:22:14 crc kubenswrapper[4632]: I0313 10:22:14.796076 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:22:14 crc kubenswrapper[4632]: E0313 10:22:14.796126 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs podName:3fdb377f-5a78-4687-82e1-50718514290d nodeName:}" failed. No retries permitted until 2026-03-13 10:22:30.796106325 +0000 UTC m=+1124.818636458 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs") pod "openstack-operator-controller-manager-85c677895b-thbc4" (UID: "3fdb377f-5a78-4687-82e1-50718514290d") : secret "metrics-server-cert" not found Mar 13 10:22:14 crc kubenswrapper[4632]: I0313 10:22:14.796179 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:22:14 crc kubenswrapper[4632]: I0313 10:22:14.814675 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-webhook-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:22:15 crc kubenswrapper[4632]: E0313 10:22:15.304270 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:d9bffb59bb7f9f0a6cb103c3986fd2c1bdb13ce6349c39427a690858cbd754d6" Mar 13 10:22:15 crc kubenswrapper[4632]: E0313 10:22:15.304443 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:d9bffb59bb7f9f0a6cb103c3986fd2c1bdb13ce6349c39427a690858cbd754d6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pzvbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-6d9d6b584d-2rv7s_openstack-operators(9a963f9c-ac58-4e21-abfa-fca1279a192d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:22:15 crc kubenswrapper[4632]: E0313 10:22:15.307593 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s" podUID="9a963f9c-ac58-4e21-abfa-fca1279a192d" Mar 13 10:22:16 crc kubenswrapper[4632]: E0313 10:22:16.268745 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:d9bffb59bb7f9f0a6cb103c3986fd2c1bdb13ce6349c39427a690858cbd754d6\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s" podUID="9a963f9c-ac58-4e21-abfa-fca1279a192d" Mar 13 10:22:16 crc kubenswrapper[4632]: E0313 10:22:16.880048 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721" Mar 13 10:22:16 crc kubenswrapper[4632]: E0313 10:22:16.880261 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dl4tq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-776c5696bf-bkmbn_openstack-operators(c33d0da9-5a04-42d6-80d3-2f558b4a90b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:22:16 crc kubenswrapper[4632]: E0313 10:22:16.881499 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-bkmbn" podUID="c33d0da9-5a04-42d6-80d3-2f558b4a90b0" Mar 13 10:22:17 crc kubenswrapper[4632]: E0313 10:22:17.270893 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-bkmbn" podUID="c33d0da9-5a04-42d6-80d3-2f558b4a90b0" Mar 13 10:22:18 crc kubenswrapper[4632]: E0313 10:22:18.697438 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571" Mar 13 10:22:18 crc kubenswrapper[4632]: E0313 10:22:18.697604 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zsp6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4f55cb5c-62gpm_openstack-operators(9e1d6ac6-c4ee-4381-86b8-c337f8c2d6a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:22:18 crc kubenswrapper[4632]: E0313 10:22:18.700511 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm" podUID="9e1d6ac6-c4ee-4381-86b8-c337f8c2d6a5" Mar 13 10:22:19 crc kubenswrapper[4632]: E0313 10:22:19.282320 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm" podUID="9e1d6ac6-c4ee-4381-86b8-c337f8c2d6a5" Mar 13 10:22:19 crc kubenswrapper[4632]: E0313 10:22:19.435807 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:9182d1816c6fdb093d6328f1b0bf39296b9eccfa495f35e2198ec4764fa6288f" Mar 13 10:22:19 crc kubenswrapper[4632]: E0313 10:22:19.435997 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:9182d1816c6fdb093d6328f1b0bf39296b9eccfa495f35e2198ec4764fa6288f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-psrhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6bbb499bbc-wtzrw_openstack-operators(c8fc6f03-c43b-4ade-92a8-acc5537a4eeb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 10:22:19 crc kubenswrapper[4632]: E0313 10:22:19.437162 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw" podUID="c8fc6f03-c43b-4ade-92a8-acc5537a4eeb"
Mar 13 10:22:20 crc kubenswrapper[4632]: E0313 10:22:20.290078 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:9182d1816c6fdb093d6328f1b0bf39296b9eccfa495f35e2198ec4764fa6288f\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw" podUID="c8fc6f03-c43b-4ade-92a8-acc5537a4eeb"
Mar 13 10:22:21 crc kubenswrapper[4632]: E0313 10:22:21.313570 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f"
Mar 13 10:22:21 crc kubenswrapper[4632]: E0313 10:22:21.314587 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zsqv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bbc5b68f9-4m8kf_openstack-operators(0a9d48f4-d68b-4ef9-826e-ed619c761405): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 10:22:21 crc kubenswrapper[4632]: E0313 10:22:21.315832 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" podUID="0a9d48f4-d68b-4ef9-826e-ed619c761405"
Mar 13 10:22:21 crc kubenswrapper[4632]: E0313 10:22:21.841703 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:6c9aef12f50be0b974f5e35b0d69303e7f7b95e6db5d41bcdb2d9d1100e921a6"
Mar 13 10:22:21 crc kubenswrapper[4632]: E0313 10:22:21.841888 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:6c9aef12f50be0b974f5e35b0d69303e7f7b95e6db5d41bcdb2d9d1100e921a6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q9mvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-77b6666d85-cgh6c_openstack-operators(ff6d4dcb-9eb8-44fc-951e-f2aecd77a639): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 10:22:21 crc kubenswrapper[4632]: E0313 10:22:21.843889 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c" podUID="ff6d4dcb-9eb8-44fc-951e-f2aecd77a639"
Mar 13 10:22:22 crc kubenswrapper[4632]: E0313 10:22:22.305383 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:6c9aef12f50be0b974f5e35b0d69303e7f7b95e6db5d41bcdb2d9d1100e921a6\\\"\"" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c" podUID="ff6d4dcb-9eb8-44fc-951e-f2aecd77a639"
Mar 13 10:22:22 crc kubenswrapper[4632]: E0313 10:22:22.305777 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" podUID="0a9d48f4-d68b-4ef9-826e-ed619c761405"
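[Editor's note] The pattern above repeats per operator pod: a pull is canceled (ErrImagePull: context canceled), the pod worker reports the failed sync, and later syncs land in ImagePullBackOff. Between retries the kubelet applies an exponentially growing delay (documented defaults: 10s initial, doubled per failure, capped at 5 minutes). A minimal sketch of that schedule under those assumed defaults; illustrative only, not kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

// Sketch of the image-pull back-off schedule behind the ImagePullBackOff
// entries above, assuming the documented defaults (10s doubling to a 5m cap).
func main() {
	const maxDelay = 5 * time.Minute
	delay := 10 * time.Second
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("pull attempt %d failed; ImagePullBackOff for %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```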
Mar 13 10:22:22 crc kubenswrapper[4632]: E0313 10:22:22.547855 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:571f369855b0891a2b14e54a4c1c5ae2fbbd5de4c8fddd48e81033aad4b26423"
Mar 13 10:22:22 crc kubenswrapper[4632]: E0313 10:22:22.548048 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:571f369855b0891a2b14e54a4c1c5ae2fbbd5de4c8fddd48e81033aad4b26423,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96kl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-677bd678f7-wj9qs_openstack-operators(68c5eb80-4214-42c5-a08d-de6012969621): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 10:22:22 crc kubenswrapper[4632]: E0313 10:22:22.549263 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" podUID="68c5eb80-4214-42c5-a08d-de6012969621"
Mar 13 10:22:23 crc kubenswrapper[4632]: E0313 10:22:23.309645 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:571f369855b0891a2b14e54a4c1c5ae2fbbd5de4c8fddd48e81033aad4b26423\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" podUID="68c5eb80-4214-42c5-a08d-de6012969621"
Mar 13 10:22:24 crc kubenswrapper[4632]: E0313 10:22:24.663372 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
Mar 13 10:22:24 crc kubenswrapper[4632]: E0313 10:22:24.663538 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dp7js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-2lzt8_openstack-operators(daba1153-3b28-4234-8dd0-ec20160abbfe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 10:22:24 crc kubenswrapper[4632]: E0313 10:22:24.665549 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lzt8" podUID="daba1153-3b28-4234-8dd0-ec20160abbfe"
Mar 13 10:22:25 crc kubenswrapper[4632]: E0313 10:22:25.320369 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lzt8" podUID="daba1153-3b28-4234-8dd0-ec20160abbfe"
Mar 13 10:22:25 crc kubenswrapper[4632]: I0313 10:22:25.917565 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh"]
Mar 13 10:22:25 crc kubenswrapper[4632]: W0313 10:22:25.963358 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1542a9c8_92f6_4bc9_8231_829f649b0b8f.slice/crio-b11fbfeed73c462f309b92701ff644296a5764bdfe9b7decd3c8c75dab49c807 WatchSource:0}: Error finding container b11fbfeed73c462f309b92701ff644296a5764bdfe9b7decd3c8c75dab49c807: Status 404 returned error can't find the container with id b11fbfeed73c462f309b92701ff644296a5764bdfe9b7decd3c8c75dab49c807
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.029288 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j"]
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.326787 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-sxw8d" event={"ID":"7b491335-6a73-46de-8098-f27ff4c6f795","Type":"ContainerStarted","Data":"bee39db93b042daffd1b30df6ba6bf123a3260d5b83250e53a8fd157f3c524e7"}
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.327270 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-sxw8d"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.328214 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" event={"ID":"f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda","Type":"ContainerStarted","Data":"b79dde4b0109a751bfba6b9882a550b5aaf0de838fae99b2eeecdc581770755b"}
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.328963 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.329582 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" event={"ID":"ee081327-4c3f-4c0a-9085-71085c6487b5","Type":"ContainerStarted","Data":"67deb5497e0835791a752b16658e0c0dc0da7d1e14342650d8635b85be9aef64"}
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.329923 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.331310 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c" event={"ID":"9040a0e0-2a56-4331-ba50-b19ff05ef0c0","Type":"ContainerStarted","Data":"6c302ccc4aed3f78a784b9cb4fb4353ddd24cf9520fc3acd91084005d77482dd"}
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.331705 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.333780 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" event={"ID":"d04e9aa6-f234-4ffa-81e2-1a2407addb77","Type":"ContainerStarted","Data":"c892cdb883fb6ba8b138b2e24e260c237c4f7bb1b18836e7c594d3bacb5fb705"}
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.334000 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.335084 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh" event={"ID":"1542a9c8-92f6-4bc9-8231-829f649b0b8f","Type":"ContainerStarted","Data":"b11fbfeed73c462f309b92701ff644296a5764bdfe9b7decd3c8c75dab49c807"}
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.337226 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n" event={"ID":"e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5","Type":"ContainerStarted","Data":"6f0852ca13597fb9fc03773f58fd247d53043d5908672daa18e3613508f67bbf"}
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.337454 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.338588 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-cfcgn" event={"ID":"75d652c7-8521-4039-913a-fa625f89b094","Type":"ContainerStarted","Data":"83cb70749e54e22ecd698ccb4729792c40e2011f55f7d20f7370432c8a980067"}
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.338735 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-cfcgn"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.339781 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" event={"ID":"7bab78c8-7dac-48dc-a426-ccd4ae00a428","Type":"ContainerStarted","Data":"b3fcfa0533b27651d8f7065fb7e5b6efa7d64fd553b96209205f07ed6ffdad34"}
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.340361 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.341928 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-f6c87" event={"ID":"3f3a462e-4d89-45b3-8611-181aca5f8558","Type":"ContainerStarted","Data":"9afae6bee180d74fb018ae3b7e7ec98295c2bc4a1335f0f9701be32d94102d7d"}
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.342344 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-f6c87"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.343679 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2" event={"ID":"e0d1d349-d63d-498b-ae15-3121f9ae73f8","Type":"ContainerStarted","Data":"0bec7dcb8f757d70baa9601d9b792a7b74e7b77c5edaedad986391dec0f864b4"}
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.344033 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.345433 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-qbfg2" event={"ID":"2d8a9f3a-6631-4c1e-8381-3bc313837ca0","Type":"ContainerStarted","Data":"ca77cf3221a73d98b6d316f0f4b658716c276f2431b18768ebe579b4b936ce38"}
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.345763 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-qbfg2"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.348069 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qg79l" event={"ID":"20f92131-aca4-41ea-9144-a23bd9216f49","Type":"ContainerStarted","Data":"27e7fda6eecb21101ee5ce1ea41810a19ec88a60ab7e4cd48cde9037921e12a2"}
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.348210 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qg79l"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.349430 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" event={"ID":"2d221857-ee77-4165-a351-ecd5fc424970","Type":"ContainerStarted","Data":"f8df7e9acd4d145a93e181d8d0cc2a801323ffcbd2475c34f324413423693526"}
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.378284 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-sxw8d" podStartSLOduration=5.138984984 podStartE2EDuration="29.378263588s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:21:59.99174984 +0000 UTC m=+1094.014279973" lastFinishedPulling="2026-03-13 10:22:24.231028444 +0000 UTC m=+1118.253558577" observedRunningTime="2026-03-13 10:22:26.37796464 +0000 UTC m=+1120.400494773" watchObservedRunningTime="2026-03-13 10:22:26.378263588 +0000 UTC m=+1120.400793721"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.411636 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" podStartSLOduration=4.089249426 podStartE2EDuration="29.411618624s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:22:00.330913518 +0000 UTC m=+1094.353443651" lastFinishedPulling="2026-03-13 10:22:25.653282716 +0000 UTC m=+1119.675812849" observedRunningTime="2026-03-13 10:22:26.405964049 +0000 UTC m=+1120.428494202" watchObservedRunningTime="2026-03-13 10:22:26.411618624 +0000 UTC m=+1120.434148757"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.485430 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-cfcgn" podStartSLOduration=5.116452755 podStartE2EDuration="29.485408349s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:21:59.811537652 +0000 UTC m=+1093.834067785" lastFinishedPulling="2026-03-13 10:22:24.180493246 +0000 UTC m=+1118.203023379" observedRunningTime="2026-03-13 10:22:26.482877928 +0000 UTC m=+1120.505408081" watchObservedRunningTime="2026-03-13 10:22:26.485408349 +0000 UTC m=+1120.507938482"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.487508 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qg79l" podStartSLOduration=5.111995689 podStartE2EDuration="29.487488099s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:21:59.804998906 +0000 UTC m=+1093.827529049" lastFinishedPulling="2026-03-13 10:22:24.180491336 +0000 UTC m=+1118.203021459" observedRunningTime="2026-03-13 10:22:26.442513023 +0000 UTC m=+1120.465043166" watchObservedRunningTime="2026-03-13 10:22:26.487488099 +0000 UTC m=+1120.510018232"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.510680 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n" podStartSLOduration=4.140230447 podStartE2EDuration="29.510656762s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:22:00.323160284 +0000 UTC m=+1094.345690417" lastFinishedPulling="2026-03-13 10:22:25.693586599 +0000 UTC m=+1119.716116732" observedRunningTime="2026-03-13 10:22:26.498501231 +0000 UTC m=+1120.521031374" watchObservedRunningTime="2026-03-13 10:22:26.510656762 +0000 UTC m=+1120.533186895"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.536674 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2" podStartSLOduration=4.209780851 podStartE2EDuration="29.536649384s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:22:00.323127443 +0000 UTC m=+1094.345657576" lastFinishedPulling="2026-03-13 10:22:25.649995986 +0000 UTC m=+1119.672526109" observedRunningTime="2026-03-13 10:22:26.532022183 +0000 UTC m=+1120.554552326" watchObservedRunningTime="2026-03-13 10:22:26.536649384 +0000 UTC m=+1120.559179517"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.632752 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-677c674df7-qbfg2" podStartSLOduration=5.73655299 podStartE2EDuration="29.632726141s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:22:00.284458628 +0000 UTC m=+1094.306988761" lastFinishedPulling="2026-03-13 10:22:24.180631779 +0000 UTC m=+1118.203161912" observedRunningTime="2026-03-13 10:22:26.631113562 +0000 UTC m=+1120.653643695" watchObservedRunningTime="2026-03-13 10:22:26.632726141 +0000 UTC m=+1120.655256274"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.632984 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" podStartSLOduration=5.348263786 podStartE2EDuration="29.632976136s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:21:59.945397982 +0000 UTC m=+1093.967928115" lastFinishedPulling="2026-03-13 10:22:24.230110332 +0000 UTC m=+1118.252640465" observedRunningTime="2026-03-13 10:22:26.573739551 +0000 UTC m=+1120.596269684" watchObservedRunningTime="2026-03-13 10:22:26.632976136 +0000 UTC m=+1120.655506279"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.728819 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" podStartSLOduration=4.42018119 podStartE2EDuration="29.728801688s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:22:00.373173529 +0000 UTC m=+1094.395703662" lastFinishedPulling="2026-03-13 10:22:25.681794027 +0000 UTC m=+1119.704324160" observedRunningTime="2026-03-13 10:22:26.685173455 +0000 UTC m=+1120.707703608" watchObservedRunningTime="2026-03-13 10:22:26.728801688 +0000 UTC m=+1120.751331821"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.729634 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-f6c87" podStartSLOduration=4.455868523 podStartE2EDuration="29.729626907s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:21:58.957204968 +0000 UTC m=+1092.979735101" lastFinishedPulling="2026-03-13 10:22:24.230963352 +0000 UTC m=+1118.253493485" observedRunningTime="2026-03-13 10:22:26.725890988 +0000 UTC m=+1120.748421131" watchObservedRunningTime="2026-03-13 10:22:26.729626907 +0000 UTC m=+1120.752157050"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.775521 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c" podStartSLOduration=5.402674949 podStartE2EDuration="29.775498444s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:21:59.807902536 +0000 UTC m=+1093.830432669" lastFinishedPulling="2026-03-13 10:22:24.180726031 +0000 UTC m=+1118.203256164" observedRunningTime="2026-03-13 10:22:26.770139855 +0000 UTC m=+1120.792669988" watchObservedRunningTime="2026-03-13 10:22:26.775498444 +0000 UTC m=+1120.798028587"
Mar 13 10:22:26 crc kubenswrapper[4632]: I0313 10:22:26.801007 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" podStartSLOduration=4.448190179 podStartE2EDuration="29.800983803s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:22:00.297223063 +0000 UTC m=+1094.319753186" lastFinishedPulling="2026-03-13 10:22:25.650016677 +0000 UTC m=+1119.672546810" observedRunningTime="2026-03-13 10:22:26.79708495 +0000 UTC m=+1120.819615083" watchObservedRunningTime="2026-03-13 10:22:26.800983803 +0000 UTC m=+1120.823513956"
Mar 13 10:22:30 crc kubenswrapper[4632]: I0313 10:22:30.873889 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4"
Mar 13 10:22:30 crc kubenswrapper[4632]: I0313 10:22:30.881774 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fdb377f-5a78-4687-82e1-50718514290d-metrics-certs\") pod \"openstack-operator-controller-manager-85c677895b-thbc4\" (UID: \"3fdb377f-5a78-4687-82e1-50718514290d\") " pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4"
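[Editor's note] In these latency-tracker lines, podStartSLOduration is the end-to-end startup time minus the image-pull window, i.e. startup latency excluding registry time. The manila-operator entry checks out exactly: 29.378263588s E2E minus the 24.239278604s between firstStartedPulling and lastFinishedPulling leaves the reported 5.138984984s. Verified with the timestamps from the log:

```go
package main

import (
	"fmt"
	"time"
)

// podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling),
// using the manila-operator values from the latency-tracker line above.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	first, _ := time.Parse(layout, "2026-03-13 10:21:59.99174984 +0000 UTC")
	last, _ := time.Parse(layout, "2026-03-13 10:22:24.231028444 +0000 UTC")
	e2e := 29378263588 * time.Nanosecond // podStartE2EDuration=29.378263588s
	fmt.Println(e2e - last.Sub(first))   // 5.138984984s = podStartSLOduration
}
```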
Mar 13 10:22:31 crc kubenswrapper[4632]: I0313 10:22:31.091890 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4"
Mar 13 10:22:31 crc kubenswrapper[4632]: I0313 10:22:31.559379 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4"]
Mar 13 10:22:32 crc kubenswrapper[4632]: I0313 10:22:32.407421 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" event={"ID":"3fdb377f-5a78-4687-82e1-50718514290d","Type":"ContainerStarted","Data":"8a9acb03a35bd7ffb9f9f29ea9128fffa2172945336e93c38861910e213e1a08"}
Mar 13 10:22:32 crc kubenswrapper[4632]: I0313 10:22:32.407748 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" event={"ID":"3fdb377f-5a78-4687-82e1-50718514290d","Type":"ContainerStarted","Data":"f8dc92a47be109899fa8b210d0bf31f4e2fade68a0378a4148b8c4c29ad21bf0"}
Mar 13 10:22:33 crc kubenswrapper[4632]: I0313 10:22:33.414028 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4"
Mar 13 10:22:35 crc kubenswrapper[4632]: I0313 10:22:35.062124 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" podStartSLOduration=37.062106769 podStartE2EDuration="37.062106769s" podCreationTimestamp="2026-03-13 10:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:22:33.452702793 +0000 UTC m=+1127.475232946" watchObservedRunningTime="2026-03-13 10:22:35.062106769 +0000 UTC m=+1129.084636902"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.444996 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c" event={"ID":"ff6d4dcb-9eb8-44fc-951e-f2aecd77a639","Type":"ContainerStarted","Data":"45408b4b3380104d170f7dbf124450f3cb3dfa6814358710ecd0eeefe0ba8dba"}
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.446179 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.447689 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s" event={"ID":"9a963f9c-ac58-4e21-abfa-fca1279a192d","Type":"ContainerStarted","Data":"5da171cf0f040b8c46b8f4cecc4cb7071dfd001352c9956b9ace45b281cc0e4d"}
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.448300 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.450553 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" event={"ID":"2d221857-ee77-4165-a351-ecd5fc424970","Type":"ContainerStarted","Data":"4bfe620f039be61e9b61330d9268e6476def1f3cae727bb56906984d0ef2ee19"}
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.450768 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.452675 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh" event={"ID":"1542a9c8-92f6-4bc9-8231-829f649b0b8f","Type":"ContainerStarted","Data":"9eb54935376ed0fc9a365f9c82f2f0039889e05f8d23c20aab04b22f4c557c96"}
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.453470 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.458040 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-bkmbn" event={"ID":"c33d0da9-5a04-42d6-80d3-2f558b4a90b0","Type":"ContainerStarted","Data":"45b2faabcb131c3fc71cefac2882855c661528a769ac150dbda2d0db01b89c2f"}
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.458775 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-bkmbn"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.459909 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw" event={"ID":"c8fc6f03-c43b-4ade-92a8-acc5537a4eeb","Type":"ContainerStarted","Data":"ff2ebee46d2497391bf84217958c1331ef2b33747c210e706193cc32df47b16b"}
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.460396 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.461502 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm" event={"ID":"9e1d6ac6-c4ee-4381-86b8-c337f8c2d6a5","Type":"ContainerStarted","Data":"97199dbb17e5b26ac45577f72ea8130835efdb3aa71d1d178692d4af3ee8d824"}
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.461996 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.463151 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" event={"ID":"0a9d48f4-d68b-4ef9-826e-ed619c761405","Type":"ContainerStarted","Data":"cf1a08befb3cdc6b962932151cafb742b4042bea18d08fd1d4b1d27bc01a5ba1"}
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.463617 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.483791 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c" podStartSLOduration=4.271047755 podStartE2EDuration="40.483772803s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:21:59.945375542 +0000 UTC m=+1093.967905675" lastFinishedPulling="2026-03-13 10:22:36.15810059 +0000 UTC m=+1130.180630723" observedRunningTime="2026-03-13 10:22:37.474960833 +0000 UTC m=+1131.497490966" watchObservedRunningTime="2026-03-13 10:22:37.483772803 +0000 UTC m=+1131.506302956"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.571178 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" podStartSLOduration=4.122712669 podStartE2EDuration="40.571154133s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:22:00.271255833 +0000 UTC m=+1094.293785966" lastFinishedPulling="2026-03-13 10:22:36.719697297 +0000 UTC m=+1130.742227430" observedRunningTime="2026-03-13 10:22:37.545326275 +0000 UTC m=+1131.567856408" watchObservedRunningTime="2026-03-13 10:22:37.571154133 +0000 UTC m=+1131.593684276"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.625548 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-f6c87"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.640971 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm" podStartSLOduration=4.412925336 podStartE2EDuration="40.64093057s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:21:59.928227802 +0000 UTC m=+1093.950757935" lastFinishedPulling="2026-03-13 10:22:36.156233026 +0000 UTC m=+1130.178763169" observedRunningTime="2026-03-13 10:22:37.612131552 +0000 UTC m=+1131.634661685" watchObservedRunningTime="2026-03-13 10:22:37.64093057 +0000 UTC m=+1131.663460703"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.641794 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh" podStartSLOduration=30.454470456 podStartE2EDuration="40.641786291s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:22:25.96885446 +0000 UTC m=+1119.991384593" lastFinishedPulling="2026-03-13 10:22:36.156170295 +0000 UTC m=+1130.178700428" observedRunningTime="2026-03-13 10:22:37.6346361 +0000 UTC m=+1131.657166233" watchObservedRunningTime="2026-03-13 10:22:37.641786291 +0000 UTC m=+1131.664316424"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.653698 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-cfcgn"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.673061 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw" podStartSLOduration=4.320086827 podStartE2EDuration="40.673040408s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:21:59.804932244 +0000 UTC m=+1093.827462377" lastFinishedPulling="2026-03-13 10:22:36.157885825 +0000 UTC m=+1130.180415958" observedRunningTime="2026-03-13 10:22:37.664434362 +0000 UTC m=+1131.686964515" watchObservedRunningTime="2026-03-13 10:22:37.673040408 +0000 UTC m=+1131.695570551"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.682573 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qg79l"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.703810 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-bkmbn" podStartSLOduration=4.485414439 podStartE2EDuration="40.703791463s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:21:59.945462594 +0000 UTC m=+1093.967992727" lastFinishedPulling="2026-03-13 10:22:36.163839608 +0000 UTC m=+1130.186369751" observedRunningTime="2026-03-13 10:22:37.698092647 +0000 UTC m=+1131.720622780" watchObservedRunningTime="2026-03-13 10:22:37.703791463 +0000 UTC m=+1131.726321596"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.797632 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s" podStartSLOduration=3.871204305 podStartE2EDuration="40.797615526s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:21:59.229813355 +0000 UTC m=+1093.252343488" lastFinishedPulling="2026-03-13 10:22:36.156224576 +0000 UTC m=+1130.178754709" observedRunningTime="2026-03-13 10:22:37.793047797 +0000 UTC m=+1131.815577930" watchObservedRunningTime="2026-03-13 10:22:37.797615526 +0000 UTC m=+1131.820145659"
Mar 13 10:22:37 crc kubenswrapper[4632]: I0313 10:22:37.797809 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" podStartSLOduration=30.747011571 podStartE2EDuration="40.797805601s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:22:26.105445996 +0000 UTC m=+1120.127976129" lastFinishedPulling="2026-03-13 10:22:36.156240026 +0000 UTC m=+1130.178770159" observedRunningTime="2026-03-13 10:22:37.765662233 +0000 UTC m=+1131.788192366" watchObservedRunningTime="2026-03-13 10:22:37.797805601 +0000 UTC m=+1131.820335734"
Mar 13 10:22:38 crc kubenswrapper[4632]: I0313 10:22:38.001146 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c"
Mar 13 10:22:38 crc kubenswrapper[4632]: I0313 10:22:38.189889 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82"
Mar 13 10:22:38 crc kubenswrapper[4632]: I0313 10:22:38.238532 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-sxw8d"
Mar 13 10:22:38 crc kubenswrapper[4632]: I0313 10:22:38.332035 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss"
Mar 13 10:22:38 crc kubenswrapper[4632]: I0313 10:22:38.470612 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" event={"ID":"68c5eb80-4214-42c5-a08d-de6012969621","Type":"ContainerStarted","Data":"579d286b9eb7e56fb8f1cb6d18127cc0ece5c920fbbbc7e2c67943e4800bb183"}
Mar 13 10:22:38 crc kubenswrapper[4632]: I0313 10:22:38.504894 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" podStartSLOduration=3.244169395 podStartE2EDuration="41.504872785s" podCreationTimestamp="2026-03-13 10:21:57 +0000 UTC" firstStartedPulling="2026-03-13 10:21:59.218160496 +0000 UTC m=+1093.240690629" lastFinishedPulling="2026-03-13 10:22:37.478863886 +0000 UTC m=+1131.501394019" observedRunningTime="2026-03-13 10:22:38.501107125 +0000 UTC m=+1132.523637258" watchObservedRunningTime="2026-03-13 10:22:38.504872785 +0000 UTC m=+1132.527402918"
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n" Mar 13 10:22:38 crc kubenswrapper[4632]: I0313 10:22:38.766887 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-677c674df7-qbfg2" Mar 13 10:22:38 crc kubenswrapper[4632]: I0313 10:22:38.864755 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" Mar 13 10:22:39 crc kubenswrapper[4632]: I0313 10:22:39.067082 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" Mar 13 10:22:39 crc kubenswrapper[4632]: I0313 10:22:39.110992 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2" Mar 13 10:22:40 crc kubenswrapper[4632]: I0313 10:22:40.484685 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lzt8" event={"ID":"daba1153-3b28-4234-8dd0-ec20160abbfe","Type":"ContainerStarted","Data":"28d68b5d61b4f271fabf0662eb66dc8da8eb38d3aab1b5660194b1a0aa44a4b3"} Mar 13 10:22:40 crc kubenswrapper[4632]: I0313 10:22:40.504024 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lzt8" podStartSLOduration=3.042150061 podStartE2EDuration="42.504008397s" podCreationTimestamp="2026-03-13 10:21:58 +0000 UTC" firstStartedPulling="2026-03-13 10:22:00.192875129 +0000 UTC m=+1094.215405262" lastFinishedPulling="2026-03-13 10:22:39.654733465 +0000 UTC m=+1133.677263598" observedRunningTime="2026-03-13 10:22:40.500041843 +0000 UTC m=+1134.522572006" watchObservedRunningTime="2026-03-13 10:22:40.504008397 +0000 UTC m=+1134.526538530" Mar 13 10:22:41 crc kubenswrapper[4632]: I0313 10:22:41.099914 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" Mar 13 10:22:43 crc kubenswrapper[4632]: I0313 10:22:43.676382 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh" Mar 13 10:22:44 crc kubenswrapper[4632]: I0313 10:22:44.284107 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" Mar 13 10:22:47 crc kubenswrapper[4632]: I0313 10:22:47.602832 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" Mar 13 10:22:47 crc kubenswrapper[4632]: I0313 10:22:47.606015 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" Mar 13 10:22:47 crc kubenswrapper[4632]: I0313 10:22:47.705179 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c" Mar 13 10:22:47 crc kubenswrapper[4632]: I0313 10:22:47.771475 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s" Mar 13 10:22:47 crc 
kubenswrapper[4632]: I0313 10:22:47.821227 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw" Mar 13 10:22:48 crc kubenswrapper[4632]: I0313 10:22:48.287199 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-bkmbn" Mar 13 10:22:48 crc kubenswrapper[4632]: I0313 10:22:48.388353 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm" Mar 13 10:22:48 crc kubenswrapper[4632]: I0313 10:22:48.504118 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" Mar 13 10:22:58 crc kubenswrapper[4632]: I0313 10:22:58.663182 4632 scope.go:117] "RemoveContainer" containerID="6a34c241348123944aa499915ed71c016789c868e3e563c2a1cb71763ed56ad8" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.189633 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b95c5c449-th4fn"] Mar 13 10:23:06 crc kubenswrapper[4632]: E0313 10:23:06.190439 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedc1d17-f5c4-4a62-ab0c-f20a002e859b" containerName="oc" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.190453 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedc1d17-f5c4-4a62-ab0c-f20a002e859b" containerName="oc" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.190587 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="bedc1d17-f5c4-4a62-ab0c-f20a002e859b" containerName="oc" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.191245 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b95c5c449-th4fn" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.204722 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-6rt6f" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.204956 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.205149 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.205289 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.222650 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b95c5c449-th4fn"] Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.240373 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c54n6\" (UniqueName: \"kubernetes.io/projected/dcb4500d-7a53-4091-b3af-394eb0f49130-kube-api-access-c54n6\") pod \"dnsmasq-dns-7b95c5c449-th4fn\" (UID: \"dcb4500d-7a53-4091-b3af-394eb0f49130\") " pod="openstack/dnsmasq-dns-7b95c5c449-th4fn" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.240442 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcb4500d-7a53-4091-b3af-394eb0f49130-config\") pod \"dnsmasq-dns-7b95c5c449-th4fn\" (UID: \"dcb4500d-7a53-4091-b3af-394eb0f49130\") " pod="openstack/dnsmasq-dns-7b95c5c449-th4fn" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.276725 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bd9cf7445-frlmw"] Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.277920 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bd9cf7445-frlmw" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.280520 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.294052 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bd9cf7445-frlmw"] Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.342186 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mmhg\" (UniqueName: \"kubernetes.io/projected/da131e9a-8968-4569-a970-3aa95b2a830b-kube-api-access-4mmhg\") pod \"dnsmasq-dns-bd9cf7445-frlmw\" (UID: \"da131e9a-8968-4569-a970-3aa95b2a830b\") " pod="openstack/dnsmasq-dns-bd9cf7445-frlmw" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.342263 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da131e9a-8968-4569-a970-3aa95b2a830b-config\") pod \"dnsmasq-dns-bd9cf7445-frlmw\" (UID: \"da131e9a-8968-4569-a970-3aa95b2a830b\") " pod="openstack/dnsmasq-dns-bd9cf7445-frlmw" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.342301 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c54n6\" (UniqueName: \"kubernetes.io/projected/dcb4500d-7a53-4091-b3af-394eb0f49130-kube-api-access-c54n6\") pod \"dnsmasq-dns-7b95c5c449-th4fn\" (UID: \"dcb4500d-7a53-4091-b3af-394eb0f49130\") " pod="openstack/dnsmasq-dns-7b95c5c449-th4fn" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.342368 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da131e9a-8968-4569-a970-3aa95b2a830b-dns-svc\") pod \"dnsmasq-dns-bd9cf7445-frlmw\" (UID: \"da131e9a-8968-4569-a970-3aa95b2a830b\") " pod="openstack/dnsmasq-dns-bd9cf7445-frlmw" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.342407 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcb4500d-7a53-4091-b3af-394eb0f49130-config\") pod \"dnsmasq-dns-7b95c5c449-th4fn\" (UID: \"dcb4500d-7a53-4091-b3af-394eb0f49130\") " pod="openstack/dnsmasq-dns-7b95c5c449-th4fn" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.343788 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcb4500d-7a53-4091-b3af-394eb0f49130-config\") pod \"dnsmasq-dns-7b95c5c449-th4fn\" (UID: \"dcb4500d-7a53-4091-b3af-394eb0f49130\") " pod="openstack/dnsmasq-dns-7b95c5c449-th4fn" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.372354 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c54n6\" (UniqueName: \"kubernetes.io/projected/dcb4500d-7a53-4091-b3af-394eb0f49130-kube-api-access-c54n6\") pod \"dnsmasq-dns-7b95c5c449-th4fn\" (UID: \"dcb4500d-7a53-4091-b3af-394eb0f49130\") " pod="openstack/dnsmasq-dns-7b95c5c449-th4fn" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.443824 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da131e9a-8968-4569-a970-3aa95b2a830b-dns-svc\") pod \"dnsmasq-dns-bd9cf7445-frlmw\" (UID: \"da131e9a-8968-4569-a970-3aa95b2a830b\") " pod="openstack/dnsmasq-dns-bd9cf7445-frlmw" Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 
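[Editor's note] The reconciler lines for the dnsmasq pods show the standard mount sequence: VerifyControllerAttachedVolume, then MountVolume for each of the pod's volumes, here two ConfigMap-backed volumes ("config", "dns-svc") plus a projected service-account token ("kube-api-access-*"); the "kubernetes.io/configmap/<podUID>-dns-svc" unique name identifies the volume plugin. A pod requests such a volume like this (a sketch; the volume name and the assumption that it is backed by the "dns-svc" ConfigMap populated in the reflector line above are taken from the log, the rest is generic):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// ConfigMap-backed volume like the "dns-svc" volume being mounted above
// (assumed to reference the "dns-svc" ConfigMap from the reflector line).
var vol = corev1.Volume{
	Name: "dns-svc",
	VolumeSource: corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "dns-svc"},
		},
	},
}

func main() { fmt.Printf("%+v\n", vol) }
```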
Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.444341 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mmhg\" (UniqueName: \"kubernetes.io/projected/da131e9a-8968-4569-a970-3aa95b2a830b-kube-api-access-4mmhg\") pod \"dnsmasq-dns-bd9cf7445-frlmw\" (UID: \"da131e9a-8968-4569-a970-3aa95b2a830b\") " pod="openstack/dnsmasq-dns-bd9cf7445-frlmw"
Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.444393 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da131e9a-8968-4569-a970-3aa95b2a830b-config\") pod \"dnsmasq-dns-bd9cf7445-frlmw\" (UID: \"da131e9a-8968-4569-a970-3aa95b2a830b\") " pod="openstack/dnsmasq-dns-bd9cf7445-frlmw"
Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.444932 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da131e9a-8968-4569-a970-3aa95b2a830b-dns-svc\") pod \"dnsmasq-dns-bd9cf7445-frlmw\" (UID: \"da131e9a-8968-4569-a970-3aa95b2a830b\") " pod="openstack/dnsmasq-dns-bd9cf7445-frlmw"
Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.445487 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da131e9a-8968-4569-a970-3aa95b2a830b-config\") pod \"dnsmasq-dns-bd9cf7445-frlmw\" (UID: \"da131e9a-8968-4569-a970-3aa95b2a830b\") " pod="openstack/dnsmasq-dns-bd9cf7445-frlmw"
Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.469661 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mmhg\" (UniqueName: \"kubernetes.io/projected/da131e9a-8968-4569-a970-3aa95b2a830b-kube-api-access-4mmhg\") pod \"dnsmasq-dns-bd9cf7445-frlmw\" (UID: \"da131e9a-8968-4569-a970-3aa95b2a830b\") " pod="openstack/dnsmasq-dns-bd9cf7445-frlmw"
Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.540683 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b95c5c449-th4fn"
Mar 13 10:23:06 crc kubenswrapper[4632]: I0313 10:23:06.600871 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bd9cf7445-frlmw"
Mar 13 10:23:07 crc kubenswrapper[4632]: I0313 10:23:07.075682 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b95c5c449-th4fn"]
Mar 13 10:23:07 crc kubenswrapper[4632]: I0313 10:23:07.158883 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bd9cf7445-frlmw"]
Mar 13 10:23:07 crc kubenswrapper[4632]: W0313 10:23:07.161305 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda131e9a_8968_4569_a970_3aa95b2a830b.slice/crio-55fcbfaa372a43ed729e32593f5c37c0b24f03c99a93bb7fe4dbc16e4c07e1b0 WatchSource:0}: Error finding container 55fcbfaa372a43ed729e32593f5c37c0b24f03c99a93bb7fe4dbc16e4c07e1b0: Status 404 returned error can't find the container with id 55fcbfaa372a43ed729e32593f5c37c0b24f03c99a93bb7fe4dbc16e4c07e1b0
Mar 13 10:23:07 crc kubenswrapper[4632]: I0313 10:23:07.669547 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b95c5c449-th4fn" event={"ID":"dcb4500d-7a53-4091-b3af-394eb0f49130","Type":"ContainerStarted","Data":"795a9fce6e217788c208128d366941b235a94e772dfaab8892b09ba717741562"}
Mar 13 10:23:07 crc kubenswrapper[4632]: I0313 10:23:07.675141 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bd9cf7445-frlmw" event={"ID":"da131e9a-8968-4569-a970-3aa95b2a830b","Type":"ContainerStarted","Data":"55fcbfaa372a43ed729e32593f5c37c0b24f03c99a93bb7fe4dbc16e4c07e1b0"}
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.101803 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b95c5c449-th4fn"]
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.130770 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dcf85566c-59l8m"]
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.134135 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dcf85566c-59l8m"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.146843 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dcf85566c-59l8m"]
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.204180 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea62b75b-fe31-433d-9ff1-a7333aacb383-dns-svc\") pod \"dnsmasq-dns-dcf85566c-59l8m\" (UID: \"ea62b75b-fe31-433d-9ff1-a7333aacb383\") " pod="openstack/dnsmasq-dns-dcf85566c-59l8m"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.204263 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-868r4\" (UniqueName: \"kubernetes.io/projected/ea62b75b-fe31-433d-9ff1-a7333aacb383-kube-api-access-868r4\") pod \"dnsmasq-dns-dcf85566c-59l8m\" (UID: \"ea62b75b-fe31-433d-9ff1-a7333aacb383\") " pod="openstack/dnsmasq-dns-dcf85566c-59l8m"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.204298 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea62b75b-fe31-433d-9ff1-a7333aacb383-config\") pod \"dnsmasq-dns-dcf85566c-59l8m\" (UID: \"ea62b75b-fe31-433d-9ff1-a7333aacb383\") " pod="openstack/dnsmasq-dns-dcf85566c-59l8m"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.305190 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-868r4\" (UniqueName: \"kubernetes.io/projected/ea62b75b-fe31-433d-9ff1-a7333aacb383-kube-api-access-868r4\") pod \"dnsmasq-dns-dcf85566c-59l8m\" (UID: \"ea62b75b-fe31-433d-9ff1-a7333aacb383\") " pod="openstack/dnsmasq-dns-dcf85566c-59l8m"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.305247 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea62b75b-fe31-433d-9ff1-a7333aacb383-config\") pod \"dnsmasq-dns-dcf85566c-59l8m\" (UID: \"ea62b75b-fe31-433d-9ff1-a7333aacb383\") " pod="openstack/dnsmasq-dns-dcf85566c-59l8m"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.305324 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea62b75b-fe31-433d-9ff1-a7333aacb383-dns-svc\") pod \"dnsmasq-dns-dcf85566c-59l8m\" (UID: \"ea62b75b-fe31-433d-9ff1-a7333aacb383\") " pod="openstack/dnsmasq-dns-dcf85566c-59l8m"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.306405 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea62b75b-fe31-433d-9ff1-a7333aacb383-dns-svc\") pod \"dnsmasq-dns-dcf85566c-59l8m\" (UID: \"ea62b75b-fe31-433d-9ff1-a7333aacb383\") " pod="openstack/dnsmasq-dns-dcf85566c-59l8m"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.306405 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea62b75b-fe31-433d-9ff1-a7333aacb383-config\") pod \"dnsmasq-dns-dcf85566c-59l8m\" (UID: \"ea62b75b-fe31-433d-9ff1-a7333aacb383\") " pod="openstack/dnsmasq-dns-dcf85566c-59l8m"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.346993 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-868r4\" (UniqueName: \"kubernetes.io/projected/ea62b75b-fe31-433d-9ff1-a7333aacb383-kube-api-access-868r4\") pod \"dnsmasq-dns-dcf85566c-59l8m\" (UID: \"ea62b75b-fe31-433d-9ff1-a7333aacb383\") " pod="openstack/dnsmasq-dns-dcf85566c-59l8m"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.467850 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bd9cf7445-frlmw"]
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.472510 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dcf85566c-59l8m"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.525415 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86545856d7-fkxhx"]
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.527422 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86545856d7-fkxhx"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.552637 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86545856d7-fkxhx"]
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.610172 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afad0f9-c29c-40e6-8605-1df67a505a82-config\") pod \"dnsmasq-dns-86545856d7-fkxhx\" (UID: \"7afad0f9-c29c-40e6-8605-1df67a505a82\") " pod="openstack/dnsmasq-dns-86545856d7-fkxhx"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.610224 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdf8h\" (UniqueName: \"kubernetes.io/projected/7afad0f9-c29c-40e6-8605-1df67a505a82-kube-api-access-qdf8h\") pod \"dnsmasq-dns-86545856d7-fkxhx\" (UID: \"7afad0f9-c29c-40e6-8605-1df67a505a82\") " pod="openstack/dnsmasq-dns-86545856d7-fkxhx"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.610258 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7afad0f9-c29c-40e6-8605-1df67a505a82-dns-svc\") pod \"dnsmasq-dns-86545856d7-fkxhx\" (UID: \"7afad0f9-c29c-40e6-8605-1df67a505a82\") " pod="openstack/dnsmasq-dns-86545856d7-fkxhx"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.711103 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afad0f9-c29c-40e6-8605-1df67a505a82-config\") pod \"dnsmasq-dns-86545856d7-fkxhx\" (UID: \"7afad0f9-c29c-40e6-8605-1df67a505a82\") " pod="openstack/dnsmasq-dns-86545856d7-fkxhx"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.711159 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdf8h\" (UniqueName: \"kubernetes.io/projected/7afad0f9-c29c-40e6-8605-1df67a505a82-kube-api-access-qdf8h\") pod \"dnsmasq-dns-86545856d7-fkxhx\" (UID: \"7afad0f9-c29c-40e6-8605-1df67a505a82\") " pod="openstack/dnsmasq-dns-86545856d7-fkxhx"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.711200 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7afad0f9-c29c-40e6-8605-1df67a505a82-dns-svc\") pod \"dnsmasq-dns-86545856d7-fkxhx\" (UID: \"7afad0f9-c29c-40e6-8605-1df67a505a82\") " pod="openstack/dnsmasq-dns-86545856d7-fkxhx"
Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.712185 4632 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7afad0f9-c29c-40e6-8605-1df67a505a82-dns-svc\") pod \"dnsmasq-dns-86545856d7-fkxhx\" (UID: \"7afad0f9-c29c-40e6-8605-1df67a505a82\") " pod="openstack/dnsmasq-dns-86545856d7-fkxhx" Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.712477 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afad0f9-c29c-40e6-8605-1df67a505a82-config\") pod \"dnsmasq-dns-86545856d7-fkxhx\" (UID: \"7afad0f9-c29c-40e6-8605-1df67a505a82\") " pod="openstack/dnsmasq-dns-86545856d7-fkxhx" Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.747858 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdf8h\" (UniqueName: \"kubernetes.io/projected/7afad0f9-c29c-40e6-8605-1df67a505a82-kube-api-access-qdf8h\") pod \"dnsmasq-dns-86545856d7-fkxhx\" (UID: \"7afad0f9-c29c-40e6-8605-1df67a505a82\") " pod="openstack/dnsmasq-dns-86545856d7-fkxhx" Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.910529 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86545856d7-fkxhx" Mar 13 10:23:09 crc kubenswrapper[4632]: I0313 10:23:09.936868 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dcf85566c-59l8m"] Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.289620 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.292208 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.294041 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.294202 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.296823 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.297065 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.301175 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.301513 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.301718 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-m5r4h" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.303846 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.321748 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.321800 4632 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/159c6cee-c82b-4725-82d6-dbd27216f53c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.321825 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.321876 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.321909 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.321930 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.321980 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.322305 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/159c6cee-c82b-4725-82d6-dbd27216f53c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.322331 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8hmx\" (UniqueName: \"kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-kube-api-access-k8hmx\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.322354 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 
10:23:10.322379 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.424147 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.424219 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.424249 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.424283 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.424361 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/159c6cee-c82b-4725-82d6-dbd27216f53c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.424382 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8hmx\" (UniqueName: \"kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-kube-api-access-k8hmx\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.424405 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.424431 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.424485 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.424511 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/159c6cee-c82b-4725-82d6-dbd27216f53c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.424559 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.424860 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.426312 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.426631 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.426804 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.424880 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.427883 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.434248 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86545856d7-fkxhx"] Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.434986 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/159c6cee-c82b-4725-82d6-dbd27216f53c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.448190 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/159c6cee-c82b-4725-82d6-dbd27216f53c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.449847 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.452955 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8hmx\" (UniqueName: \"kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-kube-api-access-k8hmx\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.464724 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.475678 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.632528 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.695846 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.701229 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.705531 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x424t" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.705657 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.707981 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.706003 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.706132 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.706175 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.706258 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.706296 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.832861 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86545856d7-fkxhx" event={"ID":"7afad0f9-c29c-40e6-8605-1df67a505a82","Type":"ContainerStarted","Data":"99dcdc47d7e36bada4f3fc23414bc9bbd494a02612477871943ec84558819c97"} Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.835319 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dcf85566c-59l8m" event={"ID":"ea62b75b-fe31-433d-9ff1-a7333aacb383","Type":"ContainerStarted","Data":"8c2e1684ce51b6e904615df8d377b14786ec45f198df25a296fb0708834a1826"} Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.839025 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.839066 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.839140 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kfgh\" (UniqueName: \"kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-kube-api-access-6kfgh\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.839713 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " 
pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.839737 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/211718f0-f29c-457b-bc2b-487bb76d4801-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.839795 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.839849 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.839905 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-config-data\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.839982 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.840031 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-server-conf\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.840060 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/211718f0-f29c-457b-bc2b-487bb76d4801-pod-info\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.940876 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.940930 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.940969 4632 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6kfgh\" (UniqueName: \"kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-kube-api-access-6kfgh\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.941078 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.941107 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/211718f0-f29c-457b-bc2b-487bb76d4801-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.941147 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.941173 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.941197 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-config-data\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.941226 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.941441 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-server-conf\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.941458 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/211718f0-f29c-457b-bc2b-487bb76d4801-pod-info\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.941698 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " 
pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.942069 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.945499 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-config-data\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.945528 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.945732 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.946092 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/211718f0-f29c-457b-bc2b-487bb76d4801-pod-info\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.946695 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-server-conf\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.949118 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.950976 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.952788 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/211718f0-f29c-457b-bc2b-487bb76d4801-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.967486 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kfgh\" (UniqueName: \"kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-kube-api-access-6kfgh\") pod 
\"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:10 crc kubenswrapper[4632]: I0313 10:23:10.991895 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " pod="openstack/rabbitmq-server-0" Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.063241 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.502955 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 10:23:11 crc kubenswrapper[4632]: W0313 10:23:11.700009 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod159c6cee_c82b_4725_82d6_dbd27216f53c.slice/crio-06613fdc2799f04ea62de7d5a6995bb48161830d28a55edb1ede1542c640e10e WatchSource:0}: Error finding container 06613fdc2799f04ea62de7d5a6995bb48161830d28a55edb1ede1542c640e10e: Status 404 returned error can't find the container with id 06613fdc2799f04ea62de7d5a6995bb48161830d28a55edb1ede1542c640e10e Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.847279 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.852599 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.859442 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.860140 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-j7sp7" Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.864850 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.865183 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.870122 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"159c6cee-c82b-4725-82d6-dbd27216f53c","Type":"ContainerStarted","Data":"06613fdc2799f04ea62de7d5a6995bb48161830d28a55edb1ede1542c640e10e"} Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.892190 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.902713 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.965482 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cb2f546-c8c5-4ec9-aba8-d3782431de10-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.965550 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/2cb2f546-c8c5-4ec9-aba8-d3782431de10-kolla-config\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.965578 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2cb2f546-c8c5-4ec9-aba8-d3782431de10-config-data-default\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.965602 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs454\" (UniqueName: \"kubernetes.io/projected/2cb2f546-c8c5-4ec9-aba8-d3782431de10-kube-api-access-qs454\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.965657 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2cb2f546-c8c5-4ec9-aba8-d3782431de10-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.965691 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cb2f546-c8c5-4ec9-aba8-d3782431de10-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.965877 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb2f546-c8c5-4ec9-aba8-d3782431de10-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:11 crc kubenswrapper[4632]: I0313 10:23:11.965976 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.016738 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.077817 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cb2f546-c8c5-4ec9-aba8-d3782431de10-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.077873 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2cb2f546-c8c5-4ec9-aba8-d3782431de10-kolla-config\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.077898 4632 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2cb2f546-c8c5-4ec9-aba8-d3782431de10-config-data-default\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.077924 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs454\" (UniqueName: \"kubernetes.io/projected/2cb2f546-c8c5-4ec9-aba8-d3782431de10-kube-api-access-qs454\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.078001 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2cb2f546-c8c5-4ec9-aba8-d3782431de10-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.078040 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cb2f546-c8c5-4ec9-aba8-d3782431de10-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.078063 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb2f546-c8c5-4ec9-aba8-d3782431de10-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.078093 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.078892 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2cb2f546-c8c5-4ec9-aba8-d3782431de10-kolla-config\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.078988 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.080286 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cb2f546-c8c5-4ec9-aba8-d3782431de10-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.080616 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2cb2f546-c8c5-4ec9-aba8-d3782431de10-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " 
pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.081470 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2cb2f546-c8c5-4ec9-aba8-d3782431de10-config-data-default\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.134423 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb2f546-c8c5-4ec9-aba8-d3782431de10-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.134783 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cb2f546-c8c5-4ec9-aba8-d3782431de10-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.180342 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs454\" (UniqueName: \"kubernetes.io/projected/2cb2f546-c8c5-4ec9-aba8-d3782431de10-kube-api-access-qs454\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.191790 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"2cb2f546-c8c5-4ec9-aba8-d3782431de10\") " pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.476713 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 13 10:23:12 crc kubenswrapper[4632]: I0313 10:23:12.900973 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"211718f0-f29c-457b-bc2b-487bb76d4801","Type":"ContainerStarted","Data":"fd0dcad1534e2c23d238622a824c4e32c97444e16220054d2406cb0e89183756"} Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.056655 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.058205 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.072188 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.078898 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.079187 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-rqbfd" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.079374 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.079601 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.089593 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.228876 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1761ca69-46fd-4375-af60-22b3e77c19a2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.229300 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65kkf\" (UniqueName: \"kubernetes.io/projected/1761ca69-46fd-4375-af60-22b3e77c19a2-kube-api-access-65kkf\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.229329 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1761ca69-46fd-4375-af60-22b3e77c19a2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.229432 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1761ca69-46fd-4375-af60-22b3e77c19a2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.229477 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1761ca69-46fd-4375-af60-22b3e77c19a2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.229502 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1761ca69-46fd-4375-af60-22b3e77c19a2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc 
kubenswrapper[4632]: I0313 10:23:13.229543 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1761ca69-46fd-4375-af60-22b3e77c19a2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.229595 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.331782 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1761ca69-46fd-4375-af60-22b3e77c19a2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.331896 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.331961 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1761ca69-46fd-4375-af60-22b3e77c19a2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.331996 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65kkf\" (UniqueName: \"kubernetes.io/projected/1761ca69-46fd-4375-af60-22b3e77c19a2-kube-api-access-65kkf\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.332017 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1761ca69-46fd-4375-af60-22b3e77c19a2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.332092 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1761ca69-46fd-4375-af60-22b3e77c19a2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.332131 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1761ca69-46fd-4375-af60-22b3e77c19a2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.332160 4632 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1761ca69-46fd-4375-af60-22b3e77c19a2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.332244 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.332765 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1761ca69-46fd-4375-af60-22b3e77c19a2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.333318 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1761ca69-46fd-4375-af60-22b3e77c19a2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.333643 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1761ca69-46fd-4375-af60-22b3e77c19a2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.333977 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1761ca69-46fd-4375-af60-22b3e77c19a2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.355366 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1761ca69-46fd-4375-af60-22b3e77c19a2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.358136 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1761ca69-46fd-4375-af60-22b3e77c19a2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.369799 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65kkf\" (UniqueName: \"kubernetes.io/projected/1761ca69-46fd-4375-af60-22b3e77c19a2-kube-api-access-65kkf\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.421090 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.424047 4632 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.435487 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.435681 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vcv78" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.435785 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.458183 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.466212 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1761ca69-46fd-4375-af60-22b3e77c19a2\") " pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.538001 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9100748-6b15-4ccf-b961-aab1135f08d1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d9100748-6b15-4ccf-b961-aab1135f08d1\") " pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.538054 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9100748-6b15-4ccf-b961-aab1135f08d1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d9100748-6b15-4ccf-b961-aab1135f08d1\") " pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.538086 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9100748-6b15-4ccf-b961-aab1135f08d1-config-data\") pod \"memcached-0\" (UID: \"d9100748-6b15-4ccf-b961-aab1135f08d1\") " pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.538140 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvr2h\" (UniqueName: \"kubernetes.io/projected/d9100748-6b15-4ccf-b961-aab1135f08d1-kube-api-access-pvr2h\") pod \"memcached-0\" (UID: \"d9100748-6b15-4ccf-b961-aab1135f08d1\") " pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.538179 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9100748-6b15-4ccf-b961-aab1135f08d1-kolla-config\") pod \"memcached-0\" (UID: \"d9100748-6b15-4ccf-b961-aab1135f08d1\") " pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.639660 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9100748-6b15-4ccf-b961-aab1135f08d1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d9100748-6b15-4ccf-b961-aab1135f08d1\") " pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.640249 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d9100748-6b15-4ccf-b961-aab1135f08d1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d9100748-6b15-4ccf-b961-aab1135f08d1\") " pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.640296 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9100748-6b15-4ccf-b961-aab1135f08d1-config-data\") pod \"memcached-0\" (UID: \"d9100748-6b15-4ccf-b961-aab1135f08d1\") " pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.640464 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvr2h\" (UniqueName: \"kubernetes.io/projected/d9100748-6b15-4ccf-b961-aab1135f08d1-kube-api-access-pvr2h\") pod \"memcached-0\" (UID: \"d9100748-6b15-4ccf-b961-aab1135f08d1\") " pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.640500 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9100748-6b15-4ccf-b961-aab1135f08d1-kolla-config\") pod \"memcached-0\" (UID: \"d9100748-6b15-4ccf-b961-aab1135f08d1\") " pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.641555 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9100748-6b15-4ccf-b961-aab1135f08d1-config-data\") pod \"memcached-0\" (UID: \"d9100748-6b15-4ccf-b961-aab1135f08d1\") " pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.642691 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9100748-6b15-4ccf-b961-aab1135f08d1-kolla-config\") pod \"memcached-0\" (UID: \"d9100748-6b15-4ccf-b961-aab1135f08d1\") " pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.648529 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9100748-6b15-4ccf-b961-aab1135f08d1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d9100748-6b15-4ccf-b961-aab1135f08d1\") " pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.660479 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9100748-6b15-4ccf-b961-aab1135f08d1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d9100748-6b15-4ccf-b961-aab1135f08d1\") " pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.665028 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvr2h\" (UniqueName: \"kubernetes.io/projected/d9100748-6b15-4ccf-b961-aab1135f08d1-kube-api-access-pvr2h\") pod \"memcached-0\" (UID: \"d9100748-6b15-4ccf-b961-aab1135f08d1\") " pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.732334 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.795957 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 13 10:23:13 crc kubenswrapper[4632]: I0313 10:23:13.925241 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2cb2f546-c8c5-4ec9-aba8-d3782431de10","Type":"ContainerStarted","Data":"d523bed9fc8debb17e0b795ccb0624f1e9b1ee50c6bf87e25c1af5e36718ba64"} Mar 13 10:23:14 crc kubenswrapper[4632]: I0313 10:23:14.501200 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 13 10:23:14 crc kubenswrapper[4632]: W0313 10:23:14.591303 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1761ca69_46fd_4375_af60_22b3e77c19a2.slice/crio-6eda8f9cca63eda6a98a0d70fe660cd76972356b76ae16d0745a8444a016580f WatchSource:0}: Error finding container 6eda8f9cca63eda6a98a0d70fe660cd76972356b76ae16d0745a8444a016580f: Status 404 returned error can't find the container with id 6eda8f9cca63eda6a98a0d70fe660cd76972356b76ae16d0745a8444a016580f Mar 13 10:23:14 crc kubenswrapper[4632]: I0313 10:23:14.640522 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 13 10:23:14 crc kubenswrapper[4632]: I0313 10:23:14.989962 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1761ca69-46fd-4375-af60-22b3e77c19a2","Type":"ContainerStarted","Data":"6eda8f9cca63eda6a98a0d70fe660cd76972356b76ae16d0745a8444a016580f"} Mar 13 10:23:15 crc kubenswrapper[4632]: I0313 10:23:15.014115 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d9100748-6b15-4ccf-b961-aab1135f08d1","Type":"ContainerStarted","Data":"1cc9e1cc023d82b95b1d6c1c240824e68c8964912c3afae22861ac002424add2"} Mar 13 10:23:16 crc kubenswrapper[4632]: I0313 10:23:16.074851 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 10:23:16 crc kubenswrapper[4632]: I0313 10:23:16.076189 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 13 10:23:16 crc kubenswrapper[4632]: I0313 10:23:16.084971 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 10:23:16 crc kubenswrapper[4632]: I0313 10:23:16.088931 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-nrwtz" Mar 13 10:23:16 crc kubenswrapper[4632]: I0313 10:23:16.206488 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfstv\" (UniqueName: \"kubernetes.io/projected/21ce0311-ff05-4626-9663-a373ae31eb56-kube-api-access-hfstv\") pod \"kube-state-metrics-0\" (UID: \"21ce0311-ff05-4626-9663-a373ae31eb56\") " pod="openstack/kube-state-metrics-0" Mar 13 10:23:16 crc kubenswrapper[4632]: I0313 10:23:16.308406 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfstv\" (UniqueName: \"kubernetes.io/projected/21ce0311-ff05-4626-9663-a373ae31eb56-kube-api-access-hfstv\") pod \"kube-state-metrics-0\" (UID: \"21ce0311-ff05-4626-9663-a373ae31eb56\") " pod="openstack/kube-state-metrics-0" Mar 13 10:23:16 crc kubenswrapper[4632]: I0313 10:23:16.353850 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfstv\" (UniqueName: \"kubernetes.io/projected/21ce0311-ff05-4626-9663-a373ae31eb56-kube-api-access-hfstv\") pod \"kube-state-metrics-0\" (UID: \"21ce0311-ff05-4626-9663-a373ae31eb56\") " pod="openstack/kube-state-metrics-0" Mar 13 10:23:16 crc kubenswrapper[4632]: I0313 10:23:16.447109 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 13 10:23:16 crc kubenswrapper[4632]: I0313 10:23:16.918239 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.133192 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9kd7r"] Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.134795 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.141065 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-sgqtl" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.141289 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.141411 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.206661 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9kd7r"] Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.245585 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-c5xnp"] Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.250405 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.265042 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-c5xnp"] Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.274895 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eab798dd-482a-4c66-983b-908966cd1f94-var-run\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.274964 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/eab798dd-482a-4c66-983b-908966cd1f94-ovn-controller-tls-certs\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.274987 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlzh8\" (UniqueName: \"kubernetes.io/projected/eab798dd-482a-4c66-983b-908966cd1f94-kube-api-access-tlzh8\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.275013 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eab798dd-482a-4c66-983b-908966cd1f94-scripts\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.275034 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab798dd-482a-4c66-983b-908966cd1f94-combined-ca-bundle\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.277182 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eab798dd-482a-4c66-983b-908966cd1f94-var-log-ovn\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.279054 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eab798dd-482a-4c66-983b-908966cd1f94-var-run-ovn\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.386391 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgmn6\" (UniqueName: \"kubernetes.io/projected/d2677b19-4860-497e-a473-6d52d4901d8c-kube-api-access-kgmn6\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.386440 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/eab798dd-482a-4c66-983b-908966cd1f94-var-run\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.386468 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2677b19-4860-497e-a473-6d52d4901d8c-var-run\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.386490 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/eab798dd-482a-4c66-983b-908966cd1f94-ovn-controller-tls-certs\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.386506 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlzh8\" (UniqueName: \"kubernetes.io/projected/eab798dd-482a-4c66-983b-908966cd1f94-kube-api-access-tlzh8\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.386527 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d2677b19-4860-497e-a473-6d52d4901d8c-var-lib\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.386544 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eab798dd-482a-4c66-983b-908966cd1f94-scripts\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.386567 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab798dd-482a-4c66-983b-908966cd1f94-combined-ca-bundle\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.386581 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d2677b19-4860-497e-a473-6d52d4901d8c-etc-ovs\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.386602 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2677b19-4860-497e-a473-6d52d4901d8c-scripts\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.386627 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eab798dd-482a-4c66-983b-908966cd1f94-var-log-ovn\") pod \"ovn-controller-9kd7r\" (UID: 
\"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.386673 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eab798dd-482a-4c66-983b-908966cd1f94-var-run-ovn\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.386696 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d2677b19-4860-497e-a473-6d52d4901d8c-var-log\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.387274 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eab798dd-482a-4c66-983b-908966cd1f94-var-run\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.389177 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eab798dd-482a-4c66-983b-908966cd1f94-var-log-ovn\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.389279 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eab798dd-482a-4c66-983b-908966cd1f94-var-run-ovn\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.394618 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eab798dd-482a-4c66-983b-908966cd1f94-scripts\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.403579 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/eab798dd-482a-4c66-983b-908966cd1f94-ovn-controller-tls-certs\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.404958 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab798dd-482a-4c66-983b-908966cd1f94-combined-ca-bundle\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.417211 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlzh8\" (UniqueName: \"kubernetes.io/projected/eab798dd-482a-4c66-983b-908966cd1f94-kube-api-access-tlzh8\") pod \"ovn-controller-9kd7r\" (UID: \"eab798dd-482a-4c66-983b-908966cd1f94\") " pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.488660 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/d2677b19-4860-497e-a473-6d52d4901d8c-var-log\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.489020 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d2677b19-4860-497e-a473-6d52d4901d8c-var-log\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.489072 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgmn6\" (UniqueName: \"kubernetes.io/projected/d2677b19-4860-497e-a473-6d52d4901d8c-kube-api-access-kgmn6\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.489125 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2677b19-4860-497e-a473-6d52d4901d8c-var-run\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.489166 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d2677b19-4860-497e-a473-6d52d4901d8c-var-lib\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.489319 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2677b19-4860-497e-a473-6d52d4901d8c-var-run\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.489455 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d2677b19-4860-497e-a473-6d52d4901d8c-var-lib\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.489199 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d2677b19-4860-497e-a473-6d52d4901d8c-etc-ovs\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.489529 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2677b19-4860-497e-a473-6d52d4901d8c-scripts\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.489623 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d2677b19-4860-497e-a473-6d52d4901d8c-etc-ovs\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 
10:23:19.491216 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9kd7r" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.492911 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2677b19-4860-497e-a473-6d52d4901d8c-scripts\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.524838 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgmn6\" (UniqueName: \"kubernetes.io/projected/d2677b19-4860-497e-a473-6d52d4901d8c-kube-api-access-kgmn6\") pod \"ovn-controller-ovs-c5xnp\" (UID: \"d2677b19-4860-497e-a473-6d52d4901d8c\") " pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:19 crc kubenswrapper[4632]: I0313 10:23:19.583155 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.033461 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.034984 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.040759 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.042327 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-ws7xr" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.043402 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.043483 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.043995 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.067422 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.099307 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-config\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.099402 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.099496 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc 
kubenswrapper[4632]: I0313 10:23:20.099551 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.099581 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.099618 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.099673 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mqrx\" (UniqueName: \"kubernetes.io/projected/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-kube-api-access-9mqrx\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.099730 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.201957 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.202029 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-config\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.202061 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.202086 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.202130 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.202150 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.202178 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.202224 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mqrx\" (UniqueName: \"kubernetes.io/projected/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-kube-api-access-9mqrx\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.204695 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-config\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.206547 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.212453 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.212677 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.214086 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.214638 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.215025 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.234882 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mqrx\" (UniqueName: \"kubernetes.io/projected/4ee148f1-cc66-4aa0-b603-c8a70f3554f5-kube-api-access-9mqrx\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.238386 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4ee148f1-cc66-4aa0-b603-c8a70f3554f5\") " pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:20 crc kubenswrapper[4632]: I0313 10:23:20.371859 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.673118 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.674922 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.677294 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.677687 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.678631 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.680118 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-k56t5" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.699461 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.776964 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5529a725-48d8-4a60-91cd-775a4b520c20-config\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.777043 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5529a725-48d8-4a60-91cd-775a4b520c20-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.777183 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5529a725-48d8-4a60-91cd-775a4b520c20-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.777258 4632 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5529a725-48d8-4a60-91cd-775a4b520c20-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.777292 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5529a725-48d8-4a60-91cd-775a4b520c20-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.777321 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5529a725-48d8-4a60-91cd-775a4b520c20-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.777350 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.777384 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpcj7\" (UniqueName: \"kubernetes.io/projected/5529a725-48d8-4a60-91cd-775a4b520c20-kube-api-access-gpcj7\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.879302 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.879373 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpcj7\" (UniqueName: \"kubernetes.io/projected/5529a725-48d8-4a60-91cd-775a4b520c20-kube-api-access-gpcj7\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.879402 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5529a725-48d8-4a60-91cd-775a4b520c20-config\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.879449 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5529a725-48d8-4a60-91cd-775a4b520c20-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.879520 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5529a725-48d8-4a60-91cd-775a4b520c20-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" 
(UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.879569 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5529a725-48d8-4a60-91cd-775a4b520c20-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.879610 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5529a725-48d8-4a60-91cd-775a4b520c20-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.879631 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5529a725-48d8-4a60-91cd-775a4b520c20-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.881074 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5529a725-48d8-4a60-91cd-775a4b520c20-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.881356 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5529a725-48d8-4a60-91cd-775a4b520c20-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.881475 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5529a725-48d8-4a60-91cd-775a4b520c20-config\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.881728 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.886618 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5529a725-48d8-4a60-91cd-775a4b520c20-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.887352 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5529a725-48d8-4a60-91cd-775a4b520c20-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.889960 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5529a725-48d8-4a60-91cd-775a4b520c20-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.898524 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpcj7\" (UniqueName: \"kubernetes.io/projected/5529a725-48d8-4a60-91cd-775a4b520c20-kube-api-access-gpcj7\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:22 crc kubenswrapper[4632]: I0313 10:23:22.908794 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5529a725-48d8-4a60-91cd-775a4b520c20\") " pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:23 crc kubenswrapper[4632]: I0313 10:23:23.014082 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 13 10:23:24 crc kubenswrapper[4632]: W0313 10:23:24.789249 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21ce0311_ff05_4626_9663_a373ae31eb56.slice/crio-96915bb97645358a6555ca60c9308596dd68c9b71a65da098dd5679653d9f202 WatchSource:0}: Error finding container 96915bb97645358a6555ca60c9308596dd68c9b71a65da098dd5679653d9f202: Status 404 returned error can't find the container with id 96915bb97645358a6555ca60c9308596dd68c9b71a65da098dd5679653d9f202 Mar 13 10:23:25 crc kubenswrapper[4632]: I0313 10:23:25.299866 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"21ce0311-ff05-4626-9663-a373ae31eb56","Type":"ContainerStarted","Data":"96915bb97645358a6555ca60c9308596dd68c9b71a65da098dd5679653d9f202"} Mar 13 10:23:35 crc kubenswrapper[4632]: E0313 10:23:35.985716 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:23:35 crc kubenswrapper[4632]: E0313 10:23:35.987242 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:23:35 crc kubenswrapper[4632]: E0313 10:23:35.987374 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mmhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-bd9cf7445-frlmw_openstack(da131e9a-8968-4569-a970-3aa95b2a830b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:23:35 crc kubenswrapper[4632]: E0313 10:23:35.988535 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-bd9cf7445-frlmw" podUID="da131e9a-8968-4569-a970-3aa95b2a830b" Mar 13 10:23:37 crc kubenswrapper[4632]: E0313 10:23:37.172257 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:23:37 crc kubenswrapper[4632]: E0313 10:23:37.172529 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:23:37 crc kubenswrapper[4632]: E0313 10:23:37.172675 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:e43235cb19da04699a53f42b6a75afe9,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf 
; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8hmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(159c6cee-c82b-4725-82d6-dbd27216f53c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:23:37 crc kubenswrapper[4632]: E0313 10:23:37.174348 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="159c6cee-c82b-4725-82d6-dbd27216f53c" Mar 13 10:23:37 crc kubenswrapper[4632]: E0313 10:23:37.420424 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="159c6cee-c82b-4725-82d6-dbd27216f53c" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.066018 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.066359 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.071151 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:e43235cb19da04699a53f42b6a75afe9,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs454,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(2cb2f546-c8c5-4ec9-aba8-d3782431de10): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.073696 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.099358 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.099415 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.099534 4632 kuberuntime_manager.go:1274] "Unhandled 
Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-868r4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-dcf85566c-59l8m_openstack(ea62b75b-fe31-433d-9ff1-a7333aacb383): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.100728 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-dcf85566c-59l8m" podUID="ea62b75b-fe31-433d-9ff1-a7333aacb383" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.437670 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/dnsmasq-dns-dcf85566c-59l8m" podUID="ea62b75b-fe31-433d-9ff1-a7333aacb383" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.438374 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" Mar 13 10:23:39 crc kubenswrapper[4632]: 
E0313 10:23:39.811727 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-memcached:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.811788 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-memcached:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.812296 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-memcached:e43235cb19da04699a53f42b6a75afe9,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n697h566h558h688hd9h59bh5f7h64bhc9h85h94hbdh557h9bh56ch5b4h5fdhd9h5cbh9ch56h64dh55fh694h5cbh54dh65h66ch65ch87h688h59cq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvr2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(d9100748-6b15-4ccf-b961-aab1135f08d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.814200 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="d9100748-6b15-4ccf-b961-aab1135f08d1" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.857233 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.857307 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.857451 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:e43235cb19da04699a53f42b6a75afe9,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kfgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(211718f0-f29c-457b-bc2b-487bb76d4801): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.858754 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="211718f0-f29c-457b-bc2b-487bb76d4801" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.911304 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.911564 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.911669 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c54n6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7b95c5c449-th4fn_openstack(dcb4500d-7a53-4091-b3af-394eb0f49130): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:23:39 crc kubenswrapper[4632]: E0313 10:23:39.913023 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7b95c5c449-th4fn" podUID="dcb4500d-7a53-4091-b3af-394eb0f49130" Mar 13 10:23:39 crc kubenswrapper[4632]: I0313 10:23:39.980398 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bd9cf7445-frlmw" Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.101365 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mmhg\" (UniqueName: \"kubernetes.io/projected/da131e9a-8968-4569-a970-3aa95b2a830b-kube-api-access-4mmhg\") pod \"da131e9a-8968-4569-a970-3aa95b2a830b\" (UID: \"da131e9a-8968-4569-a970-3aa95b2a830b\") " Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.101466 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da131e9a-8968-4569-a970-3aa95b2a830b-dns-svc\") pod \"da131e9a-8968-4569-a970-3aa95b2a830b\" (UID: \"da131e9a-8968-4569-a970-3aa95b2a830b\") " Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.101559 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da131e9a-8968-4569-a970-3aa95b2a830b-config\") pod \"da131e9a-8968-4569-a970-3aa95b2a830b\" (UID: \"da131e9a-8968-4569-a970-3aa95b2a830b\") " Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.103057 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da131e9a-8968-4569-a970-3aa95b2a830b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da131e9a-8968-4569-a970-3aa95b2a830b" (UID: "da131e9a-8968-4569-a970-3aa95b2a830b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.103257 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da131e9a-8968-4569-a970-3aa95b2a830b-config" (OuterVolumeSpecName: "config") pod "da131e9a-8968-4569-a970-3aa95b2a830b" (UID: "da131e9a-8968-4569-a970-3aa95b2a830b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.112187 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da131e9a-8968-4569-a970-3aa95b2a830b-kube-api-access-4mmhg" (OuterVolumeSpecName: "kube-api-access-4mmhg") pod "da131e9a-8968-4569-a970-3aa95b2a830b" (UID: "da131e9a-8968-4569-a970-3aa95b2a830b"). InnerVolumeSpecName "kube-api-access-4mmhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.206469 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mmhg\" (UniqueName: \"kubernetes.io/projected/da131e9a-8968-4569-a970-3aa95b2a830b-kube-api-access-4mmhg\") on node \"crc\" DevicePath \"\"" Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.206498 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da131e9a-8968-4569-a970-3aa95b2a830b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.206509 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da131e9a-8968-4569-a970-3aa95b2a830b-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.450123 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bd9cf7445-frlmw" Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.451226 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bd9cf7445-frlmw" event={"ID":"da131e9a-8968-4569-a970-3aa95b2a830b","Type":"ContainerDied","Data":"55fcbfaa372a43ed729e32593f5c37c0b24f03c99a93bb7fe4dbc16e4c07e1b0"} Mar 13 10:23:40 crc kubenswrapper[4632]: E0313 10:23:40.457033 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/rabbitmq-server-0" podUID="211718f0-f29c-457b-bc2b-487bb76d4801" Mar 13 10:23:40 crc kubenswrapper[4632]: E0313 10:23:40.457042 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-memcached:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/memcached-0" podUID="d9100748-6b15-4ccf-b961-aab1135f08d1" Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.461206 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.461269 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.489173 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9kd7r"] Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.632157 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bd9cf7445-frlmw"] Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.646779 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bd9cf7445-frlmw"] Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.724665 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 13 10:23:40 crc kubenswrapper[4632]: W0313 10:23:40.886779 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ee148f1_cc66_4aa0_b603_c8a70f3554f5.slice/crio-2fdfb53a72f64e053dd71b35245437359614c655b2ac8c95f14508dcc5053577 WatchSource:0}: Error finding container 2fdfb53a72f64e053dd71b35245437359614c655b2ac8c95f14508dcc5053577: Status 404 returned error can't find the container with id 2fdfb53a72f64e053dd71b35245437359614c655b2ac8c95f14508dcc5053577 Mar 13 10:23:40 crc kubenswrapper[4632]: W0313 10:23:40.891812 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeab798dd_482a_4c66_983b_908966cd1f94.slice/crio-83331c2a9a3d5614a4cf49eb307ac4e28725cc613d758576db2b95cc3de7bb84 WatchSource:0}: Error finding container 83331c2a9a3d5614a4cf49eb307ac4e28725cc613d758576db2b95cc3de7bb84: Status 404 
returned error can't find the container with id 83331c2a9a3d5614a4cf49eb307ac4e28725cc613d758576db2b95cc3de7bb84 Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.904761 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b95c5c449-th4fn" Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.944420 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcb4500d-7a53-4091-b3af-394eb0f49130-config\") pod \"dcb4500d-7a53-4091-b3af-394eb0f49130\" (UID: \"dcb4500d-7a53-4091-b3af-394eb0f49130\") " Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.944667 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c54n6\" (UniqueName: \"kubernetes.io/projected/dcb4500d-7a53-4091-b3af-394eb0f49130-kube-api-access-c54n6\") pod \"dcb4500d-7a53-4091-b3af-394eb0f49130\" (UID: \"dcb4500d-7a53-4091-b3af-394eb0f49130\") " Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.946314 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcb4500d-7a53-4091-b3af-394eb0f49130-config" (OuterVolumeSpecName: "config") pod "dcb4500d-7a53-4091-b3af-394eb0f49130" (UID: "dcb4500d-7a53-4091-b3af-394eb0f49130"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:23:40 crc kubenswrapper[4632]: I0313 10:23:40.949976 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcb4500d-7a53-4091-b3af-394eb0f49130-kube-api-access-c54n6" (OuterVolumeSpecName: "kube-api-access-c54n6") pod "dcb4500d-7a53-4091-b3af-394eb0f49130" (UID: "dcb4500d-7a53-4091-b3af-394eb0f49130"). InnerVolumeSpecName "kube-api-access-c54n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:23:41 crc kubenswrapper[4632]: I0313 10:23:41.046050 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c54n6\" (UniqueName: \"kubernetes.io/projected/dcb4500d-7a53-4091-b3af-394eb0f49130-kube-api-access-c54n6\") on node \"crc\" DevicePath \"\"" Mar 13 10:23:41 crc kubenswrapper[4632]: I0313 10:23:41.046084 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcb4500d-7a53-4091-b3af-394eb0f49130-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:23:41 crc kubenswrapper[4632]: I0313 10:23:41.474489 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1761ca69-46fd-4375-af60-22b3e77c19a2","Type":"ContainerStarted","Data":"38fabecc1af392ae2500911da4ea37a128aafd15dfb148ef01201cadf3cfb5e8"} Mar 13 10:23:41 crc kubenswrapper[4632]: I0313 10:23:41.477618 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4ee148f1-cc66-4aa0-b603-c8a70f3554f5","Type":"ContainerStarted","Data":"2fdfb53a72f64e053dd71b35245437359614c655b2ac8c95f14508dcc5053577"} Mar 13 10:23:41 crc kubenswrapper[4632]: I0313 10:23:41.480006 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b95c5c449-th4fn" event={"ID":"dcb4500d-7a53-4091-b3af-394eb0f49130","Type":"ContainerDied","Data":"795a9fce6e217788c208128d366941b235a94e772dfaab8892b09ba717741562"} Mar 13 10:23:41 crc kubenswrapper[4632]: I0313 10:23:41.480046 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b95c5c449-th4fn" Mar 13 10:23:41 crc kubenswrapper[4632]: I0313 10:23:41.482721 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9kd7r" event={"ID":"eab798dd-482a-4c66-983b-908966cd1f94","Type":"ContainerStarted","Data":"83331c2a9a3d5614a4cf49eb307ac4e28725cc613d758576db2b95cc3de7bb84"} Mar 13 10:23:41 crc kubenswrapper[4632]: I0313 10:23:41.572768 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-c5xnp"] Mar 13 10:23:41 crc kubenswrapper[4632]: I0313 10:23:41.590106 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b95c5c449-th4fn"] Mar 13 10:23:41 crc kubenswrapper[4632]: I0313 10:23:41.607582 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b95c5c449-th4fn"] Mar 13 10:23:41 crc kubenswrapper[4632]: I0313 10:23:41.722174 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 13 10:23:41 crc kubenswrapper[4632]: W0313 10:23:41.765094 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5529a725_48d8_4a60_91cd_775a4b520c20.slice/crio-b6992dcf3b7705827a5dd88fc12a83e39cc06f6db59b355289a33a8b11826133 WatchSource:0}: Error finding container b6992dcf3b7705827a5dd88fc12a83e39cc06f6db59b355289a33a8b11826133: Status 404 returned error can't find the container with id b6992dcf3b7705827a5dd88fc12a83e39cc06f6db59b355289a33a8b11826133 Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.056786 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da131e9a-8968-4569-a970-3aa95b2a830b" path="/var/lib/kubelet/pods/da131e9a-8968-4569-a970-3aa95b2a830b/volumes" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.057402 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcb4500d-7a53-4091-b3af-394eb0f49130" path="/var/lib/kubelet/pods/dcb4500d-7a53-4091-b3af-394eb0f49130/volumes" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.371999 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-798sf"] Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.373288 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.380590 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.387108 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-798sf"] Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.509525 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9246fc4f-3716-4a8b-9854-52137cf04e9a-ovs-rundir\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.509826 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9246fc4f-3716-4a8b-9854-52137cf04e9a-ovn-rundir\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.509983 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9246fc4f-3716-4a8b-9854-52137cf04e9a-config\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.510050 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jstns\" (UniqueName: \"kubernetes.io/projected/9246fc4f-3716-4a8b-9854-52137cf04e9a-kube-api-access-jstns\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.510075 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9246fc4f-3716-4a8b-9854-52137cf04e9a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.510160 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9246fc4f-3716-4a8b-9854-52137cf04e9a-combined-ca-bundle\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.559719 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5529a725-48d8-4a60-91cd-775a4b520c20","Type":"ContainerStarted","Data":"b6992dcf3b7705827a5dd88fc12a83e39cc06f6db59b355289a33a8b11826133"} Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.562227 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c5xnp" event={"ID":"d2677b19-4860-497e-a473-6d52d4901d8c","Type":"ContainerStarted","Data":"0a2653ff7d965ff6780efa77e0c16b65c2fa0483036255f3de2799ed24f8a7a4"} Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.612062 4632 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9246fc4f-3716-4a8b-9854-52137cf04e9a-ovn-rundir\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.612163 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9246fc4f-3716-4a8b-9854-52137cf04e9a-config\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.612208 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jstns\" (UniqueName: \"kubernetes.io/projected/9246fc4f-3716-4a8b-9854-52137cf04e9a-kube-api-access-jstns\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.612237 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9246fc4f-3716-4a8b-9854-52137cf04e9a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.612298 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9246fc4f-3716-4a8b-9854-52137cf04e9a-combined-ca-bundle\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.612325 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9246fc4f-3716-4a8b-9854-52137cf04e9a-ovs-rundir\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.612714 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9246fc4f-3716-4a8b-9854-52137cf04e9a-ovs-rundir\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.612796 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9246fc4f-3716-4a8b-9854-52137cf04e9a-ovn-rundir\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.613817 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9246fc4f-3716-4a8b-9854-52137cf04e9a-config\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.630280 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9246fc4f-3716-4a8b-9854-52137cf04e9a-combined-ca-bundle\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.631592 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9246fc4f-3716-4a8b-9854-52137cf04e9a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.654470 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jstns\" (UniqueName: \"kubernetes.io/projected/9246fc4f-3716-4a8b-9854-52137cf04e9a-kube-api-access-jstns\") pod \"ovn-controller-metrics-798sf\" (UID: \"9246fc4f-3716-4a8b-9854-52137cf04e9a\") " pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.693265 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86545856d7-fkxhx"] Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.719925 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7df696cbbf-4tc7r"] Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.721535 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.737798 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.739410 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-config\") pod \"dnsmasq-dns-7df696cbbf-4tc7r\" (UID: \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\") " pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.739473 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7phzq\" (UniqueName: \"kubernetes.io/projected/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-kube-api-access-7phzq\") pod \"dnsmasq-dns-7df696cbbf-4tc7r\" (UID: \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\") " pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.739693 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-dns-svc\") pod \"dnsmasq-dns-7df696cbbf-4tc7r\" (UID: \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\") " pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.739720 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-ovsdbserver-nb\") pod \"dnsmasq-dns-7df696cbbf-4tc7r\" (UID: \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\") " pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.754866 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7df696cbbf-4tc7r"] Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.841916 4632 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-dns-svc\") pod \"dnsmasq-dns-7df696cbbf-4tc7r\" (UID: \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\") " pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.841990 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-ovsdbserver-nb\") pod \"dnsmasq-dns-7df696cbbf-4tc7r\" (UID: \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\") " pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.842070 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-config\") pod \"dnsmasq-dns-7df696cbbf-4tc7r\" (UID: \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\") " pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.842102 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7phzq\" (UniqueName: \"kubernetes.io/projected/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-kube-api-access-7phzq\") pod \"dnsmasq-dns-7df696cbbf-4tc7r\" (UID: \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\") " pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.843721 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-ovsdbserver-nb\") pod \"dnsmasq-dns-7df696cbbf-4tc7r\" (UID: \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\") " pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.844443 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-config\") pod \"dnsmasq-dns-7df696cbbf-4tc7r\" (UID: \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\") " pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.844512 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-798sf" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.848215 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-dns-svc\") pod \"dnsmasq-dns-7df696cbbf-4tc7r\" (UID: \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\") " pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" Mar 13 10:23:42 crc kubenswrapper[4632]: I0313 10:23:42.874015 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7phzq\" (UniqueName: \"kubernetes.io/projected/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-kube-api-access-7phzq\") pod \"dnsmasq-dns-7df696cbbf-4tc7r\" (UID: \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\") " pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.086826 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dcf85566c-59l8m"] Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.133699 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.229487 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79dfb79747-jv5m6"] Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.231319 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79dfb79747-jv5m6" Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.240065 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.254222 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79dfb79747-jv5m6"] Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.357175 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-config\") pod \"dnsmasq-dns-79dfb79747-jv5m6\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") " pod="openstack/dnsmasq-dns-79dfb79747-jv5m6" Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.357849 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-ovsdbserver-sb\") pod \"dnsmasq-dns-79dfb79747-jv5m6\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") " pod="openstack/dnsmasq-dns-79dfb79747-jv5m6" Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.358055 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-dns-svc\") pod \"dnsmasq-dns-79dfb79747-jv5m6\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") " pod="openstack/dnsmasq-dns-79dfb79747-jv5m6" Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.358212 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-ovsdbserver-nb\") pod \"dnsmasq-dns-79dfb79747-jv5m6\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") " pod="openstack/dnsmasq-dns-79dfb79747-jv5m6" Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.358299 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pv67\" (UniqueName: \"kubernetes.io/projected/a362448e-8daa-4bf4-958f-f3ca135be228-kube-api-access-5pv67\") pod \"dnsmasq-dns-79dfb79747-jv5m6\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") " pod="openstack/dnsmasq-dns-79dfb79747-jv5m6" Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.461648 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-config\") pod \"dnsmasq-dns-79dfb79747-jv5m6\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") " pod="openstack/dnsmasq-dns-79dfb79747-jv5m6" Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.461703 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-ovsdbserver-sb\") pod \"dnsmasq-dns-79dfb79747-jv5m6\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") " pod="openstack/dnsmasq-dns-79dfb79747-jv5m6" Mar 13 10:23:43 crc 
kubenswrapper[4632]: I0313 10:23:43.461776 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-dns-svc\") pod \"dnsmasq-dns-79dfb79747-jv5m6\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") " pod="openstack/dnsmasq-dns-79dfb79747-jv5m6"
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.461808 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-ovsdbserver-nb\") pod \"dnsmasq-dns-79dfb79747-jv5m6\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") " pod="openstack/dnsmasq-dns-79dfb79747-jv5m6"
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.461835 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pv67\" (UniqueName: \"kubernetes.io/projected/a362448e-8daa-4bf4-958f-f3ca135be228-kube-api-access-5pv67\") pod \"dnsmasq-dns-79dfb79747-jv5m6\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") " pod="openstack/dnsmasq-dns-79dfb79747-jv5m6"
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.463177 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-dns-svc\") pod \"dnsmasq-dns-79dfb79747-jv5m6\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") " pod="openstack/dnsmasq-dns-79dfb79747-jv5m6"
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.463275 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-ovsdbserver-sb\") pod \"dnsmasq-dns-79dfb79747-jv5m6\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") " pod="openstack/dnsmasq-dns-79dfb79747-jv5m6"
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.463753 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-ovsdbserver-nb\") pod \"dnsmasq-dns-79dfb79747-jv5m6\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") " pod="openstack/dnsmasq-dns-79dfb79747-jv5m6"
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.464013 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-config\") pod \"dnsmasq-dns-79dfb79747-jv5m6\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") " pod="openstack/dnsmasq-dns-79dfb79747-jv5m6"
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.511607 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pv67\" (UniqueName: \"kubernetes.io/projected/a362448e-8daa-4bf4-958f-f3ca135be228-kube-api-access-5pv67\") pod \"dnsmasq-dns-79dfb79747-jv5m6\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") " pod="openstack/dnsmasq-dns-79dfb79747-jv5m6"
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.610191 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"21ce0311-ff05-4626-9663-a373ae31eb56","Type":"ContainerStarted","Data":"1a16adc836bd27406a32b6f9e9672d40ce7e70e9caf414e0a9334fb34a8ec7ab"}
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.610499 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.617294 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79dfb79747-jv5m6"
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.621466 4632 generic.go:334] "Generic (PLEG): container finished" podID="7afad0f9-c29c-40e6-8605-1df67a505a82" containerID="3265e919f0b844473557cc692c4854b5d839d0a7869d7f08dc68a3ae52955bc2" exitCode=0
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.621495 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86545856d7-fkxhx" event={"ID":"7afad0f9-c29c-40e6-8605-1df67a505a82","Type":"ContainerDied","Data":"3265e919f0b844473557cc692c4854b5d839d0a7869d7f08dc68a3ae52955bc2"}
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.654633 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.923342576 podStartE2EDuration="27.654611941s" podCreationTimestamp="2026-03-13 10:23:16 +0000 UTC" firstStartedPulling="2026-03-13 10:23:24.795462405 +0000 UTC m=+1178.817992538" lastFinishedPulling="2026-03-13 10:23:42.52673177 +0000 UTC m=+1196.549261903" observedRunningTime="2026-03-13 10:23:43.634721873 +0000 UTC m=+1197.657252016" watchObservedRunningTime="2026-03-13 10:23:43.654611941 +0000 UTC m=+1197.677142074"
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.794193 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dcf85566c-59l8m"
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.878844 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea62b75b-fe31-433d-9ff1-a7333aacb383-dns-svc\") pod \"ea62b75b-fe31-433d-9ff1-a7333aacb383\" (UID: \"ea62b75b-fe31-433d-9ff1-a7333aacb383\") "
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.884430 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea62b75b-fe31-433d-9ff1-a7333aacb383-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea62b75b-fe31-433d-9ff1-a7333aacb383" (UID: "ea62b75b-fe31-433d-9ff1-a7333aacb383"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.886521 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-798sf"]
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.892716 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-868r4\" (UniqueName: \"kubernetes.io/projected/ea62b75b-fe31-433d-9ff1-a7333aacb383-kube-api-access-868r4\") pod \"ea62b75b-fe31-433d-9ff1-a7333aacb383\" (UID: \"ea62b75b-fe31-433d-9ff1-a7333aacb383\") "
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.892772 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea62b75b-fe31-433d-9ff1-a7333aacb383-config\") pod \"ea62b75b-fe31-433d-9ff1-a7333aacb383\" (UID: \"ea62b75b-fe31-433d-9ff1-a7333aacb383\") "
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.893354 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea62b75b-fe31-433d-9ff1-a7333aacb383-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.893783 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea62b75b-fe31-433d-9ff1-a7333aacb383-config" (OuterVolumeSpecName: "config") pod "ea62b75b-fe31-433d-9ff1-a7333aacb383" (UID: "ea62b75b-fe31-433d-9ff1-a7333aacb383"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.895007 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7df696cbbf-4tc7r"]
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.911899 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea62b75b-fe31-433d-9ff1-a7333aacb383-kube-api-access-868r4" (OuterVolumeSpecName: "kube-api-access-868r4") pod "ea62b75b-fe31-433d-9ff1-a7333aacb383" (UID: "ea62b75b-fe31-433d-9ff1-a7333aacb383"). InnerVolumeSpecName "kube-api-access-868r4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.995483 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-868r4\" (UniqueName: \"kubernetes.io/projected/ea62b75b-fe31-433d-9ff1-a7333aacb383-kube-api-access-868r4\") on node \"crc\" DevicePath \"\""
Mar 13 10:23:43 crc kubenswrapper[4632]: I0313 10:23:43.995521 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea62b75b-fe31-433d-9ff1-a7333aacb383-config\") on node \"crc\" DevicePath \"\""
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.061782 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86545856d7-fkxhx"
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.096110 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afad0f9-c29c-40e6-8605-1df67a505a82-config\") pod \"7afad0f9-c29c-40e6-8605-1df67a505a82\" (UID: \"7afad0f9-c29c-40e6-8605-1df67a505a82\") "
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.096314 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7afad0f9-c29c-40e6-8605-1df67a505a82-dns-svc\") pod \"7afad0f9-c29c-40e6-8605-1df67a505a82\" (UID: \"7afad0f9-c29c-40e6-8605-1df67a505a82\") "
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.096404 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdf8h\" (UniqueName: \"kubernetes.io/projected/7afad0f9-c29c-40e6-8605-1df67a505a82-kube-api-access-qdf8h\") pod \"7afad0f9-c29c-40e6-8605-1df67a505a82\" (UID: \"7afad0f9-c29c-40e6-8605-1df67a505a82\") "
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.101694 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afad0f9-c29c-40e6-8605-1df67a505a82-kube-api-access-qdf8h" (OuterVolumeSpecName: "kube-api-access-qdf8h") pod "7afad0f9-c29c-40e6-8605-1df67a505a82" (UID: "7afad0f9-c29c-40e6-8605-1df67a505a82"). InnerVolumeSpecName "kube-api-access-qdf8h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.119299 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afad0f9-c29c-40e6-8605-1df67a505a82-config" (OuterVolumeSpecName: "config") pod "7afad0f9-c29c-40e6-8605-1df67a505a82" (UID: "7afad0f9-c29c-40e6-8605-1df67a505a82"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.120743 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afad0f9-c29c-40e6-8605-1df67a505a82-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7afad0f9-c29c-40e6-8605-1df67a505a82" (UID: "7afad0f9-c29c-40e6-8605-1df67a505a82"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.198875 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7afad0f9-c29c-40e6-8605-1df67a505a82-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.198910 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdf8h\" (UniqueName: \"kubernetes.io/projected/7afad0f9-c29c-40e6-8605-1df67a505a82-kube-api-access-qdf8h\") on node \"crc\" DevicePath \"\""
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.198921 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afad0f9-c29c-40e6-8605-1df67a505a82-config\") on node \"crc\" DevicePath \"\""
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.300548 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79dfb79747-jv5m6"]
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.648291 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86545856d7-fkxhx"
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.648293 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86545856d7-fkxhx" event={"ID":"7afad0f9-c29c-40e6-8605-1df67a505a82","Type":"ContainerDied","Data":"99dcdc47d7e36bada4f3fc23414bc9bbd494a02612477871943ec84558819c97"}
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.648822 4632 scope.go:117] "RemoveContainer" containerID="3265e919f0b844473557cc692c4854b5d839d0a7869d7f08dc68a3ae52955bc2"
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.651797 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-798sf" event={"ID":"9246fc4f-3716-4a8b-9854-52137cf04e9a","Type":"ContainerStarted","Data":"baeb1292c403de7ee01a776c7dd912c73407e5f4c64078e55174c0cb15e01bae"}
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.654311 4632 generic.go:334] "Generic (PLEG): container finished" podID="df11136a-b7d7-4a5a-a0cc-d0ebbc069b34" containerID="2915f2c9d1176c44121e26cd23d0bb33a0c1b7ffaf310d2e0081ce7eb76b0909" exitCode=0
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.654464 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" event={"ID":"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34","Type":"ContainerDied","Data":"2915f2c9d1176c44121e26cd23d0bb33a0c1b7ffaf310d2e0081ce7eb76b0909"}
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.654542 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" event={"ID":"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34","Type":"ContainerStarted","Data":"c5cd8fa4d9fefd8144c63785679d7e91126ba9c71436b6d101b2ee0cc8ea3019"}
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.656038 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dcf85566c-59l8m" event={"ID":"ea62b75b-fe31-433d-9ff1-a7333aacb383","Type":"ContainerDied","Data":"8c2e1684ce51b6e904615df8d377b14786ec45f198df25a296fb0708834a1826"}
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.656139 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dcf85566c-59l8m"
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.661918 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79dfb79747-jv5m6" event={"ID":"a362448e-8daa-4bf4-958f-f3ca135be228","Type":"ContainerStarted","Data":"353c97e7f46060d145aa3be9824f787e63d6eed5891607427c9311023caa0833"}
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.762873 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dcf85566c-59l8m"]
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.766661 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dcf85566c-59l8m"]
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.780752 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86545856d7-fkxhx"]
Mar 13 10:23:44 crc kubenswrapper[4632]: I0313 10:23:44.786174 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86545856d7-fkxhx"]
Mar 13 10:23:45 crc kubenswrapper[4632]: I0313 10:23:45.682599 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" event={"ID":"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34","Type":"ContainerStarted","Data":"e8191df8902be4a4da6f0e247d64bea9567a78aed70ff1b1918dad0f09a75382"}
Mar 13 10:23:45 crc kubenswrapper[4632]: I0313 10:23:45.682917 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r"
Mar 13 10:23:45 crc kubenswrapper[4632]: I0313 10:23:45.687008 4632 generic.go:334] "Generic (PLEG): container finished" podID="a362448e-8daa-4bf4-958f-f3ca135be228" containerID="c9a88952f81b62d419132fe9a18256ffde7daf30602bf205173e43b0963b20c3" exitCode=0
Mar 13 10:23:45 crc kubenswrapper[4632]: I0313 10:23:45.687142 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79dfb79747-jv5m6" event={"ID":"a362448e-8daa-4bf4-958f-f3ca135be228","Type":"ContainerDied","Data":"c9a88952f81b62d419132fe9a18256ffde7daf30602bf205173e43b0963b20c3"}
Mar 13 10:23:45 crc kubenswrapper[4632]: I0313 10:23:45.718346 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" podStartSLOduration=3.718328062 podStartE2EDuration="3.718328062s" podCreationTimestamp="2026-03-13 10:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:23:45.711590857 +0000 UTC m=+1199.734121010" watchObservedRunningTime="2026-03-13 10:23:45.718328062 +0000 UTC m=+1199.740858195"
Mar 13 10:23:46 crc kubenswrapper[4632]: I0313 10:23:46.056630 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afad0f9-c29c-40e6-8605-1df67a505a82" path="/var/lib/kubelet/pods/7afad0f9-c29c-40e6-8605-1df67a505a82/volumes"
Mar 13 10:23:46 crc kubenswrapper[4632]: I0313 10:23:46.057401 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea62b75b-fe31-433d-9ff1-a7333aacb383" path="/var/lib/kubelet/pods/ea62b75b-fe31-433d-9ff1-a7333aacb383/volumes"
Mar 13 10:23:46 crc kubenswrapper[4632]: I0313 10:23:46.698278 4632 generic.go:334] "Generic (PLEG): container finished" podID="1761ca69-46fd-4375-af60-22b3e77c19a2" containerID="38fabecc1af392ae2500911da4ea37a128aafd15dfb148ef01201cadf3cfb5e8" exitCode=0
Mar 13 10:23:46 crc kubenswrapper[4632]: I0313 10:23:46.699366 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1761ca69-46fd-4375-af60-22b3e77c19a2","Type":"ContainerDied","Data":"38fabecc1af392ae2500911da4ea37a128aafd15dfb148ef01201cadf3cfb5e8"}
Mar 13 10:23:49 crc kubenswrapper[4632]: I0313 10:23:49.720794 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79dfb79747-jv5m6" event={"ID":"a362448e-8daa-4bf4-958f-f3ca135be228","Type":"ContainerStarted","Data":"6e1c032b958be8592683422ee06f119be07d42e5fc24c06ebfd10193412b1ccc"}
Mar 13 10:23:49 crc kubenswrapper[4632]: I0313 10:23:49.721399 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79dfb79747-jv5m6"
Mar 13 10:23:49 crc kubenswrapper[4632]: I0313 10:23:49.742486 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79dfb79747-jv5m6" podStartSLOduration=6.742470529 podStartE2EDuration="6.742470529s" podCreationTimestamp="2026-03-13 10:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:23:49.739329452 +0000 UTC m=+1203.761859615" watchObservedRunningTime="2026-03-13 10:23:49.742470529 +0000 UTC m=+1203.765000662"
Mar 13 10:23:53 crc kubenswrapper[4632]: I0313 10:23:53.136012 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r"
Mar 13 10:23:55 crc kubenswrapper[4632]: I0313 10:23:55.772627 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d9100748-6b15-4ccf-b961-aab1135f08d1","Type":"ContainerStarted","Data":"1b263ebf136efa3203186a62eec927f384736d1b2e1990d4a68dbaa9f6638698"}
Mar 13 10:23:55 crc kubenswrapper[4632]: I0313 10:23:55.774039 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Mar 13 10:23:55 crc kubenswrapper[4632]: I0313 10:23:55.787171 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2cb2f546-c8c5-4ec9-aba8-d3782431de10","Type":"ContainerStarted","Data":"aa55e67bc4a499dbbd5e317a3b00bbfa4846877aed563132955d84afe5164371"}
Mar 13 10:23:55 crc kubenswrapper[4632]: I0313 10:23:55.797339 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-798sf" event={"ID":"9246fc4f-3716-4a8b-9854-52137cf04e9a","Type":"ContainerStarted","Data":"c9098f2c3a39988551490caac37c53effbd8dc1e78c733223e6d49d975e2886d"}
Mar 13 10:23:55 crc kubenswrapper[4632]: I0313 10:23:55.810299 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.605021047 podStartE2EDuration="42.810277034s" podCreationTimestamp="2026-03-13 10:23:13 +0000 UTC" firstStartedPulling="2026-03-13 10:23:14.649528326 +0000 UTC m=+1168.672058459" lastFinishedPulling="2026-03-13 10:23:54.854784313 +0000 UTC m=+1208.877314446" observedRunningTime="2026-03-13 10:23:55.792818577 +0000 UTC m=+1209.815348720" watchObservedRunningTime="2026-03-13 10:23:55.810277034 +0000 UTC m=+1209.832807167"
Mar 13 10:23:55 crc kubenswrapper[4632]: I0313 10:23:55.820664 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1761ca69-46fd-4375-af60-22b3e77c19a2","Type":"ContainerStarted","Data":"8d6e3c3bf2cc94b4f346233606a1c5c55a2993e7644d8a78c77dc12972c98a9e"}
Mar 13 10:23:55 crc kubenswrapper[4632]: I0313 10:23:55.837659 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4ee148f1-cc66-4aa0-b603-c8a70f3554f5","Type":"ContainerStarted","Data":"7846dbbc8d612576b942082c32786f4118590cc8dbfca6330420b7f7e10d85e9"}
Mar 13 10:23:55 crc kubenswrapper[4632]: I0313 10:23:55.850560 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5529a725-48d8-4a60-91cd-775a4b520c20","Type":"ContainerStarted","Data":"484b3992ae7deb9ebbf37851b58541e4ffa16a07a4275161ef47376ec6250e84"}
Mar 13 10:23:55 crc kubenswrapper[4632]: I0313 10:23:55.853575 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=18.275293193 podStartE2EDuration="43.853558736s" podCreationTimestamp="2026-03-13 10:23:12 +0000 UTC" firstStartedPulling="2026-03-13 10:23:14.600191815 +0000 UTC m=+1168.622721958" lastFinishedPulling="2026-03-13 10:23:40.178457368 +0000 UTC m=+1194.200987501" observedRunningTime="2026-03-13 10:23:55.849962878 +0000 UTC m=+1209.872493021" watchObservedRunningTime="2026-03-13 10:23:55.853558736 +0000 UTC m=+1209.876088869"
Mar 13 10:23:55 crc kubenswrapper[4632]: I0313 10:23:55.863077 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9kd7r" event={"ID":"eab798dd-482a-4c66-983b-908966cd1f94","Type":"ContainerStarted","Data":"80f6e66214e6fd7a5ba2fbf97142f0099cd2a2a9c54e115d0323772b3f5f702b"}
Mar 13 10:23:55 crc kubenswrapper[4632]: I0313 10:23:55.863178 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-9kd7r"
Mar 13 10:23:55 crc kubenswrapper[4632]: I0313 10:23:55.869408 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c5xnp" event={"ID":"d2677b19-4860-497e-a473-6d52d4901d8c","Type":"ContainerStarted","Data":"11b6da1f5ac161c5fa2ede8304d7459c7fa06380905e5f947dc2d54c0f26b4cf"}
Mar 13 10:23:55 crc kubenswrapper[4632]: I0313 10:23:55.923024 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-798sf" podStartSLOduration=2.98828617 podStartE2EDuration="13.92300546s" podCreationTimestamp="2026-03-13 10:23:42 +0000 UTC" firstStartedPulling="2026-03-13 10:23:43.920092574 +0000 UTC m=+1197.942622707" lastFinishedPulling="2026-03-13 10:23:54.854811864 +0000 UTC m=+1208.877341997" observedRunningTime="2026-03-13 10:23:55.883403148 +0000 UTC m=+1209.905933281" watchObservedRunningTime="2026-03-13 10:23:55.92300546 +0000 UTC m=+1209.945535593"
Mar 13 10:23:55 crc kubenswrapper[4632]: I0313 10:23:55.923283 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9kd7r" podStartSLOduration=22.989840578 podStartE2EDuration="36.923277587s" podCreationTimestamp="2026-03-13 10:23:19 +0000 UTC" firstStartedPulling="2026-03-13 10:23:40.89673919 +0000 UTC m=+1194.919269323" lastFinishedPulling="2026-03-13 10:23:54.830176199 +0000 UTC m=+1208.852706332" observedRunningTime="2026-03-13 10:23:55.919170566 +0000 UTC m=+1209.941700709" watchObservedRunningTime="2026-03-13 10:23:55.923277587 +0000 UTC m=+1209.945807730"
Mar 13 10:23:56 crc kubenswrapper[4632]: I0313 10:23:56.465053 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Mar 13 10:23:56 crc kubenswrapper[4632]: I0313 10:23:56.877546 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5529a725-48d8-4a60-91cd-775a4b520c20","Type":"ContainerStarted","Data":"72cf388b074df1070acb7f0728aa38efe39290a683180db18cbd4deeba546b7f"}
Mar 13 10:23:56 crc kubenswrapper[4632]: I0313 10:23:56.887593 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"159c6cee-c82b-4725-82d6-dbd27216f53c","Type":"ContainerStarted","Data":"d5bd67d741203861cfd1afa23ec3f20fd6236a99625563ac3c10816dbb2a6677"}
Mar 13 10:23:56 crc kubenswrapper[4632]: I0313 10:23:56.900908 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=22.83559854 podStartE2EDuration="35.900892461s" podCreationTimestamp="2026-03-13 10:23:21 +0000 UTC" firstStartedPulling="2026-03-13 10:23:41.778961734 +0000 UTC m=+1195.801491867" lastFinishedPulling="2026-03-13 10:23:54.844255655 +0000 UTC m=+1208.866785788" observedRunningTime="2026-03-13 10:23:56.899235691 +0000 UTC m=+1210.921765834" watchObservedRunningTime="2026-03-13 10:23:56.900892461 +0000 UTC m=+1210.923422594"
Mar 13 10:23:56 crc kubenswrapper[4632]: I0313 10:23:56.907245 4632 generic.go:334] "Generic (PLEG): container finished" podID="d2677b19-4860-497e-a473-6d52d4901d8c" containerID="11b6da1f5ac161c5fa2ede8304d7459c7fa06380905e5f947dc2d54c0f26b4cf" exitCode=0
Mar 13 10:23:56 crc kubenswrapper[4632]: I0313 10:23:56.907338 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c5xnp" event={"ID":"d2677b19-4860-497e-a473-6d52d4901d8c","Type":"ContainerDied","Data":"11b6da1f5ac161c5fa2ede8304d7459c7fa06380905e5f947dc2d54c0f26b4cf"}
Mar 13 10:23:56 crc kubenswrapper[4632]: I0313 10:23:56.913556 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"211718f0-f29c-457b-bc2b-487bb76d4801","Type":"ContainerStarted","Data":"92d546a480b1e583e7b11dc48ab2d570a4a8d7af0616de2352d72ca175520f17"}
Mar 13 10:23:56 crc kubenswrapper[4632]: I0313 10:23:56.918496 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4ee148f1-cc66-4aa0-b603-c8a70f3554f5","Type":"ContainerStarted","Data":"a71593ee348ef6c77356c7339c1905f6ce2c2c09c03e7e4db8c667a05c46e720"}
Mar 13 10:23:57 crc kubenswrapper[4632]: I0313 10:23:57.091736 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=25.153601378 podStartE2EDuration="39.091702342s" podCreationTimestamp="2026-03-13 10:23:18 +0000 UTC" firstStartedPulling="2026-03-13 10:23:40.892078515 +0000 UTC m=+1194.914608648" lastFinishedPulling="2026-03-13 10:23:54.830179479 +0000 UTC m=+1208.852709612" observedRunningTime="2026-03-13 10:23:57.085690275 +0000 UTC m=+1211.108220418" watchObservedRunningTime="2026-03-13 10:23:57.091702342 +0000 UTC m=+1211.114232485"
Mar 13 10:23:57 crc kubenswrapper[4632]: I0313 10:23:57.929965 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c5xnp" event={"ID":"d2677b19-4860-497e-a473-6d52d4901d8c","Type":"ContainerStarted","Data":"ae34fea57a4127d6113c858d3b0966d7368d212d3cd1d556e2036d7d77bdfa0f"}
Mar 13 10:23:57 crc kubenswrapper[4632]: I0313 10:23:57.930324 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c5xnp" event={"ID":"d2677b19-4860-497e-a473-6d52d4901d8c","Type":"ContainerStarted","Data":"808a50dadd2432be26181ff455295e45b780e27c923ac2d92d7852cd6644117a"}
Mar 13 10:23:57 crc kubenswrapper[4632]: I0313 10:23:57.952154 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-c5xnp" podStartSLOduration=26.003174455 podStartE2EDuration="38.952098261s" podCreationTimestamp="2026-03-13 10:23:19 +0000 UTC" firstStartedPulling="2026-03-13 10:23:41.757455526 +0000 UTC m=+1195.779985659" lastFinishedPulling="2026-03-13 10:23:54.706379332 +0000 UTC m=+1208.728909465" observedRunningTime="2026-03-13 10:23:57.950933782 +0000 UTC m=+1211.973463915" watchObservedRunningTime="2026-03-13 10:23:57.952098261 +0000 UTC m=+1211.974628394"
Mar 13 10:23:58 crc kubenswrapper[4632]: I0313 10:23:58.014966 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Mar 13 10:23:58 crc kubenswrapper[4632]: I0313 10:23:58.623154 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79dfb79747-jv5m6"
Mar 13 10:23:58 crc kubenswrapper[4632]: I0313 10:23:58.698598 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7df696cbbf-4tc7r"]
Mar 13 10:23:58 crc kubenswrapper[4632]: I0313 10:23:58.698917 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" podUID="df11136a-b7d7-4a5a-a0cc-d0ebbc069b34" containerName="dnsmasq-dns" containerID="cri-o://e8191df8902be4a4da6f0e247d64bea9567a78aed70ff1b1918dad0f09a75382" gracePeriod=10
Mar 13 10:23:58 crc kubenswrapper[4632]: I0313 10:23:58.944925 4632 generic.go:334] "Generic (PLEG): container finished" podID="df11136a-b7d7-4a5a-a0cc-d0ebbc069b34" containerID="e8191df8902be4a4da6f0e247d64bea9567a78aed70ff1b1918dad0f09a75382" exitCode=0
Mar 13 10:23:58 crc kubenswrapper[4632]: I0313 10:23:58.945622 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" event={"ID":"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34","Type":"ContainerDied","Data":"e8191df8902be4a4da6f0e247d64bea9567a78aed70ff1b1918dad0f09a75382"}
Mar 13 10:23:58 crc kubenswrapper[4632]: I0313 10:23:58.946395 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-c5xnp"
Mar 13 10:23:58 crc kubenswrapper[4632]: I0313 10:23:58.946482 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-c5xnp"
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.016030 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.067043 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.211833 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r"
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.295519 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-config\") pod \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\" (UID: \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\") "
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.295637 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-ovsdbserver-nb\") pod \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\" (UID: \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\") "
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.295696 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7phzq\" (UniqueName: \"kubernetes.io/projected/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-kube-api-access-7phzq\") pod \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\" (UID: \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\") "
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.295715 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-dns-svc\") pod \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\" (UID: \"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34\") "
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.309354 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-kube-api-access-7phzq" (OuterVolumeSpecName: "kube-api-access-7phzq") pod "df11136a-b7d7-4a5a-a0cc-d0ebbc069b34" (UID: "df11136a-b7d7-4a5a-a0cc-d0ebbc069b34"). InnerVolumeSpecName "kube-api-access-7phzq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.334477 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "df11136a-b7d7-4a5a-a0cc-d0ebbc069b34" (UID: "df11136a-b7d7-4a5a-a0cc-d0ebbc069b34"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.349516 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df11136a-b7d7-4a5a-a0cc-d0ebbc069b34" (UID: "df11136a-b7d7-4a5a-a0cc-d0ebbc069b34"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.352598 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-config" (OuterVolumeSpecName: "config") pod "df11136a-b7d7-4a5a-a0cc-d0ebbc069b34" (UID: "df11136a-b7d7-4a5a-a0cc-d0ebbc069b34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.373545 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.397815 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-config\") on node \"crc\" DevicePath \"\""
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.397853 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.397871 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7phzq\" (UniqueName: \"kubernetes.io/projected/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-kube-api-access-7phzq\") on node \"crc\" DevicePath \"\""
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.397883 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.416133 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.955112 4632 generic.go:334] "Generic (PLEG): container finished" podID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerID="aa55e67bc4a499dbbd5e317a3b00bbfa4846877aed563132955d84afe5164371" exitCode=0
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.955221 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2cb2f546-c8c5-4ec9-aba8-d3782431de10","Type":"ContainerDied","Data":"aa55e67bc4a499dbbd5e317a3b00bbfa4846877aed563132955d84afe5164371"}
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.958483 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r" event={"ID":"df11136a-b7d7-4a5a-a0cc-d0ebbc069b34","Type":"ContainerDied","Data":"c5cd8fa4d9fefd8144c63785679d7e91126ba9c71436b6d101b2ee0cc8ea3019"}
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.958550 4632 scope.go:117] "RemoveContainer" containerID="e8191df8902be4a4da6f0e247d64bea9567a78aed70ff1b1918dad0f09a75382"
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.958674 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7df696cbbf-4tc7r"
Mar 13 10:23:59 crc kubenswrapper[4632]: I0313 10:23:59.959183 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.097101 4632 scope.go:117] "RemoveContainer" containerID="2915f2c9d1176c44121e26cd23d0bb33a0c1b7ffaf310d2e0081ce7eb76b0909"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.097790 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7df696cbbf-4tc7r"]
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.110493 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7df696cbbf-4tc7r"]
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.161728 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556624-tnl4c"]
Mar 13 10:24:00 crc kubenswrapper[4632]: E0313 10:24:00.162189 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df11136a-b7d7-4a5a-a0cc-d0ebbc069b34" containerName="init"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.162211 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="df11136a-b7d7-4a5a-a0cc-d0ebbc069b34" containerName="init"
Mar 13 10:24:00 crc kubenswrapper[4632]: E0313 10:24:00.162243 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df11136a-b7d7-4a5a-a0cc-d0ebbc069b34" containerName="dnsmasq-dns"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.162251 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="df11136a-b7d7-4a5a-a0cc-d0ebbc069b34" containerName="dnsmasq-dns"
Mar 13 10:24:00 crc kubenswrapper[4632]: E0313 10:24:00.162274 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7afad0f9-c29c-40e6-8605-1df67a505a82" containerName="init"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.162282 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="7afad0f9-c29c-40e6-8605-1df67a505a82" containerName="init"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.162478 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="7afad0f9-c29c-40e6-8605-1df67a505a82" containerName="init"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.162500 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="df11136a-b7d7-4a5a-a0cc-d0ebbc069b34" containerName="dnsmasq-dns"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.163174 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556624-tnl4c"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.166345 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.166564 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.167134 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.202184 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556624-tnl4c"]
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.318008 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52gwq\" (UniqueName: \"kubernetes.io/projected/5b2374d5-8d19-4837-8d91-79df0e65fc1f-kube-api-access-52gwq\") pod \"auto-csr-approver-29556624-tnl4c\" (UID: \"5b2374d5-8d19-4837-8d91-79df0e65fc1f\") " pod="openshift-infra/auto-csr-approver-29556624-tnl4c"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.419059 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.419518 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52gwq\" (UniqueName: \"kubernetes.io/projected/5b2374d5-8d19-4837-8d91-79df0e65fc1f-kube-api-access-52gwq\") pod \"auto-csr-approver-29556624-tnl4c\" (UID: \"5b2374d5-8d19-4837-8d91-79df0e65fc1f\") " pod="openshift-infra/auto-csr-approver-29556624-tnl4c"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.444258 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52gwq\" (UniqueName: \"kubernetes.io/projected/5b2374d5-8d19-4837-8d91-79df0e65fc1f-kube-api-access-52gwq\") pod \"auto-csr-approver-29556624-tnl4c\" (UID: \"5b2374d5-8d19-4837-8d91-79df0e65fc1f\") " pod="openshift-infra/auto-csr-approver-29556624-tnl4c"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.493283 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556624-tnl4c"
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.938490 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556624-tnl4c"]
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.972058 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2cb2f546-c8c5-4ec9-aba8-d3782431de10","Type":"ContainerStarted","Data":"cdf295326a62a01129c4f9b5741f57b3d80d103e3c5a6bf64f5cc1951034264f"}
Mar 13 10:24:00 crc kubenswrapper[4632]: I0313 10:24:00.974459 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556624-tnl4c" event={"ID":"5b2374d5-8d19-4837-8d91-79df0e65fc1f","Type":"ContainerStarted","Data":"c1fe076f0e8c292e4cfd8e3aab67afc57f3617492b69247c418c8f293cfd491f"}
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.007434 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371985.847382 podStartE2EDuration="51.007393999s" podCreationTimestamp="2026-03-13 10:23:10 +0000 UTC" firstStartedPulling="2026-03-13 10:23:13.194067668 +0000 UTC m=+1167.216597801" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:24:00.999040724 +0000 UTC m=+1215.021570847" watchObservedRunningTime="2026-03-13 10:24:01.007393999 +0000 UTC m=+1215.029924132"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.027830 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.329575 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.331125 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.336576 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.337654 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.338015 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.338205 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-6rcwx"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.364471 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.443850 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a169306-9d47-41ae-8667-1efb89c43d82-config\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.444077 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a169306-9d47-41ae-8667-1efb89c43d82-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.444117 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9a169306-9d47-41ae-8667-1efb89c43d82-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.444142 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znvdq\" (UniqueName: \"kubernetes.io/projected/9a169306-9d47-41ae-8667-1efb89c43d82-kube-api-access-znvdq\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.444342 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a169306-9d47-41ae-8667-1efb89c43d82-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.444398 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a169306-9d47-41ae-8667-1efb89c43d82-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.444432 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a169306-9d47-41ae-8667-1efb89c43d82-scripts\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.546189 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a169306-9d47-41ae-8667-1efb89c43d82-config\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.546273 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a169306-9d47-41ae-8667-1efb89c43d82-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.546311 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9a169306-9d47-41ae-8667-1efb89c43d82-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.546343 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znvdq\" (UniqueName: \"kubernetes.io/projected/9a169306-9d47-41ae-8667-1efb89c43d82-kube-api-access-znvdq\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.546867 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a169306-9d47-41ae-8667-1efb89c43d82-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.547010 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9a169306-9d47-41ae-8667-1efb89c43d82-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.547284 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a169306-9d47-41ae-8667-1efb89c43d82-config\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.547551 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a169306-9d47-41ae-8667-1efb89c43d82-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.547581 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a169306-9d47-41ae-8667-1efb89c43d82-scripts\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.548537 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a169306-9d47-41ae-8667-1efb89c43d82-scripts\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.552329 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a169306-9d47-41ae-8667-1efb89c43d82-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.556238 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a169306-9d47-41ae-8667-1efb89c43d82-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.564884 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a169306-9d47-41ae-8667-1efb89c43d82-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.575911 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znvdq\" (UniqueName: \"kubernetes.io/projected/9a169306-9d47-41ae-8667-1efb89c43d82-kube-api-access-znvdq\") pod \"ovn-northd-0\" (UID: \"9a169306-9d47-41ae-8667-1efb89c43d82\") " pod="openstack/ovn-northd-0"
Mar 13 10:24:01 crc kubenswrapper[4632]: I0313 10:24:01.650929 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Mar 13 10:24:02 crc kubenswrapper[4632]: I0313 10:24:02.055670 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df11136a-b7d7-4a5a-a0cc-d0ebbc069b34" path="/var/lib/kubelet/pods/df11136a-b7d7-4a5a-a0cc-d0ebbc069b34/volumes"
Mar 13 10:24:02 crc kubenswrapper[4632]: I0313 10:24:02.231863 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Mar 13 10:24:02 crc kubenswrapper[4632]: W0313 10:24:02.232372 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a169306_9d47_41ae_8667_1efb89c43d82.slice/crio-1761d9710c7607181cd1ab28cdd44ec208e3e218f4f1cf1283e7637db0199561 WatchSource:0}: Error finding container 1761d9710c7607181cd1ab28cdd44ec208e3e218f4f1cf1283e7637db0199561: Status 404 returned error can't find the container with id 1761d9710c7607181cd1ab28cdd44ec208e3e218f4f1cf1283e7637db0199561
Mar 13 10:24:02 crc kubenswrapper[4632]: I0313 10:24:02.477240 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Mar 13 10:24:02 crc kubenswrapper[4632]: I0313 10:24:02.477845 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Mar 13 10:24:03 crc kubenswrapper[4632]: I0313 10:24:03.016891 4632 generic.go:334] "Generic (PLEG): container finished" podID="5b2374d5-8d19-4837-8d91-79df0e65fc1f" containerID="6257821be47ec7e5943095f3b1d29a6e6fd0a1190515cb74642f7cb762d806d1" exitCode=0
Mar 13 10:24:03 crc kubenswrapper[4632]: I0313 10:24:03.016983 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556624-tnl4c" event={"ID":"5b2374d5-8d19-4837-8d91-79df0e65fc1f","Type":"ContainerDied","Data":"6257821be47ec7e5943095f3b1d29a6e6fd0a1190515cb74642f7cb762d806d1"}
Mar 13 10:24:03 crc kubenswrapper[4632]: I0313 10:24:03.018630 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9a169306-9d47-41ae-8667-1efb89c43d82","Type":"ContainerStarted","Data":"1761d9710c7607181cd1ab28cdd44ec208e3e218f4f1cf1283e7637db0199561"}
Mar 13 10:24:03 crc kubenswrapper[4632]: I0313 10:24:03.734849 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Mar 13 10:24:03 crc kubenswrapper[4632]: I0313 10:24:03.735854 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Mar 13 10:24:03 crc kubenswrapper[4632]: I0313 10:24:03.799145 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Mar 13 10:24:03 crc kubenswrapper[4632]: I0313 10:24:03.846817 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Mar 13 10:24:04 crc kubenswrapper[4632]: I0313 10:24:04.043608 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9a169306-9d47-41ae-8667-1efb89c43d82","Type":"ContainerStarted","Data":"31e7d5bdbc4af9e778f12b375dde292167a77cf66ef9c81c12db83b3be575a88"}
Mar 13 10:24:04 crc kubenswrapper[4632]: I0313 10:24:04.043709 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9a169306-9d47-41ae-8667-1efb89c43d82","Type":"ContainerStarted","Data":"37e8b2b92022c84ec3213a38ce8f6e0c39ef2af8b108102efff96d21fcecf14a"}
Mar 13 10:24:04 crc kubenswrapper[4632]: I0313 10:24:04.056026 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Mar 13 10:24:04 crc kubenswrapper[4632]: I0313 10:24:04.080887 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.297183455 podStartE2EDuration="3.080863182s" podCreationTimestamp="2026-03-13 10:24:01 +0000 UTC" firstStartedPulling="2026-03-13 10:24:02.234511104 +0000 UTC m=+1216.257041237" lastFinishedPulling="2026-03-13 10:24:03.018190831 +0000 UTC m=+1217.040720964" observedRunningTime="2026-03-13 10:24:04.077281175 +0000 UTC m=+1218.099811328" watchObservedRunningTime="2026-03-13 10:24:04.080863182 +0000 UTC m=+1218.103393325"
Mar 13 10:24:04 crc kubenswrapper[4632]: I0313 10:24:04.174136 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Mar 13 10:24:04 crc kubenswrapper[4632]: I0313 10:24:04.441162 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556624-tnl4c"
Mar 13 10:24:04 crc kubenswrapper[4632]: I0313 10:24:04.636295 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52gwq\" (UniqueName: \"kubernetes.io/projected/5b2374d5-8d19-4837-8d91-79df0e65fc1f-kube-api-access-52gwq\") pod \"5b2374d5-8d19-4837-8d91-79df0e65fc1f\" (UID: \"5b2374d5-8d19-4837-8d91-79df0e65fc1f\") "
Mar 13 10:24:04 crc kubenswrapper[4632]: I0313 10:24:04.644774 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b2374d5-8d19-4837-8d91-79df0e65fc1f-kube-api-access-52gwq" (OuterVolumeSpecName: "kube-api-access-52gwq") pod "5b2374d5-8d19-4837-8d91-79df0e65fc1f" (UID: "5b2374d5-8d19-4837-8d91-79df0e65fc1f"). InnerVolumeSpecName "kube-api-access-52gwq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:24:04 crc kubenswrapper[4632]: I0313 10:24:04.738491 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52gwq\" (UniqueName: \"kubernetes.io/projected/5b2374d5-8d19-4837-8d91-79df0e65fc1f-kube-api-access-52gwq\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:05 crc kubenswrapper[4632]: I0313 10:24:05.047061 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556624-tnl4c"
Mar 13 10:24:05 crc kubenswrapper[4632]: I0313 10:24:05.048032 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556624-tnl4c" event={"ID":"5b2374d5-8d19-4837-8d91-79df0e65fc1f","Type":"ContainerDied","Data":"c1fe076f0e8c292e4cfd8e3aab67afc57f3617492b69247c418c8f293cfd491f"}
Mar 13 10:24:05 crc kubenswrapper[4632]: I0313 10:24:05.048056 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1fe076f0e8c292e4cfd8e3aab67afc57f3617492b69247c418c8f293cfd491f"
Mar 13 10:24:05 crc kubenswrapper[4632]: I0313 10:24:05.090205 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Mar 13 10:24:05 crc kubenswrapper[4632]: I0313 10:24:05.184522 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Mar 13 10:24:05 crc kubenswrapper[4632]: I0313 10:24:05.510000 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556618-ngbmk"]
Mar 13 10:24:05 crc kubenswrapper[4632]: I0313 10:24:05.518552 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556618-ngbmk"]
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.056367 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b93f1106-edf9-4cde-9acb-e265d8e07191" path="/var/lib/kubelet/pods/b93f1106-edf9-4cde-9acb-e265d8e07191/volumes"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.396013 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b59dbc87f-7zwrj"]
Mar 13 10:24:06 crc kubenswrapper[4632]: E0313 10:24:06.396333 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b2374d5-8d19-4837-8d91-79df0e65fc1f" containerName="oc"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.396349 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b2374d5-8d19-4837-8d91-79df0e65fc1f" containerName="oc"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.396520 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b2374d5-8d19-4837-8d91-79df0e65fc1f" containerName="oc"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.397398 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.430503 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b59dbc87f-7zwrj"]
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.572045 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-ovsdbserver-nb\") pod \"dnsmasq-dns-5b59dbc87f-7zwrj\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.572113 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-dns-svc\") pod \"dnsmasq-dns-5b59dbc87f-7zwrj\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.572156 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r95hn\" (UniqueName: \"kubernetes.io/projected/7203640d-964c-4c28-8cc2-6a7ae27cdab3-kube-api-access-r95hn\") pod \"dnsmasq-dns-5b59dbc87f-7zwrj\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.572207 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-config\") pod \"dnsmasq-dns-5b59dbc87f-7zwrj\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.572240 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-ovsdbserver-sb\") pod \"dnsmasq-dns-5b59dbc87f-7zwrj\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.673441 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-config\") pod \"dnsmasq-dns-5b59dbc87f-7zwrj\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.673527 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-ovsdbserver-sb\") pod \"dnsmasq-dns-5b59dbc87f-7zwrj\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.673568 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-ovsdbserver-nb\") pod \"dnsmasq-dns-5b59dbc87f-7zwrj\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.673639 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-dns-svc\") pod \"dnsmasq-dns-5b59dbc87f-7zwrj\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.673707 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r95hn\" (UniqueName: \"kubernetes.io/projected/7203640d-964c-4c28-8cc2-6a7ae27cdab3-kube-api-access-r95hn\") pod \"dnsmasq-dns-5b59dbc87f-7zwrj\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.674733 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-config\") pod \"dnsmasq-dns-5b59dbc87f-7zwrj\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.674782 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-ovsdbserver-nb\") pod \"dnsmasq-dns-5b59dbc87f-7zwrj\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.675643 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-ovsdbserver-sb\") pod \"dnsmasq-dns-5b59dbc87f-7zwrj\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.675658 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-dns-svc\") pod \"dnsmasq-dns-5b59dbc87f-7zwrj\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.694265 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r95hn\" (UniqueName: \"kubernetes.io/projected/7203640d-964c-4c28-8cc2-6a7ae27cdab3-kube-api-access-r95hn\") pod \"dnsmasq-dns-5b59dbc87f-7zwrj\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:06 crc kubenswrapper[4632]: I0313 10:24:06.718051 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.231457 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b59dbc87f-7zwrj"]
Mar 13 10:24:07 crc kubenswrapper[4632]: W0313 10:24:07.238344 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7203640d_964c_4c28_8cc2_6a7ae27cdab3.slice/crio-501cdcd9d1f38a4b8b82ad7d76e2b6765f391cfadd65ee750e8254d78d76de84 WatchSource:0}: Error finding container 501cdcd9d1f38a4b8b82ad7d76e2b6765f391cfadd65ee750e8254d78d76de84: Status 404 returned error can't find the container with id 501cdcd9d1f38a4b8b82ad7d76e2b6765f391cfadd65ee750e8254d78d76de84
Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.662370 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.672409 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.678342 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.678340 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.678572 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-nmkc7"
Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.678817 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.734214 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.793422 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-lock\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0"
Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.793655 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0"
Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.793735 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65nwv\" (UniqueName: \"kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-kube-api-access-65nwv\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0"
Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.793843 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0"
Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.793873 4632 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.794103 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-cache\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.895333 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-cache\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.895683 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-lock\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.895821 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.895918 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65nwv\" (UniqueName: \"kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-kube-api-access-65nwv\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.895992 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-lock\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.895863 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-cache\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:07 crc kubenswrapper[4632]: E0313 10:24:07.896153 4632 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 10:24:07 crc kubenswrapper[4632]: E0313 10:24:07.896250 4632 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 10:24:07 crc kubenswrapper[4632]: E0313 10:24:07.896350 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift podName:e37b3d77-de2e-4be9-9984-550d4ba0f2f0 nodeName:}" failed. No retries permitted until 2026-03-13 10:24:08.39631989 +0000 UTC m=+1222.418850023 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift") pod "swift-storage-0" (UID: "e37b3d77-de2e-4be9-9984-550d4ba0f2f0") : configmap "swift-ring-files" not found Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.896676 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.896730 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.897434 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.904029 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.921770 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65nwv\" (UniqueName: \"kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-kube-api-access-65nwv\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:07 crc kubenswrapper[4632]: I0313 10:24:07.926979 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:08 crc kubenswrapper[4632]: I0313 10:24:08.071596 4632 generic.go:334] "Generic (PLEG): container finished" podID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" containerID="f74cf11731f4fec2422112ef6bdd1e43cc133692a8363ef95d5bb5847ffb0fd1" exitCode=0 Mar 13 10:24:08 crc kubenswrapper[4632]: I0313 10:24:08.071660 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" event={"ID":"7203640d-964c-4c28-8cc2-6a7ae27cdab3","Type":"ContainerDied","Data":"f74cf11731f4fec2422112ef6bdd1e43cc133692a8363ef95d5bb5847ffb0fd1"} Mar 13 10:24:08 crc kubenswrapper[4632]: I0313 10:24:08.071692 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" event={"ID":"7203640d-964c-4c28-8cc2-6a7ae27cdab3","Type":"ContainerStarted","Data":"501cdcd9d1f38a4b8b82ad7d76e2b6765f391cfadd65ee750e8254d78d76de84"} Mar 13 10:24:08 crc kubenswrapper[4632]: I0313 10:24:08.404209 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift\") pod \"swift-storage-0\" 
(UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:08 crc kubenswrapper[4632]: E0313 10:24:08.404425 4632 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 10:24:08 crc kubenswrapper[4632]: E0313 10:24:08.404728 4632 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 10:24:08 crc kubenswrapper[4632]: E0313 10:24:08.404799 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift podName:e37b3d77-de2e-4be9-9984-550d4ba0f2f0 nodeName:}" failed. No retries permitted until 2026-03-13 10:24:09.404777604 +0000 UTC m=+1223.427307737 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift") pod "swift-storage-0" (UID: "e37b3d77-de2e-4be9-9984-550d4ba0f2f0") : configmap "swift-ring-files" not found Mar 13 10:24:09 crc kubenswrapper[4632]: I0313 10:24:09.081082 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" event={"ID":"7203640d-964c-4c28-8cc2-6a7ae27cdab3","Type":"ContainerStarted","Data":"f1255f2b0d97d7bcc13a7045fc5d8e4778eece89f9f6f1d468ae8c05e428c6f7"} Mar 13 10:24:09 crc kubenswrapper[4632]: I0313 10:24:09.081458 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" Mar 13 10:24:09 crc kubenswrapper[4632]: I0313 10:24:09.102172 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" podStartSLOduration=3.102152103 podStartE2EDuration="3.102152103s" podCreationTimestamp="2026-03-13 10:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:24:09.098136646 +0000 UTC m=+1223.120666789" watchObservedRunningTime="2026-03-13 10:24:09.102152103 +0000 UTC m=+1223.124682236" Mar 13 10:24:09 crc kubenswrapper[4632]: I0313 10:24:09.422813 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:09 crc kubenswrapper[4632]: E0313 10:24:09.423000 4632 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 10:24:09 crc kubenswrapper[4632]: E0313 10:24:09.423713 4632 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 10:24:09 crc kubenswrapper[4632]: E0313 10:24:09.423776 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift podName:e37b3d77-de2e-4be9-9984-550d4ba0f2f0 nodeName:}" failed. No retries permitted until 2026-03-13 10:24:11.423756293 +0000 UTC m=+1225.446286426 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift") pod "swift-storage-0" (UID: "e37b3d77-de2e-4be9-9984-550d4ba0f2f0") : configmap "swift-ring-files" not found Mar 13 10:24:10 crc kubenswrapper[4632]: I0313 10:24:10.461165 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:24:10 crc kubenswrapper[4632]: I0313 10:24:10.461241 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:24:10 crc kubenswrapper[4632]: I0313 10:24:10.827676 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-c8lh5"] Mar 13 10:24:10 crc kubenswrapper[4632]: I0313 10:24:10.828816 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c8lh5" Mar 13 10:24:10 crc kubenswrapper[4632]: I0313 10:24:10.832055 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 13 10:24:10 crc kubenswrapper[4632]: I0313 10:24:10.865339 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-c8lh5"] Mar 13 10:24:10 crc kubenswrapper[4632]: I0313 10:24:10.951306 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc41c555-17e5-4785-a003-3f8e9f10d799-operator-scripts\") pod \"root-account-create-update-c8lh5\" (UID: \"cc41c555-17e5-4785-a003-3f8e9f10d799\") " pod="openstack/root-account-create-update-c8lh5" Mar 13 10:24:10 crc kubenswrapper[4632]: I0313 10:24:10.952793 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvft2\" (UniqueName: \"kubernetes.io/projected/cc41c555-17e5-4785-a003-3f8e9f10d799-kube-api-access-gvft2\") pod \"root-account-create-update-c8lh5\" (UID: \"cc41c555-17e5-4785-a003-3f8e9f10d799\") " pod="openstack/root-account-create-update-c8lh5" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.055321 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvft2\" (UniqueName: \"kubernetes.io/projected/cc41c555-17e5-4785-a003-3f8e9f10d799-kube-api-access-gvft2\") pod \"root-account-create-update-c8lh5\" (UID: \"cc41c555-17e5-4785-a003-3f8e9f10d799\") " pod="openstack/root-account-create-update-c8lh5" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.055706 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc41c555-17e5-4785-a003-3f8e9f10d799-operator-scripts\") pod \"root-account-create-update-c8lh5\" (UID: \"cc41c555-17e5-4785-a003-3f8e9f10d799\") " pod="openstack/root-account-create-update-c8lh5" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.056667 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/cc41c555-17e5-4785-a003-3f8e9f10d799-operator-scripts\") pod \"root-account-create-update-c8lh5\" (UID: \"cc41c555-17e5-4785-a003-3f8e9f10d799\") " pod="openstack/root-account-create-update-c8lh5" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.078918 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvft2\" (UniqueName: \"kubernetes.io/projected/cc41c555-17e5-4785-a003-3f8e9f10d799-kube-api-access-gvft2\") pod \"root-account-create-update-c8lh5\" (UID: \"cc41c555-17e5-4785-a003-3f8e9f10d799\") " pod="openstack/root-account-create-update-c8lh5" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.161392 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c8lh5" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.451274 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-mkdcg"] Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.455725 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.460108 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.460848 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.463683 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mkdcg"] Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.464013 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.464015 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:11 crc kubenswrapper[4632]: E0313 10:24:11.464285 4632 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 10:24:11 crc kubenswrapper[4632]: E0313 10:24:11.464306 4632 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 10:24:11 crc kubenswrapper[4632]: E0313 10:24:11.464363 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift podName:e37b3d77-de2e-4be9-9984-550d4ba0f2f0 nodeName:}" failed. No retries permitted until 2026-03-13 10:24:15.464346907 +0000 UTC m=+1229.486877040 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift") pod "swift-storage-0" (UID: "e37b3d77-de2e-4be9-9984-550d4ba0f2f0") : configmap "swift-ring-files" not found Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.566327 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bc39c52e-008f-40c1-b93b-532707127fcd-ring-data-devices\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.566581 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bc39c52e-008f-40c1-b93b-532707127fcd-etc-swift\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.566622 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-dispersionconf\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.566721 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-swiftconf\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.566749 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7f9b\" (UniqueName: \"kubernetes.io/projected/bc39c52e-008f-40c1-b93b-532707127fcd-kube-api-access-v7f9b\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.566816 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc39c52e-008f-40c1-b93b-532707127fcd-scripts\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.566836 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-combined-ca-bundle\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.615154 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-c8lh5"] Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.671827 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bc39c52e-008f-40c1-b93b-532707127fcd-ring-data-devices\") pod \"swift-ring-rebalance-mkdcg\" (UID: 
\"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.671876 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bc39c52e-008f-40c1-b93b-532707127fcd-etc-swift\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.671895 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-dispersionconf\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.672854 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bc39c52e-008f-40c1-b93b-532707127fcd-ring-data-devices\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.673099 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bc39c52e-008f-40c1-b93b-532707127fcd-etc-swift\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.674783 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-swiftconf\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.674847 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7f9b\" (UniqueName: \"kubernetes.io/projected/bc39c52e-008f-40c1-b93b-532707127fcd-kube-api-access-v7f9b\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.674929 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc39c52e-008f-40c1-b93b-532707127fcd-scripts\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.674969 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-combined-ca-bundle\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.676205 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc39c52e-008f-40c1-b93b-532707127fcd-scripts\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.688450 
4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-dispersionconf\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.688860 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-combined-ca-bundle\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.689846 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-swiftconf\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.697761 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7f9b\" (UniqueName: \"kubernetes.io/projected/bc39c52e-008f-40c1-b93b-532707127fcd-kube-api-access-v7f9b\") pod \"swift-ring-rebalance-mkdcg\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") " pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:11 crc kubenswrapper[4632]: I0313 10:24:11.778784 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:12 crc kubenswrapper[4632]: I0313 10:24:12.107878 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c8lh5" event={"ID":"cc41c555-17e5-4785-a003-3f8e9f10d799","Type":"ContainerStarted","Data":"9011fe3e8ff19daa76b8d8bddf336d224d69f10272938404d994caa9a1a4d6ee"} Mar 13 10:24:12 crc kubenswrapper[4632]: I0313 10:24:12.108392 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c8lh5" event={"ID":"cc41c555-17e5-4785-a003-3f8e9f10d799","Type":"ContainerStarted","Data":"0b853f73f9460c789c867abbd82e3ae379a221406d460cfe9764bd3d4e71050b"} Mar 13 10:24:12 crc kubenswrapper[4632]: I0313 10:24:12.217809 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-c8lh5" podStartSLOduration=2.217788002 podStartE2EDuration="2.217788002s" podCreationTimestamp="2026-03-13 10:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:24:12.13333445 +0000 UTC m=+1226.155864593" watchObservedRunningTime="2026-03-13 10:24:12.217788002 +0000 UTC m=+1226.240318135" Mar 13 10:24:12 crc kubenswrapper[4632]: I0313 10:24:12.227542 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mkdcg"] Mar 13 10:24:13 crc kubenswrapper[4632]: I0313 10:24:13.116177 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mkdcg" event={"ID":"bc39c52e-008f-40c1-b93b-532707127fcd","Type":"ContainerStarted","Data":"b9c30e7d71115b424270718b1169d7d7c69bba98c01559c66a08b1a331e3ccdd"} Mar 13 10:24:13 crc kubenswrapper[4632]: I0313 10:24:13.119176 4632 generic.go:334] "Generic (PLEG): container finished" podID="cc41c555-17e5-4785-a003-3f8e9f10d799" 
containerID="9011fe3e8ff19daa76b8d8bddf336d224d69f10272938404d994caa9a1a4d6ee" exitCode=0 Mar 13 10:24:13 crc kubenswrapper[4632]: I0313 10:24:13.119244 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c8lh5" event={"ID":"cc41c555-17e5-4785-a003-3f8e9f10d799","Type":"ContainerDied","Data":"9011fe3e8ff19daa76b8d8bddf336d224d69f10272938404d994caa9a1a4d6ee"} Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.001113 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-7hqpw"] Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.002671 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7hqpw" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.010649 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7hqpw"] Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.130774 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9698-account-create-update-9kfhv"] Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.137816 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9698-account-create-update-9kfhv" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.142992 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wszzh\" (UniqueName: \"kubernetes.io/projected/584d2818-4b22-468f-b296-bd1850c7915b-kube-api-access-wszzh\") pod \"glance-db-create-7hqpw\" (UID: \"584d2818-4b22-468f-b296-bd1850c7915b\") " pod="openstack/glance-db-create-7hqpw" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.143116 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/584d2818-4b22-468f-b296-bd1850c7915b-operator-scripts\") pod \"glance-db-create-7hqpw\" (UID: \"584d2818-4b22-468f-b296-bd1850c7915b\") " pod="openstack/glance-db-create-7hqpw" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.143570 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.150668 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9698-account-create-update-9kfhv"] Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.245541 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wszzh\" (UniqueName: \"kubernetes.io/projected/584d2818-4b22-468f-b296-bd1850c7915b-kube-api-access-wszzh\") pod \"glance-db-create-7hqpw\" (UID: \"584d2818-4b22-468f-b296-bd1850c7915b\") " pod="openstack/glance-db-create-7hqpw" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.245623 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/584d2818-4b22-468f-b296-bd1850c7915b-operator-scripts\") pod \"glance-db-create-7hqpw\" (UID: \"584d2818-4b22-468f-b296-bd1850c7915b\") " pod="openstack/glance-db-create-7hqpw" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.245696 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2610abab-1da4-4912-9ca7-f2aa2d7c0486-operator-scripts\") pod \"glance-9698-account-create-update-9kfhv\" (UID: 
\"2610abab-1da4-4912-9ca7-f2aa2d7c0486\") " pod="openstack/glance-9698-account-create-update-9kfhv" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.245858 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29xj8\" (UniqueName: \"kubernetes.io/projected/2610abab-1da4-4912-9ca7-f2aa2d7c0486-kube-api-access-29xj8\") pod \"glance-9698-account-create-update-9kfhv\" (UID: \"2610abab-1da4-4912-9ca7-f2aa2d7c0486\") " pod="openstack/glance-9698-account-create-update-9kfhv" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.247873 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/584d2818-4b22-468f-b296-bd1850c7915b-operator-scripts\") pod \"glance-db-create-7hqpw\" (UID: \"584d2818-4b22-468f-b296-bd1850c7915b\") " pod="openstack/glance-db-create-7hqpw" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.283661 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wszzh\" (UniqueName: \"kubernetes.io/projected/584d2818-4b22-468f-b296-bd1850c7915b-kube-api-access-wszzh\") pod \"glance-db-create-7hqpw\" (UID: \"584d2818-4b22-468f-b296-bd1850c7915b\") " pod="openstack/glance-db-create-7hqpw" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.347883 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2610abab-1da4-4912-9ca7-f2aa2d7c0486-operator-scripts\") pod \"glance-9698-account-create-update-9kfhv\" (UID: \"2610abab-1da4-4912-9ca7-f2aa2d7c0486\") " pod="openstack/glance-9698-account-create-update-9kfhv" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.348085 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29xj8\" (UniqueName: \"kubernetes.io/projected/2610abab-1da4-4912-9ca7-f2aa2d7c0486-kube-api-access-29xj8\") pod \"glance-9698-account-create-update-9kfhv\" (UID: \"2610abab-1da4-4912-9ca7-f2aa2d7c0486\") " pod="openstack/glance-9698-account-create-update-9kfhv" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.349306 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2610abab-1da4-4912-9ca7-f2aa2d7c0486-operator-scripts\") pod \"glance-9698-account-create-update-9kfhv\" (UID: \"2610abab-1da4-4912-9ca7-f2aa2d7c0486\") " pod="openstack/glance-9698-account-create-update-9kfhv" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.362448 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7hqpw" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.373659 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29xj8\" (UniqueName: \"kubernetes.io/projected/2610abab-1da4-4912-9ca7-f2aa2d7c0486-kube-api-access-29xj8\") pod \"glance-9698-account-create-update-9kfhv\" (UID: \"2610abab-1da4-4912-9ca7-f2aa2d7c0486\") " pod="openstack/glance-9698-account-create-update-9kfhv" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.476449 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9698-account-create-update-9kfhv" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.760874 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-bfb6b"] Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.761989 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bfb6b" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.782684 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bfb6b"] Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.828121 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c8lh5" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.863484 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e353045-e09b-4cd2-b659-1954485ec8db-operator-scripts\") pod \"keystone-db-create-bfb6b\" (UID: \"8e353045-e09b-4cd2-b659-1954485ec8db\") " pod="openstack/keystone-db-create-bfb6b" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.863727 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjh6b\" (UniqueName: \"kubernetes.io/projected/8e353045-e09b-4cd2-b659-1954485ec8db-kube-api-access-fjh6b\") pod \"keystone-db-create-bfb6b\" (UID: \"8e353045-e09b-4cd2-b659-1954485ec8db\") " pod="openstack/keystone-db-create-bfb6b" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.901559 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-ab0c-account-create-update-tr7hx"] Mar 13 10:24:14 crc kubenswrapper[4632]: E0313 10:24:14.901962 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc41c555-17e5-4785-a003-3f8e9f10d799" containerName="mariadb-account-create-update" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.901987 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc41c555-17e5-4785-a003-3f8e9f10d799" containerName="mariadb-account-create-update" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.902369 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc41c555-17e5-4785-a003-3f8e9f10d799" containerName="mariadb-account-create-update" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.903175 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ab0c-account-create-update-tr7hx" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.907649 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.922893 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ab0c-account-create-update-tr7hx"] Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.965767 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc41c555-17e5-4785-a003-3f8e9f10d799-operator-scripts\") pod \"cc41c555-17e5-4785-a003-3f8e9f10d799\" (UID: \"cc41c555-17e5-4785-a003-3f8e9f10d799\") " Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.965921 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvft2\" (UniqueName: \"kubernetes.io/projected/cc41c555-17e5-4785-a003-3f8e9f10d799-kube-api-access-gvft2\") pod \"cc41c555-17e5-4785-a003-3f8e9f10d799\" (UID: \"cc41c555-17e5-4785-a003-3f8e9f10d799\") " Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.966252 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjh6b\" (UniqueName: \"kubernetes.io/projected/8e353045-e09b-4cd2-b659-1954485ec8db-kube-api-access-fjh6b\") pod \"keystone-db-create-bfb6b\" (UID: \"8e353045-e09b-4cd2-b659-1954485ec8db\") " pod="openstack/keystone-db-create-bfb6b" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.966338 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e353045-e09b-4cd2-b659-1954485ec8db-operator-scripts\") pod \"keystone-db-create-bfb6b\" (UID: \"8e353045-e09b-4cd2-b659-1954485ec8db\") " pod="openstack/keystone-db-create-bfb6b" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.969709 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc41c555-17e5-4785-a003-3f8e9f10d799-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc41c555-17e5-4785-a003-3f8e9f10d799" (UID: "cc41c555-17e5-4785-a003-3f8e9f10d799"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.970692 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e353045-e09b-4cd2-b659-1954485ec8db-operator-scripts\") pod \"keystone-db-create-bfb6b\" (UID: \"8e353045-e09b-4cd2-b659-1954485ec8db\") " pod="openstack/keystone-db-create-bfb6b" Mar 13 10:24:14 crc kubenswrapper[4632]: I0313 10:24:14.989419 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc41c555-17e5-4785-a003-3f8e9f10d799-kube-api-access-gvft2" (OuterVolumeSpecName: "kube-api-access-gvft2") pod "cc41c555-17e5-4785-a003-3f8e9f10d799" (UID: "cc41c555-17e5-4785-a003-3f8e9f10d799"). InnerVolumeSpecName "kube-api-access-gvft2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.013969 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjh6b\" (UniqueName: \"kubernetes.io/projected/8e353045-e09b-4cd2-b659-1954485ec8db-kube-api-access-fjh6b\") pod \"keystone-db-create-bfb6b\" (UID: \"8e353045-e09b-4cd2-b659-1954485ec8db\") " pod="openstack/keystone-db-create-bfb6b" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.014084 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-64xvf"] Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.015706 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-64xvf" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.025400 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-64xvf"] Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.072384 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpvp9\" (UniqueName: \"kubernetes.io/projected/6c84aa49-2900-4a14-b81b-bb03e925d1b7-kube-api-access-wpvp9\") pod \"keystone-ab0c-account-create-update-tr7hx\" (UID: \"6c84aa49-2900-4a14-b81b-bb03e925d1b7\") " pod="openstack/keystone-ab0c-account-create-update-tr7hx" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.077174 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c84aa49-2900-4a14-b81b-bb03e925d1b7-operator-scripts\") pod \"keystone-ab0c-account-create-update-tr7hx\" (UID: \"6c84aa49-2900-4a14-b81b-bb03e925d1b7\") " pod="openstack/keystone-ab0c-account-create-update-tr7hx" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.077673 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc41c555-17e5-4785-a003-3f8e9f10d799-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.077701 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvft2\" (UniqueName: \"kubernetes.io/projected/cc41c555-17e5-4785-a003-3f8e9f10d799-kube-api-access-gvft2\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.102510 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a750-account-create-update-7wk26"] Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.104426 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a750-account-create-update-7wk26" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.108911 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.114829 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a750-account-create-update-7wk26"] Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.151562 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-bfb6b" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.155233 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c8lh5" event={"ID":"cc41c555-17e5-4785-a003-3f8e9f10d799","Type":"ContainerDied","Data":"0b853f73f9460c789c867abbd82e3ae379a221406d460cfe9764bd3d4e71050b"} Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.155283 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b853f73f9460c789c867abbd82e3ae379a221406d460cfe9764bd3d4e71050b" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.155339 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c8lh5" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.180226 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlcdq\" (UniqueName: \"kubernetes.io/projected/5f09e2f4-4f82-4388-9b5a-a9e890d3a950-kube-api-access-mlcdq\") pod \"placement-db-create-64xvf\" (UID: \"5f09e2f4-4f82-4388-9b5a-a9e890d3a950\") " pod="openstack/placement-db-create-64xvf" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.180348 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c84aa49-2900-4a14-b81b-bb03e925d1b7-operator-scripts\") pod \"keystone-ab0c-account-create-update-tr7hx\" (UID: \"6c84aa49-2900-4a14-b81b-bb03e925d1b7\") " pod="openstack/keystone-ab0c-account-create-update-tr7hx" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.180599 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f09e2f4-4f82-4388-9b5a-a9e890d3a950-operator-scripts\") pod \"placement-db-create-64xvf\" (UID: \"5f09e2f4-4f82-4388-9b5a-a9e890d3a950\") " pod="openstack/placement-db-create-64xvf" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.181312 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpvp9\" (UniqueName: \"kubernetes.io/projected/6c84aa49-2900-4a14-b81b-bb03e925d1b7-kube-api-access-wpvp9\") pod \"keystone-ab0c-account-create-update-tr7hx\" (UID: \"6c84aa49-2900-4a14-b81b-bb03e925d1b7\") " pod="openstack/keystone-ab0c-account-create-update-tr7hx" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.182203 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c84aa49-2900-4a14-b81b-bb03e925d1b7-operator-scripts\") pod \"keystone-ab0c-account-create-update-tr7hx\" (UID: \"6c84aa49-2900-4a14-b81b-bb03e925d1b7\") " pod="openstack/keystone-ab0c-account-create-update-tr7hx" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.198087 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpvp9\" (UniqueName: \"kubernetes.io/projected/6c84aa49-2900-4a14-b81b-bb03e925d1b7-kube-api-access-wpvp9\") pod \"keystone-ab0c-account-create-update-tr7hx\" (UID: \"6c84aa49-2900-4a14-b81b-bb03e925d1b7\") " pod="openstack/keystone-ab0c-account-create-update-tr7hx" Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.225080 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ab0c-account-create-update-tr7hx"
Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.290316 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlcdq\" (UniqueName: \"kubernetes.io/projected/5f09e2f4-4f82-4388-9b5a-a9e890d3a950-kube-api-access-mlcdq\") pod \"placement-db-create-64xvf\" (UID: \"5f09e2f4-4f82-4388-9b5a-a9e890d3a950\") " pod="openstack/placement-db-create-64xvf"
Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.290706 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wds8g\" (UniqueName: \"kubernetes.io/projected/c4f6b362-7670-4867-b8fa-1f4c6170389f-kube-api-access-wds8g\") pod \"placement-a750-account-create-update-7wk26\" (UID: \"c4f6b362-7670-4867-b8fa-1f4c6170389f\") " pod="openstack/placement-a750-account-create-update-7wk26"
Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.291216 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f09e2f4-4f82-4388-9b5a-a9e890d3a950-operator-scripts\") pod \"placement-db-create-64xvf\" (UID: \"5f09e2f4-4f82-4388-9b5a-a9e890d3a950\") " pod="openstack/placement-db-create-64xvf"
Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.291290 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f6b362-7670-4867-b8fa-1f4c6170389f-operator-scripts\") pod \"placement-a750-account-create-update-7wk26\" (UID: \"c4f6b362-7670-4867-b8fa-1f4c6170389f\") " pod="openstack/placement-a750-account-create-update-7wk26"
Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.293156 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f09e2f4-4f82-4388-9b5a-a9e890d3a950-operator-scripts\") pod \"placement-db-create-64xvf\" (UID: \"5f09e2f4-4f82-4388-9b5a-a9e890d3a950\") " pod="openstack/placement-db-create-64xvf"
Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.314000 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlcdq\" (UniqueName: \"kubernetes.io/projected/5f09e2f4-4f82-4388-9b5a-a9e890d3a950-kube-api-access-mlcdq\") pod \"placement-db-create-64xvf\" (UID: \"5f09e2f4-4f82-4388-9b5a-a9e890d3a950\") " pod="openstack/placement-db-create-64xvf"
Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.348531 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-64xvf"
Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.394142 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wds8g\" (UniqueName: \"kubernetes.io/projected/c4f6b362-7670-4867-b8fa-1f4c6170389f-kube-api-access-wds8g\") pod \"placement-a750-account-create-update-7wk26\" (UID: \"c4f6b362-7670-4867-b8fa-1f4c6170389f\") " pod="openstack/placement-a750-account-create-update-7wk26"
Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.394355 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f6b362-7670-4867-b8fa-1f4c6170389f-operator-scripts\") pod \"placement-a750-account-create-update-7wk26\" (UID: \"c4f6b362-7670-4867-b8fa-1f4c6170389f\") " pod="openstack/placement-a750-account-create-update-7wk26"
Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.395234 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f6b362-7670-4867-b8fa-1f4c6170389f-operator-scripts\") pod \"placement-a750-account-create-update-7wk26\" (UID: \"c4f6b362-7670-4867-b8fa-1f4c6170389f\") " pod="openstack/placement-a750-account-create-update-7wk26"
Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.416924 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wds8g\" (UniqueName: \"kubernetes.io/projected/c4f6b362-7670-4867-b8fa-1f4c6170389f-kube-api-access-wds8g\") pod \"placement-a750-account-create-update-7wk26\" (UID: \"c4f6b362-7670-4867-b8fa-1f4c6170389f\") " pod="openstack/placement-a750-account-create-update-7wk26"
Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.427082 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a750-account-create-update-7wk26"
Mar 13 10:24:15 crc kubenswrapper[4632]: I0313 10:24:15.495010 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0"
Mar 13 10:24:15 crc kubenswrapper[4632]: E0313 10:24:15.495234 4632 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 13 10:24:15 crc kubenswrapper[4632]: E0313 10:24:15.495249 4632 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 13 10:24:15 crc kubenswrapper[4632]: E0313 10:24:15.495291 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift podName:e37b3d77-de2e-4be9-9984-550d4ba0f2f0 nodeName:}" failed. No retries permitted until 2026-03-13 10:24:23.495277311 +0000 UTC m=+1237.517807444 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift") pod "swift-storage-0" (UID: "e37b3d77-de2e-4be9-9984-550d4ba0f2f0") : configmap "swift-ring-files" not found
Mar 13 10:24:16 crc kubenswrapper[4632]: W0313 10:24:16.703184 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2610abab_1da4_4912_9ca7_f2aa2d7c0486.slice/crio-93ffdbe4b1ae6dbe85c9cdaa72f589075fb53d7d372bcaee83be3fb573b289c4 WatchSource:0}: Error finding container 93ffdbe4b1ae6dbe85c9cdaa72f589075fb53d7d372bcaee83be3fb573b289c4: Status 404 returned error can't find the container with id 93ffdbe4b1ae6dbe85c9cdaa72f589075fb53d7d372bcaee83be3fb573b289c4
Mar 13 10:24:16 crc kubenswrapper[4632]: I0313 10:24:16.710594 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9698-account-create-update-9kfhv"]
Mar 13 10:24:16 crc kubenswrapper[4632]: I0313 10:24:16.720718 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj"
Mar 13 10:24:16 crc kubenswrapper[4632]: I0313 10:24:16.799281 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79dfb79747-jv5m6"]
Mar 13 10:24:16 crc kubenswrapper[4632]: I0313 10:24:16.799555 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79dfb79747-jv5m6" podUID="a362448e-8daa-4bf4-958f-f3ca135be228" containerName="dnsmasq-dns" containerID="cri-o://6e1c032b958be8592683422ee06f119be07d42e5fc24c06ebfd10193412b1ccc" gracePeriod=10
Mar 13 10:24:16 crc kubenswrapper[4632]: I0313 10:24:16.889118 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bfb6b"]
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.070068 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-c8lh5"]
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.091310 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-c8lh5"]
Mar 13 10:24:17 crc kubenswrapper[4632]: W0313 10:24:17.170582 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4f6b362_7670_4867_b8fa_1f4c6170389f.slice/crio-8da340b9259c4e3950c1e2300189828fc046a0a3f7a7546b25adfdd801a1d232 WatchSource:0}: Error finding container 8da340b9259c4e3950c1e2300189828fc046a0a3f7a7546b25adfdd801a1d232: Status 404 returned error can't find the container with id 8da340b9259c4e3950c1e2300189828fc046a0a3f7a7546b25adfdd801a1d232
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.178593 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a750-account-create-update-7wk26"]
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.194577 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7hqpw"]
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.196467 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9698-account-create-update-9kfhv" event={"ID":"2610abab-1da4-4912-9ca7-f2aa2d7c0486","Type":"ContainerStarted","Data":"c0ed44d952b9a10d8f17f6b274d11ae8079f72b678bca2ec969eb44a14c0f18e"}
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.196505 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9698-account-create-update-9kfhv" event={"ID":"2610abab-1da4-4912-9ca7-f2aa2d7c0486","Type":"ContainerStarted","Data":"93ffdbe4b1ae6dbe85c9cdaa72f589075fb53d7d372bcaee83be3fb573b289c4"}
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.206278 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ab0c-account-create-update-tr7hx"]
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.207218 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bfb6b" event={"ID":"8e353045-e09b-4cd2-b659-1954485ec8db","Type":"ContainerStarted","Data":"e12bb579655132c65f7afaf171587507463b77c9b73d0902f8981397a2c342cd"}
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.207343 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bfb6b" event={"ID":"8e353045-e09b-4cd2-b659-1954485ec8db","Type":"ContainerStarted","Data":"535f1a0d62083c8f1779cf07cdf7b6b338543f49b074ea77690edaf35b0bd836"}
Mar 13 10:24:17 crc kubenswrapper[4632]: W0313 10:24:17.208400 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod584d2818_4b22_468f_b296_bd1850c7915b.slice/crio-4a12c0a38209ede38a672765a10b8d7b9b038d2da32f90548d5d8030e926a912 WatchSource:0}: Error finding container 4a12c0a38209ede38a672765a10b8d7b9b038d2da32f90548d5d8030e926a912: Status 404 returned error can't find the container with id 4a12c0a38209ede38a672765a10b8d7b9b038d2da32f90548d5d8030e926a912
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.218261 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-64xvf"]
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.224442 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-9698-account-create-update-9kfhv" podStartSLOduration=3.224418293 podStartE2EDuration="3.224418293s" podCreationTimestamp="2026-03-13 10:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:24:17.214923801 +0000 UTC m=+1231.237453934" watchObservedRunningTime="2026-03-13 10:24:17.224418293 +0000 UTC m=+1231.246948416"
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.231261 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mkdcg" event={"ID":"bc39c52e-008f-40c1-b93b-532707127fcd","Type":"ContainerStarted","Data":"c1060b701ea818cb4c5d1e5e94618270eed048e8bd16d50775b43c9b34c6b1b9"}
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.248046 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-bfb6b" podStartSLOduration=3.247987741 podStartE2EDuration="3.247987741s" podCreationTimestamp="2026-03-13 10:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:24:17.23324607 +0000 UTC m=+1231.255776203" watchObservedRunningTime="2026-03-13 10:24:17.247987741 +0000 UTC m=+1231.270517874"
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.265442 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-mkdcg" podStartSLOduration=2.2120488050000002 podStartE2EDuration="6.2654154s" podCreationTimestamp="2026-03-13 10:24:11 +0000 UTC" firstStartedPulling="2026-03-13 10:24:12.227328045 +0000 UTC m=+1226.249858178" lastFinishedPulling="2026-03-13 10:24:16.28069464 +0000 UTC m=+1230.303224773" observedRunningTime="2026-03-13 10:24:17.260723324 +0000 UTC m=+1231.283253477" watchObservedRunningTime="2026-03-13 10:24:17.2654154 +0000 UTC m=+1231.287945533"
Mar 13 10:24:17 crc kubenswrapper[4632]: W0313 10:24:17.266027 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c84aa49_2900_4a14_b81b_bb03e925d1b7.slice/crio-c64cf5189ab7a5f49e61bd270ad953bb57c962bfc193a6a4df7169ba439b5084 WatchSource:0}: Error finding container c64cf5189ab7a5f49e61bd270ad953bb57c962bfc193a6a4df7169ba439b5084: Status 404 returned error can't find the container with id c64cf5189ab7a5f49e61bd270ad953bb57c962bfc193a6a4df7169ba439b5084
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.266190 4632 generic.go:334] "Generic (PLEG): container finished" podID="a362448e-8daa-4bf4-958f-f3ca135be228" containerID="6e1c032b958be8592683422ee06f119be07d42e5fc24c06ebfd10193412b1ccc" exitCode=0
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.266266 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79dfb79747-jv5m6" event={"ID":"a362448e-8daa-4bf4-958f-f3ca135be228","Type":"ContainerDied","Data":"6e1c032b958be8592683422ee06f119be07d42e5fc24c06ebfd10193412b1ccc"}
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.714243 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79dfb79747-jv5m6"
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.841737 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-ovsdbserver-sb\") pod \"a362448e-8daa-4bf4-958f-f3ca135be228\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") "
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.841990 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pv67\" (UniqueName: \"kubernetes.io/projected/a362448e-8daa-4bf4-958f-f3ca135be228-kube-api-access-5pv67\") pod \"a362448e-8daa-4bf4-958f-f3ca135be228\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") "
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.842183 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-dns-svc\") pod \"a362448e-8daa-4bf4-958f-f3ca135be228\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") "
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.842276 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-config\") pod \"a362448e-8daa-4bf4-958f-f3ca135be228\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") "
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.842411 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-ovsdbserver-nb\") pod \"a362448e-8daa-4bf4-958f-f3ca135be228\" (UID: \"a362448e-8daa-4bf4-958f-f3ca135be228\") "
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.852307 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a362448e-8daa-4bf4-958f-f3ca135be228-kube-api-access-5pv67" (OuterVolumeSpecName: "kube-api-access-5pv67") pod "a362448e-8daa-4bf4-958f-f3ca135be228" (UID: "a362448e-8daa-4bf4-958f-f3ca135be228"). InnerVolumeSpecName "kube-api-access-5pv67". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:24:17 crc kubenswrapper[4632]: I0313 10:24:17.946035 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pv67\" (UniqueName: \"kubernetes.io/projected/a362448e-8daa-4bf4-958f-f3ca135be228-kube-api-access-5pv67\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.045802 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a362448e-8daa-4bf4-958f-f3ca135be228" (UID: "a362448e-8daa-4bf4-958f-f3ca135be228"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.047076 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.072805 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc41c555-17e5-4785-a003-3f8e9f10d799" path="/var/lib/kubelet/pods/cc41c555-17e5-4785-a003-3f8e9f10d799/volumes"
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.111237 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a362448e-8daa-4bf4-958f-f3ca135be228" (UID: "a362448e-8daa-4bf4-958f-f3ca135be228"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.149805 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.150882 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-config" (OuterVolumeSpecName: "config") pod "a362448e-8daa-4bf4-958f-f3ca135be228" (UID: "a362448e-8daa-4bf4-958f-f3ca135be228"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.153703 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a362448e-8daa-4bf4-958f-f3ca135be228" (UID: "a362448e-8daa-4bf4-958f-f3ca135be228"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.251460 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.251842 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a362448e-8daa-4bf4-958f-f3ca135be228-config\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.292146 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79dfb79747-jv5m6"
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.293794 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79dfb79747-jv5m6" event={"ID":"a362448e-8daa-4bf4-958f-f3ca135be228","Type":"ContainerDied","Data":"353c97e7f46060d145aa3be9824f787e63d6eed5891607427c9311023caa0833"}
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.294031 4632 scope.go:117] "RemoveContainer" containerID="6e1c032b958be8592683422ee06f119be07d42e5fc24c06ebfd10193412b1ccc"
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.298656 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a750-account-create-update-7wk26" event={"ID":"c4f6b362-7670-4867-b8fa-1f4c6170389f","Type":"ContainerStarted","Data":"10bcedf0effae05b832e3793407fcf2703d9df4f7136a8211c78de6b0a99c17b"}
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.298862 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a750-account-create-update-7wk26" event={"ID":"c4f6b362-7670-4867-b8fa-1f4c6170389f","Type":"ContainerStarted","Data":"8da340b9259c4e3950c1e2300189828fc046a0a3f7a7546b25adfdd801a1d232"}
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.304581 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ab0c-account-create-update-tr7hx" event={"ID":"6c84aa49-2900-4a14-b81b-bb03e925d1b7","Type":"ContainerStarted","Data":"40127d251d4cb7407ae0ce8a1705cd5210171fb2a750df3289fa3b2b9a54b055"}
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.304866 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ab0c-account-create-update-tr7hx" event={"ID":"6c84aa49-2900-4a14-b81b-bb03e925d1b7","Type":"ContainerStarted","Data":"c64cf5189ab7a5f49e61bd270ad953bb57c962bfc193a6a4df7169ba439b5084"}
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.318179 4632 generic.go:334] "Generic (PLEG): container finished" podID="2610abab-1da4-4912-9ca7-f2aa2d7c0486" containerID="c0ed44d952b9a10d8f17f6b274d11ae8079f72b678bca2ec969eb44a14c0f18e" exitCode=0
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.318245 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9698-account-create-update-9kfhv" event={"ID":"2610abab-1da4-4912-9ca7-f2aa2d7c0486","Type":"ContainerDied","Data":"c0ed44d952b9a10d8f17f6b274d11ae8079f72b678bca2ec969eb44a14c0f18e"}
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.332500 4632 generic.go:334] "Generic (PLEG): container finished" podID="8e353045-e09b-4cd2-b659-1954485ec8db" containerID="e12bb579655132c65f7afaf171587507463b77c9b73d0902f8981397a2c342cd" exitCode=0
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.332602 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bfb6b" event={"ID":"8e353045-e09b-4cd2-b659-1954485ec8db","Type":"ContainerDied","Data":"e12bb579655132c65f7afaf171587507463b77c9b73d0902f8981397a2c342cd"}
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.336502 4632 generic.go:334] "Generic (PLEG): container finished" podID="5f09e2f4-4f82-4388-9b5a-a9e890d3a950" containerID="05358506b7b8a5602da80aa6b4985f897c7b0818d4a2f70ed84421563493ee78" exitCode=0
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.336567 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-64xvf" event={"ID":"5f09e2f4-4f82-4388-9b5a-a9e890d3a950","Type":"ContainerDied","Data":"05358506b7b8a5602da80aa6b4985f897c7b0818d4a2f70ed84421563493ee78"}
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.336594 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-64xvf" event={"ID":"5f09e2f4-4f82-4388-9b5a-a9e890d3a950","Type":"ContainerStarted","Data":"fa3bd3847d87a827b95ad3e0b17ca932be91f863713ca115297ee8fc7b29e228"}
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.342066 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-a750-account-create-update-7wk26" podStartSLOduration=3.342040163 podStartE2EDuration="3.342040163s" podCreationTimestamp="2026-03-13 10:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:24:18.326183234 +0000 UTC m=+1232.348713387" watchObservedRunningTime="2026-03-13 10:24:18.342040163 +0000 UTC m=+1232.364570316"
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.346314 4632 generic.go:334] "Generic (PLEG): container finished" podID="584d2818-4b22-468f-b296-bd1850c7915b" containerID="2fd6ae14a44d07bfe626dada3603473befbf9326ca83648414737abd80e0ce5e" exitCode=0
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.346470 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7hqpw" event={"ID":"584d2818-4b22-468f-b296-bd1850c7915b","Type":"ContainerDied","Data":"2fd6ae14a44d07bfe626dada3603473befbf9326ca83648414737abd80e0ce5e"}
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.346517 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7hqpw" event={"ID":"584d2818-4b22-468f-b296-bd1850c7915b","Type":"ContainerStarted","Data":"4a12c0a38209ede38a672765a10b8d7b9b038d2da32f90548d5d8030e926a912"}
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.369108 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-ab0c-account-create-update-tr7hx" podStartSLOduration=4.369091057 podStartE2EDuration="4.369091057s" podCreationTimestamp="2026-03-13 10:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:24:18.346268037 +0000 UTC m=+1232.368798170" watchObservedRunningTime="2026-03-13 10:24:18.369091057 +0000 UTC m=+1232.391621190"
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.375345 4632 scope.go:117] "RemoveContainer" containerID="c9a88952f81b62d419132fe9a18256ffde7daf30602bf205173e43b0963b20c3"
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.432654 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79dfb79747-jv5m6"]
Mar 13 10:24:18 crc kubenswrapper[4632]: I0313 10:24:18.441916 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79dfb79747-jv5m6"]
Mar 13 10:24:19 crc kubenswrapper[4632]: I0313 10:24:19.357243 4632 generic.go:334] "Generic (PLEG): container finished" podID="6c84aa49-2900-4a14-b81b-bb03e925d1b7" containerID="40127d251d4cb7407ae0ce8a1705cd5210171fb2a750df3289fa3b2b9a54b055" exitCode=0
Mar 13 10:24:19 crc kubenswrapper[4632]: I0313 10:24:19.357312 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ab0c-account-create-update-tr7hx" event={"ID":"6c84aa49-2900-4a14-b81b-bb03e925d1b7","Type":"ContainerDied","Data":"40127d251d4cb7407ae0ce8a1705cd5210171fb2a750df3289fa3b2b9a54b055"}
Mar 13 10:24:19 crc kubenswrapper[4632]: I0313 10:24:19.360539 4632 generic.go:334] "Generic (PLEG): container finished" podID="c4f6b362-7670-4867-b8fa-1f4c6170389f" containerID="10bcedf0effae05b832e3793407fcf2703d9df4f7136a8211c78de6b0a99c17b" exitCode=0
Mar 13 10:24:19 crc kubenswrapper[4632]: I0313 10:24:19.360766 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a750-account-create-update-7wk26" event={"ID":"c4f6b362-7670-4867-b8fa-1f4c6170389f","Type":"ContainerDied","Data":"10bcedf0effae05b832e3793407fcf2703d9df4f7136a8211c78de6b0a99c17b"}
Mar 13 10:24:19 crc kubenswrapper[4632]: I0313 10:24:19.888199 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bfb6b"
Mar 13 10:24:19 crc kubenswrapper[4632]: I0313 10:24:19.980480 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjh6b\" (UniqueName: \"kubernetes.io/projected/8e353045-e09b-4cd2-b659-1954485ec8db-kube-api-access-fjh6b\") pod \"8e353045-e09b-4cd2-b659-1954485ec8db\" (UID: \"8e353045-e09b-4cd2-b659-1954485ec8db\") "
Mar 13 10:24:19 crc kubenswrapper[4632]: I0313 10:24:19.980563 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e353045-e09b-4cd2-b659-1954485ec8db-operator-scripts\") pod \"8e353045-e09b-4cd2-b659-1954485ec8db\" (UID: \"8e353045-e09b-4cd2-b659-1954485ec8db\") "
Mar 13 10:24:19 crc kubenswrapper[4632]: I0313 10:24:19.981074 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e353045-e09b-4cd2-b659-1954485ec8db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e353045-e09b-4cd2-b659-1954485ec8db" (UID: "8e353045-e09b-4cd2-b659-1954485ec8db"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:24:19 crc kubenswrapper[4632]: I0313 10:24:19.986506 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e353045-e09b-4cd2-b659-1954485ec8db-kube-api-access-fjh6b" (OuterVolumeSpecName: "kube-api-access-fjh6b") pod "8e353045-e09b-4cd2-b659-1954485ec8db" (UID: "8e353045-e09b-4cd2-b659-1954485ec8db"). InnerVolumeSpecName "kube-api-access-fjh6b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.034474 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7hqpw"
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.050998 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-64xvf"
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.058190 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9698-account-create-update-9kfhv"
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.070486 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a362448e-8daa-4bf4-958f-f3ca135be228" path="/var/lib/kubelet/pods/a362448e-8daa-4bf4-958f-f3ca135be228/volumes"
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.082744 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjh6b\" (UniqueName: \"kubernetes.io/projected/8e353045-e09b-4cd2-b659-1954485ec8db-kube-api-access-fjh6b\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.082765 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e353045-e09b-4cd2-b659-1954485ec8db-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.183905 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2610abab-1da4-4912-9ca7-f2aa2d7c0486-operator-scripts\") pod \"2610abab-1da4-4912-9ca7-f2aa2d7c0486\" (UID: \"2610abab-1da4-4912-9ca7-f2aa2d7c0486\") "
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.184005 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlcdq\" (UniqueName: \"kubernetes.io/projected/5f09e2f4-4f82-4388-9b5a-a9e890d3a950-kube-api-access-mlcdq\") pod \"5f09e2f4-4f82-4388-9b5a-a9e890d3a950\" (UID: \"5f09e2f4-4f82-4388-9b5a-a9e890d3a950\") "
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.184109 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wszzh\" (UniqueName: \"kubernetes.io/projected/584d2818-4b22-468f-b296-bd1850c7915b-kube-api-access-wszzh\") pod \"584d2818-4b22-468f-b296-bd1850c7915b\" (UID: \"584d2818-4b22-468f-b296-bd1850c7915b\") "
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.184149 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29xj8\" (UniqueName: \"kubernetes.io/projected/2610abab-1da4-4912-9ca7-f2aa2d7c0486-kube-api-access-29xj8\") pod \"2610abab-1da4-4912-9ca7-f2aa2d7c0486\" (UID: \"2610abab-1da4-4912-9ca7-f2aa2d7c0486\") "
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.184219 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/584d2818-4b22-468f-b296-bd1850c7915b-operator-scripts\") pod \"584d2818-4b22-468f-b296-bd1850c7915b\" (UID: \"584d2818-4b22-468f-b296-bd1850c7915b\") "
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.184255 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f09e2f4-4f82-4388-9b5a-a9e890d3a950-operator-scripts\") pod \"5f09e2f4-4f82-4388-9b5a-a9e890d3a950\" (UID: \"5f09e2f4-4f82-4388-9b5a-a9e890d3a950\") "
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.184979 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2610abab-1da4-4912-9ca7-f2aa2d7c0486-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2610abab-1da4-4912-9ca7-f2aa2d7c0486" (UID: "2610abab-1da4-4912-9ca7-f2aa2d7c0486"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.185098 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/584d2818-4b22-468f-b296-bd1850c7915b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "584d2818-4b22-468f-b296-bd1850c7915b" (UID: "584d2818-4b22-468f-b296-bd1850c7915b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.185236 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f09e2f4-4f82-4388-9b5a-a9e890d3a950-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f09e2f4-4f82-4388-9b5a-a9e890d3a950" (UID: "5f09e2f4-4f82-4388-9b5a-a9e890d3a950"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.187015 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584d2818-4b22-468f-b296-bd1850c7915b-kube-api-access-wszzh" (OuterVolumeSpecName: "kube-api-access-wszzh") pod "584d2818-4b22-468f-b296-bd1850c7915b" (UID: "584d2818-4b22-468f-b296-bd1850c7915b"). InnerVolumeSpecName "kube-api-access-wszzh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.187142 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f09e2f4-4f82-4388-9b5a-a9e890d3a950-kube-api-access-mlcdq" (OuterVolumeSpecName: "kube-api-access-mlcdq") pod "5f09e2f4-4f82-4388-9b5a-a9e890d3a950" (UID: "5f09e2f4-4f82-4388-9b5a-a9e890d3a950"). InnerVolumeSpecName "kube-api-access-mlcdq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.187352 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2610abab-1da4-4912-9ca7-f2aa2d7c0486-kube-api-access-29xj8" (OuterVolumeSpecName: "kube-api-access-29xj8") pod "2610abab-1da4-4912-9ca7-f2aa2d7c0486" (UID: "2610abab-1da4-4912-9ca7-f2aa2d7c0486"). InnerVolumeSpecName "kube-api-access-29xj8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.285829 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wszzh\" (UniqueName: \"kubernetes.io/projected/584d2818-4b22-468f-b296-bd1850c7915b-kube-api-access-wszzh\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.285883 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29xj8\" (UniqueName: \"kubernetes.io/projected/2610abab-1da4-4912-9ca7-f2aa2d7c0486-kube-api-access-29xj8\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.285912 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/584d2818-4b22-468f-b296-bd1850c7915b-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.285930 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f09e2f4-4f82-4388-9b5a-a9e890d3a950-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.285976 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2610abab-1da4-4912-9ca7-f2aa2d7c0486-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.285994 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlcdq\" (UniqueName: \"kubernetes.io/projected/5f09e2f4-4f82-4388-9b5a-a9e890d3a950-kube-api-access-mlcdq\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.368878 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9698-account-create-update-9kfhv" event={"ID":"2610abab-1da4-4912-9ca7-f2aa2d7c0486","Type":"ContainerDied","Data":"93ffdbe4b1ae6dbe85c9cdaa72f589075fb53d7d372bcaee83be3fb573b289c4"}
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.368919 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93ffdbe4b1ae6dbe85c9cdaa72f589075fb53d7d372bcaee83be3fb573b289c4"
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.368957 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9698-account-create-update-9kfhv"
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.370588 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bfb6b" event={"ID":"8e353045-e09b-4cd2-b659-1954485ec8db","Type":"ContainerDied","Data":"535f1a0d62083c8f1779cf07cdf7b6b338543f49b074ea77690edaf35b0bd836"}
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.370628 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="535f1a0d62083c8f1779cf07cdf7b6b338543f49b074ea77690edaf35b0bd836"
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.370681 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bfb6b"
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.372381 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-64xvf" event={"ID":"5f09e2f4-4f82-4388-9b5a-a9e890d3a950","Type":"ContainerDied","Data":"fa3bd3847d87a827b95ad3e0b17ca932be91f863713ca115297ee8fc7b29e228"}
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.372408 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa3bd3847d87a827b95ad3e0b17ca932be91f863713ca115297ee8fc7b29e228"
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.372390 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-64xvf"
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.376386 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7hqpw"
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.376562 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7hqpw" event={"ID":"584d2818-4b22-468f-b296-bd1850c7915b","Type":"ContainerDied","Data":"4a12c0a38209ede38a672765a10b8d7b9b038d2da32f90548d5d8030e926a912"}
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.376599 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a12c0a38209ede38a672765a10b8d7b9b038d2da32f90548d5d8030e926a912"
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.725969 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ab0c-account-create-update-tr7hx"
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.771270 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a750-account-create-update-7wk26"
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.796758 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c84aa49-2900-4a14-b81b-bb03e925d1b7-operator-scripts\") pod \"6c84aa49-2900-4a14-b81b-bb03e925d1b7\" (UID: \"6c84aa49-2900-4a14-b81b-bb03e925d1b7\") "
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.796908 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpvp9\" (UniqueName: \"kubernetes.io/projected/6c84aa49-2900-4a14-b81b-bb03e925d1b7-kube-api-access-wpvp9\") pod \"6c84aa49-2900-4a14-b81b-bb03e925d1b7\" (UID: \"6c84aa49-2900-4a14-b81b-bb03e925d1b7\") "
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.798318 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c84aa49-2900-4a14-b81b-bb03e925d1b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c84aa49-2900-4a14-b81b-bb03e925d1b7" (UID: "6c84aa49-2900-4a14-b81b-bb03e925d1b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.801588 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c84aa49-2900-4a14-b81b-bb03e925d1b7-kube-api-access-wpvp9" (OuterVolumeSpecName: "kube-api-access-wpvp9") pod "6c84aa49-2900-4a14-b81b-bb03e925d1b7" (UID: "6c84aa49-2900-4a14-b81b-bb03e925d1b7"). InnerVolumeSpecName "kube-api-access-wpvp9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.900581 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wds8g\" (UniqueName: \"kubernetes.io/projected/c4f6b362-7670-4867-b8fa-1f4c6170389f-kube-api-access-wds8g\") pod \"c4f6b362-7670-4867-b8fa-1f4c6170389f\" (UID: \"c4f6b362-7670-4867-b8fa-1f4c6170389f\") "
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.903334 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f6b362-7670-4867-b8fa-1f4c6170389f-operator-scripts\") pod \"c4f6b362-7670-4867-b8fa-1f4c6170389f\" (UID: \"c4f6b362-7670-4867-b8fa-1f4c6170389f\") "
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.903772 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f6b362-7670-4867-b8fa-1f4c6170389f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4f6b362-7670-4867-b8fa-1f4c6170389f" (UID: "c4f6b362-7670-4867-b8fa-1f4c6170389f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.904161 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpvp9\" (UniqueName: \"kubernetes.io/projected/6c84aa49-2900-4a14-b81b-bb03e925d1b7-kube-api-access-wpvp9\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.904191 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f6b362-7670-4867-b8fa-1f4c6170389f-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.904204 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c84aa49-2900-4a14-b81b-bb03e925d1b7-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:20 crc kubenswrapper[4632]: I0313 10:24:20.905761 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f6b362-7670-4867-b8fa-1f4c6170389f-kube-api-access-wds8g" (OuterVolumeSpecName: "kube-api-access-wds8g") pod "c4f6b362-7670-4867-b8fa-1f4c6170389f" (UID: "c4f6b362-7670-4867-b8fa-1f4c6170389f"). InnerVolumeSpecName "kube-api-access-wds8g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:24:21 crc kubenswrapper[4632]: I0313 10:24:21.005920 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wds8g\" (UniqueName: \"kubernetes.io/projected/c4f6b362-7670-4867-b8fa-1f4c6170389f-kube-api-access-wds8g\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:21 crc kubenswrapper[4632]: I0313 10:24:21.386169 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ab0c-account-create-update-tr7hx" event={"ID":"6c84aa49-2900-4a14-b81b-bb03e925d1b7","Type":"ContainerDied","Data":"c64cf5189ab7a5f49e61bd270ad953bb57c962bfc193a6a4df7169ba439b5084"}
Mar 13 10:24:21 crc kubenswrapper[4632]: I0313 10:24:21.386828 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c64cf5189ab7a5f49e61bd270ad953bb57c962bfc193a6a4df7169ba439b5084"
Mar 13 10:24:21 crc kubenswrapper[4632]: I0313 10:24:21.386331 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ab0c-account-create-update-tr7hx"
Mar 13 10:24:21 crc kubenswrapper[4632]: I0313 10:24:21.387897 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a750-account-create-update-7wk26" event={"ID":"c4f6b362-7670-4867-b8fa-1f4c6170389f","Type":"ContainerDied","Data":"8da340b9259c4e3950c1e2300189828fc046a0a3f7a7546b25adfdd801a1d232"}
Mar 13 10:24:21 crc kubenswrapper[4632]: I0313 10:24:21.387985 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8da340b9259c4e3950c1e2300189828fc046a0a3f7a7546b25adfdd801a1d232"
Mar 13 10:24:21 crc kubenswrapper[4632]: I0313 10:24:21.388039 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a750-account-create-update-7wk26"
Mar 13 10:24:21 crc kubenswrapper[4632]: I0313 10:24:21.713072 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.067984 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lrjmj"]
Mar 13 10:24:22 crc kubenswrapper[4632]: E0313 10:24:22.068391 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e353045-e09b-4cd2-b659-1954485ec8db" containerName="mariadb-database-create"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.068411 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e353045-e09b-4cd2-b659-1954485ec8db" containerName="mariadb-database-create"
Mar 13 10:24:22 crc kubenswrapper[4632]: E0313 10:24:22.068436 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c84aa49-2900-4a14-b81b-bb03e925d1b7" containerName="mariadb-account-create-update"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.068444 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c84aa49-2900-4a14-b81b-bb03e925d1b7" containerName="mariadb-account-create-update"
Mar 13 10:24:22 crc kubenswrapper[4632]: E0313 10:24:22.068476 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f09e2f4-4f82-4388-9b5a-a9e890d3a950" containerName="mariadb-database-create"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.068485 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f09e2f4-4f82-4388-9b5a-a9e890d3a950" containerName="mariadb-database-create"
Mar 13 10:24:22 crc kubenswrapper[4632]: E0313 10:24:22.068498 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584d2818-4b22-468f-b296-bd1850c7915b" containerName="mariadb-database-create"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.068508 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="584d2818-4b22-468f-b296-bd1850c7915b" containerName="mariadb-database-create"
Mar 13 10:24:22 crc kubenswrapper[4632]: E0313 10:24:22.068525 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a362448e-8daa-4bf4-958f-f3ca135be228" containerName="dnsmasq-dns"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.068535 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="a362448e-8daa-4bf4-958f-f3ca135be228" containerName="dnsmasq-dns"
Mar 13 10:24:22 crc kubenswrapper[4632]: E0313 10:24:22.068549 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a362448e-8daa-4bf4-958f-f3ca135be228" containerName="init"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.068559 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="a362448e-8daa-4bf4-958f-f3ca135be228" containerName="init"
Mar 13 10:24:22 crc kubenswrapper[4632]: E0313 10:24:22.068569 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f6b362-7670-4867-b8fa-1f4c6170389f" containerName="mariadb-account-create-update"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.068578 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f6b362-7670-4867-b8fa-1f4c6170389f" containerName="mariadb-account-create-update"
Mar 13 10:24:22 crc kubenswrapper[4632]: E0313 10:24:22.068597 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2610abab-1da4-4912-9ca7-f2aa2d7c0486" containerName="mariadb-account-create-update"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.068605 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="2610abab-1da4-4912-9ca7-f2aa2d7c0486" containerName="mariadb-account-create-update"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.068811 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e353045-e09b-4cd2-b659-1954485ec8db" containerName="mariadb-database-create"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.068830 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c84aa49-2900-4a14-b81b-bb03e925d1b7" containerName="mariadb-account-create-update"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.068841 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="a362448e-8daa-4bf4-958f-f3ca135be228" containerName="dnsmasq-dns"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.068849 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f6b362-7670-4867-b8fa-1f4c6170389f" containerName="mariadb-account-create-update"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.068861 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="584d2818-4b22-468f-b296-bd1850c7915b" containerName="mariadb-database-create"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.068872 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="2610abab-1da4-4912-9ca7-f2aa2d7c0486" containerName="mariadb-account-create-update"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.068884 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f09e2f4-4f82-4388-9b5a-a9e890d3a950" containerName="mariadb-database-create"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.069558 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lrjmj"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.074987 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.081255 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lrjmj"]
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.225217 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d670715-74f3-46a6-974c-b6953af9fdb7-operator-scripts\") pod \"root-account-create-update-lrjmj\" (UID: \"2d670715-74f3-46a6-974c-b6953af9fdb7\") " pod="openstack/root-account-create-update-lrjmj"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.225301 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhtfx\" (UniqueName: \"kubernetes.io/projected/2d670715-74f3-46a6-974c-b6953af9fdb7-kube-api-access-qhtfx\") pod \"root-account-create-update-lrjmj\" (UID: \"2d670715-74f3-46a6-974c-b6953af9fdb7\") " pod="openstack/root-account-create-update-lrjmj"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.327456 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d670715-74f3-46a6-974c-b6953af9fdb7-operator-scripts\") pod \"root-account-create-update-lrjmj\" (UID: \"2d670715-74f3-46a6-974c-b6953af9fdb7\") " pod="openstack/root-account-create-update-lrjmj"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.328258 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhtfx\" (UniqueName: \"kubernetes.io/projected/2d670715-74f3-46a6-974c-b6953af9fdb7-kube-api-access-qhtfx\") pod \"root-account-create-update-lrjmj\" (UID: \"2d670715-74f3-46a6-974c-b6953af9fdb7\") " pod="openstack/root-account-create-update-lrjmj"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.328404 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d670715-74f3-46a6-974c-b6953af9fdb7-operator-scripts\") pod \"root-account-create-update-lrjmj\" (UID: \"2d670715-74f3-46a6-974c-b6953af9fdb7\") " pod="openstack/root-account-create-update-lrjmj"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.353165 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhtfx\" (UniqueName: \"kubernetes.io/projected/2d670715-74f3-46a6-974c-b6953af9fdb7-kube-api-access-qhtfx\") pod \"root-account-create-update-lrjmj\" (UID: \"2d670715-74f3-46a6-974c-b6953af9fdb7\") " pod="openstack/root-account-create-update-lrjmj"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.392035 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lrjmj"
Mar 13 10:24:22 crc kubenswrapper[4632]: I0313 10:24:22.902378 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lrjmj"]
Mar 13 10:24:22 crc kubenswrapper[4632]: W0313 10:24:22.912282 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d670715_74f3_46a6_974c_b6953af9fdb7.slice/crio-4e87d116be82121d7fc1873346d45f93e13e538ff7fb3e09adc605646d99d9ef WatchSource:0}: Error finding container 4e87d116be82121d7fc1873346d45f93e13e538ff7fb3e09adc605646d99d9ef: Status 404 returned error can't find the container with id 4e87d116be82121d7fc1873346d45f93e13e538ff7fb3e09adc605646d99d9ef
Mar 13 10:24:23 crc kubenswrapper[4632]: I0313 10:24:23.406745 4632 generic.go:334] "Generic (PLEG): container finished" podID="2d670715-74f3-46a6-974c-b6953af9fdb7" containerID="9bfb87771985986bb5edbb713355c76b663fe8b23df1170e73c42c65479f44df" exitCode=0
Mar 13 10:24:23 crc kubenswrapper[4632]: I0313 10:24:23.407048 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lrjmj" event={"ID":"2d670715-74f3-46a6-974c-b6953af9fdb7","Type":"ContainerDied","Data":"9bfb87771985986bb5edbb713355c76b663fe8b23df1170e73c42c65479f44df"}
Mar 13 10:24:23 crc kubenswrapper[4632]: I0313 10:24:23.407074 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lrjmj" event={"ID":"2d670715-74f3-46a6-974c-b6953af9fdb7","Type":"ContainerStarted","Data":"4e87d116be82121d7fc1873346d45f93e13e538ff7fb3e09adc605646d99d9ef"}
Mar 13 10:24:23 crc kubenswrapper[4632]: I0313 10:24:23.553014 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0"
Mar 13 10:24:23 crc kubenswrapper[4632]: E0313 10:24:23.553226 4632 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 13 10:24:23 crc kubenswrapper[4632]: E0313 10:24:23.554449 4632 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 13 10:24:23 crc kubenswrapper[4632]: E0313 10:24:23.554511 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift podName:e37b3d77-de2e-4be9-9984-550d4ba0f2f0 nodeName:}" failed. No retries permitted until 2026-03-13 10:24:39.554493624 +0000 UTC m=+1253.577023757 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift") pod "swift-storage-0" (UID: "e37b3d77-de2e-4be9-9984-550d4ba0f2f0") : configmap "swift-ring-files" not found
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.280853 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-l6hpb"]
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.281876 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-l6hpb"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.285829 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.292379 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-l6hpb"]
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.295364 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qpd5p"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.367200 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjbxx\" (UniqueName: \"kubernetes.io/projected/4f1c5663-463b-45e2-b200-64e73e6d5698-kube-api-access-fjbxx\") pod \"glance-db-sync-l6hpb\" (UID: \"4f1c5663-463b-45e2-b200-64e73e6d5698\") " pod="openstack/glance-db-sync-l6hpb"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.367253 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-config-data\") pod \"glance-db-sync-l6hpb\" (UID: \"4f1c5663-463b-45e2-b200-64e73e6d5698\") " pod="openstack/glance-db-sync-l6hpb"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.367326 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-db-sync-config-data\") pod \"glance-db-sync-l6hpb\" (UID: \"4f1c5663-463b-45e2-b200-64e73e6d5698\") " pod="openstack/glance-db-sync-l6hpb"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.367345 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-combined-ca-bundle\") pod \"glance-db-sync-l6hpb\" (UID: \"4f1c5663-463b-45e2-b200-64e73e6d5698\") " pod="openstack/glance-db-sync-l6hpb"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.469295 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-db-sync-config-data\") pod \"glance-db-sync-l6hpb\" (UID: \"4f1c5663-463b-45e2-b200-64e73e6d5698\") " pod="openstack/glance-db-sync-l6hpb"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.469339 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-combined-ca-bundle\") pod \"glance-db-sync-l6hpb\" (UID: \"4f1c5663-463b-45e2-b200-64e73e6d5698\") " pod="openstack/glance-db-sync-l6hpb"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.469434 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjbxx\" (UniqueName: \"kubernetes.io/projected/4f1c5663-463b-45e2-b200-64e73e6d5698-kube-api-access-fjbxx\") pod \"glance-db-sync-l6hpb\" (UID: \"4f1c5663-463b-45e2-b200-64e73e6d5698\") " pod="openstack/glance-db-sync-l6hpb"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.469460 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-config-data\") pod \"glance-db-sync-l6hpb\" (UID: \"4f1c5663-463b-45e2-b200-64e73e6d5698\") " pod="openstack/glance-db-sync-l6hpb"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.478658 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-db-sync-config-data\") pod \"glance-db-sync-l6hpb\" (UID: \"4f1c5663-463b-45e2-b200-64e73e6d5698\") " pod="openstack/glance-db-sync-l6hpb"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.487771 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-config-data\") pod \"glance-db-sync-l6hpb\" (UID: \"4f1c5663-463b-45e2-b200-64e73e6d5698\") " pod="openstack/glance-db-sync-l6hpb"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.490387 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-combined-ca-bundle\") pod \"glance-db-sync-l6hpb\" (UID: \"4f1c5663-463b-45e2-b200-64e73e6d5698\") " pod="openstack/glance-db-sync-l6hpb"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.493619 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjbxx\" (UniqueName: \"kubernetes.io/projected/4f1c5663-463b-45e2-b200-64e73e6d5698-kube-api-access-fjbxx\") pod \"glance-db-sync-l6hpb\" (UID: \"4f1c5663-463b-45e2-b200-64e73e6d5698\") " pod="openstack/glance-db-sync-l6hpb"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.597363 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-l6hpb"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.825145 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lrjmj"
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.977508 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhtfx\" (UniqueName: \"kubernetes.io/projected/2d670715-74f3-46a6-974c-b6953af9fdb7-kube-api-access-qhtfx\") pod \"2d670715-74f3-46a6-974c-b6953af9fdb7\" (UID: \"2d670715-74f3-46a6-974c-b6953af9fdb7\") "
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.977714 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d670715-74f3-46a6-974c-b6953af9fdb7-operator-scripts\") pod \"2d670715-74f3-46a6-974c-b6953af9fdb7\" (UID: \"2d670715-74f3-46a6-974c-b6953af9fdb7\") "
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.979469 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d670715-74f3-46a6-974c-b6953af9fdb7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d670715-74f3-46a6-974c-b6953af9fdb7" (UID: "2d670715-74f3-46a6-974c-b6953af9fdb7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:24:24 crc kubenswrapper[4632]: I0313 10:24:24.986742 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d670715-74f3-46a6-974c-b6953af9fdb7-kube-api-access-qhtfx" (OuterVolumeSpecName: "kube-api-access-qhtfx") pod "2d670715-74f3-46a6-974c-b6953af9fdb7" (UID: "2d670715-74f3-46a6-974c-b6953af9fdb7"). InnerVolumeSpecName "kube-api-access-qhtfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:24:25 crc kubenswrapper[4632]: I0313 10:24:25.079884 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhtfx\" (UniqueName: \"kubernetes.io/projected/2d670715-74f3-46a6-974c-b6953af9fdb7-kube-api-access-qhtfx\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:25 crc kubenswrapper[4632]: I0313 10:24:25.080233 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d670715-74f3-46a6-974c-b6953af9fdb7-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 10:24:25 crc kubenswrapper[4632]: I0313 10:24:25.266886 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-l6hpb"]
Mar 13 10:24:25 crc kubenswrapper[4632]: W0313 10:24:25.285401 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f1c5663_463b_45e2_b200_64e73e6d5698.slice/crio-0a5d62eda0a21b4de62c912c034c3914a852ed117fa1d5a908a4b0e7b70dc6a3 WatchSource:0}: Error finding container 0a5d62eda0a21b4de62c912c034c3914a852ed117fa1d5a908a4b0e7b70dc6a3: Status 404 returned error can't find the container with id 0a5d62eda0a21b4de62c912c034c3914a852ed117fa1d5a908a4b0e7b70dc6a3
Mar 13 10:24:25 crc kubenswrapper[4632]: I0313 10:24:25.424196 4632 generic.go:334] "Generic (PLEG): container finished" podID="bc39c52e-008f-40c1-b93b-532707127fcd" containerID="c1060b701ea818cb4c5d1e5e94618270eed048e8bd16d50775b43c9b34c6b1b9" exitCode=0
Mar 13 10:24:25 crc kubenswrapper[4632]: I0313 10:24:25.424275 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mkdcg" event={"ID":"bc39c52e-008f-40c1-b93b-532707127fcd","Type":"ContainerDied","Data":"c1060b701ea818cb4c5d1e5e94618270eed048e8bd16d50775b43c9b34c6b1b9"}
Mar 13 10:24:25 crc kubenswrapper[4632]: I0313 10:24:25.425727 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l6hpb" event={"ID":"4f1c5663-463b-45e2-b200-64e73e6d5698","Type":"ContainerStarted","Data":"0a5d62eda0a21b4de62c912c034c3914a852ed117fa1d5a908a4b0e7b70dc6a3"}
Mar 13 10:24:25 crc kubenswrapper[4632]: I0313 10:24:25.427711 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lrjmj" event={"ID":"2d670715-74f3-46a6-974c-b6953af9fdb7","Type":"ContainerDied","Data":"4e87d116be82121d7fc1873346d45f93e13e538ff7fb3e09adc605646d99d9ef"}
Mar 13 10:24:25 crc kubenswrapper[4632]: I0313 10:24:25.427752 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e87d116be82121d7fc1873346d45f93e13e538ff7fb3e09adc605646d99d9ef"
Mar 13 10:24:25 crc kubenswrapper[4632]: I0313 10:24:25.427762 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lrjmj"
Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.802904 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mkdcg"
Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.928397 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-swiftconf\") pod \"bc39c52e-008f-40c1-b93b-532707127fcd\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") "
Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.928480 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-dispersionconf\") pod \"bc39c52e-008f-40c1-b93b-532707127fcd\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") "
Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.928596 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bc39c52e-008f-40c1-b93b-532707127fcd-ring-data-devices\") pod \"bc39c52e-008f-40c1-b93b-532707127fcd\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") "
Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.928634 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc39c52e-008f-40c1-b93b-532707127fcd-scripts\") pod \"bc39c52e-008f-40c1-b93b-532707127fcd\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") "
Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.928695 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bc39c52e-008f-40c1-b93b-532707127fcd-etc-swift\") pod \"bc39c52e-008f-40c1-b93b-532707127fcd\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") "
Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.928735 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7f9b\" (UniqueName: \"kubernetes.io/projected/bc39c52e-008f-40c1-b93b-532707127fcd-kube-api-access-v7f9b\") pod \"bc39c52e-008f-40c1-b93b-532707127fcd\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") "
Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.928786 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-combined-ca-bundle\") pod \"bc39c52e-008f-40c1-b93b-532707127fcd\" (UID: \"bc39c52e-008f-40c1-b93b-532707127fcd\") "
Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.929365 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc39c52e-008f-40c1-b93b-532707127fcd-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bc39c52e-008f-40c1-b93b-532707127fcd" (UID: "bc39c52e-008f-40c1-b93b-532707127fcd"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.929672 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc39c52e-008f-40c1-b93b-532707127fcd-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bc39c52e-008f-40c1-b93b-532707127fcd" (UID: "bc39c52e-008f-40c1-b93b-532707127fcd"). InnerVolumeSpecName "etc-swift".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.930278 4632 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bc39c52e-008f-40c1-b93b-532707127fcd-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.930296 4632 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bc39c52e-008f-40c1-b93b-532707127fcd-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.933858 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc39c52e-008f-40c1-b93b-532707127fcd-kube-api-access-v7f9b" (OuterVolumeSpecName: "kube-api-access-v7f9b") pod "bc39c52e-008f-40c1-b93b-532707127fcd" (UID: "bc39c52e-008f-40c1-b93b-532707127fcd"). InnerVolumeSpecName "kube-api-access-v7f9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.950055 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc39c52e-008f-40c1-b93b-532707127fcd-scripts" (OuterVolumeSpecName: "scripts") pod "bc39c52e-008f-40c1-b93b-532707127fcd" (UID: "bc39c52e-008f-40c1-b93b-532707127fcd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.955542 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bc39c52e-008f-40c1-b93b-532707127fcd" (UID: "bc39c52e-008f-40c1-b93b-532707127fcd"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.970766 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bc39c52e-008f-40c1-b93b-532707127fcd" (UID: "bc39c52e-008f-40c1-b93b-532707127fcd"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:24:26 crc kubenswrapper[4632]: I0313 10:24:26.973268 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc39c52e-008f-40c1-b93b-532707127fcd" (UID: "bc39c52e-008f-40c1-b93b-532707127fcd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:24:27 crc kubenswrapper[4632]: I0313 10:24:27.031485 4632 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:27 crc kubenswrapper[4632]: I0313 10:24:27.031527 4632 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:27 crc kubenswrapper[4632]: I0313 10:24:27.031540 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc39c52e-008f-40c1-b93b-532707127fcd-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:27 crc kubenswrapper[4632]: I0313 10:24:27.031552 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7f9b\" (UniqueName: \"kubernetes.io/projected/bc39c52e-008f-40c1-b93b-532707127fcd-kube-api-access-v7f9b\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:27 crc kubenswrapper[4632]: I0313 10:24:27.031563 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc39c52e-008f-40c1-b93b-532707127fcd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:27 crc kubenswrapper[4632]: I0313 10:24:27.449188 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mkdcg" event={"ID":"bc39c52e-008f-40c1-b93b-532707127fcd","Type":"ContainerDied","Data":"b9c30e7d71115b424270718b1169d7d7c69bba98c01559c66a08b1a331e3ccdd"} Mar 13 10:24:27 crc kubenswrapper[4632]: I0313 10:24:27.449237 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9c30e7d71115b424270718b1169d7d7c69bba98c01559c66a08b1a331e3ccdd" Mar 13 10:24:27 crc kubenswrapper[4632]: I0313 10:24:27.449300 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-mkdcg" Mar 13 10:24:28 crc kubenswrapper[4632]: I0313 10:24:28.462253 4632 generic.go:334] "Generic (PLEG): container finished" podID="159c6cee-c82b-4725-82d6-dbd27216f53c" containerID="d5bd67d741203861cfd1afa23ec3f20fd6236a99625563ac3c10816dbb2a6677" exitCode=0 Mar 13 10:24:28 crc kubenswrapper[4632]: I0313 10:24:28.462818 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"159c6cee-c82b-4725-82d6-dbd27216f53c","Type":"ContainerDied","Data":"d5bd67d741203861cfd1afa23ec3f20fd6236a99625563ac3c10816dbb2a6677"} Mar 13 10:24:28 crc kubenswrapper[4632]: I0313 10:24:28.465827 4632 generic.go:334] "Generic (PLEG): container finished" podID="211718f0-f29c-457b-bc2b-487bb76d4801" containerID="92d546a480b1e583e7b11dc48ab2d570a4a8d7af0616de2352d72ca175520f17" exitCode=0 Mar 13 10:24:28 crc kubenswrapper[4632]: I0313 10:24:28.465869 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"211718f0-f29c-457b-bc2b-487bb76d4801","Type":"ContainerDied","Data":"92d546a480b1e583e7b11dc48ab2d570a4a8d7af0616de2352d72ca175520f17"} Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.475088 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"159c6cee-c82b-4725-82d6-dbd27216f53c","Type":"ContainerStarted","Data":"d8fa91cb90a686638520d703bb5ab925cd9f40c680cdbe53067f753945b6ae3f"} Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.475384 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.478919 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"211718f0-f29c-457b-bc2b-487bb76d4801","Type":"ContainerStarted","Data":"40d92cf95f1cc26685e0359414b43dbdc31eeb90ab4b39c564b241d3fcc263fe"} Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.479213 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.516492 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.349846911 podStartE2EDuration="1m20.516476465s" podCreationTimestamp="2026-03-13 10:23:09 +0000 UTC" firstStartedPulling="2026-03-13 10:23:11.720523443 +0000 UTC m=+1165.743053576" lastFinishedPulling="2026-03-13 10:23:54.887152997 +0000 UTC m=+1208.909683130" observedRunningTime="2026-03-13 10:24:29.513441459 +0000 UTC m=+1243.535971592" watchObservedRunningTime="2026-03-13 10:24:29.516476465 +0000 UTC m=+1243.539006598" Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.531826 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9kd7r" podUID="eab798dd-482a-4c66-983b-908966cd1f94" containerName="ovn-controller" probeResult="failure" output=< Mar 13 10:24:29 crc kubenswrapper[4632]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 13 10:24:29 crc kubenswrapper[4632]: > Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.541037 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.786566195 podStartE2EDuration="1m20.541023066s" podCreationTimestamp="2026-03-13 10:23:09 +0000 UTC" firstStartedPulling="2026-03-13 10:23:12.100430705 +0000 
UTC m=+1166.122960838" lastFinishedPulling="2026-03-13 10:23:54.854887576 +0000 UTC m=+1208.877417709" observedRunningTime="2026-03-13 10:24:29.534485606 +0000 UTC m=+1243.557015739" watchObservedRunningTime="2026-03-13 10:24:29.541023066 +0000 UTC m=+1243.563553189" Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.632351 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.646500 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-c5xnp" Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.884393 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9kd7r-config-d8xq4"] Mar 13 10:24:29 crc kubenswrapper[4632]: E0313 10:24:29.884778 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc39c52e-008f-40c1-b93b-532707127fcd" containerName="swift-ring-rebalance" Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.884795 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc39c52e-008f-40c1-b93b-532707127fcd" containerName="swift-ring-rebalance" Mar 13 10:24:29 crc kubenswrapper[4632]: E0313 10:24:29.884809 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d670715-74f3-46a6-974c-b6953af9fdb7" containerName="mariadb-account-create-update" Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.884817 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d670715-74f3-46a6-974c-b6953af9fdb7" containerName="mariadb-account-create-update" Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.885061 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d670715-74f3-46a6-974c-b6953af9fdb7" containerName="mariadb-account-create-update" Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.885079 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc39c52e-008f-40c1-b93b-532707127fcd" containerName="swift-ring-rebalance" Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.885704 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.890728 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 13 10:24:29 crc kubenswrapper[4632]: I0313 10:24:29.908785 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9kd7r-config-d8xq4"] Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.004373 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-log-ovn\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.004518 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-run\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.004692 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dae62c5-076b-4a06-9d12-c955d9131ef3-scripts\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.004778 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-run-ovn\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.005007 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8dae62c5-076b-4a06-9d12-c955d9131ef3-additional-scripts\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.005079 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2fkr\" (UniqueName: \"kubernetes.io/projected/8dae62c5-076b-4a06-9d12-c955d9131ef3-kube-api-access-q2fkr\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.106415 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8dae62c5-076b-4a06-9d12-c955d9131ef3-additional-scripts\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.106482 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2fkr\" (UniqueName: 
\"kubernetes.io/projected/8dae62c5-076b-4a06-9d12-c955d9131ef3-kube-api-access-q2fkr\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.106575 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-log-ovn\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.106628 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-run\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.106703 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dae62c5-076b-4a06-9d12-c955d9131ef3-scripts\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.106766 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-run-ovn\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.106912 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-log-ovn\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.106987 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-run-ovn\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.107499 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-run\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.108487 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8dae62c5-076b-4a06-9d12-c955d9131ef3-additional-scripts\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.109256 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8dae62c5-076b-4a06-9d12-c955d9131ef3-scripts\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.129677 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2fkr\" (UniqueName: \"kubernetes.io/projected/8dae62c5-076b-4a06-9d12-c955d9131ef3-kube-api-access-q2fkr\") pod \"ovn-controller-9kd7r-config-d8xq4\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.209680 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:30 crc kubenswrapper[4632]: I0313 10:24:30.913692 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9kd7r-config-d8xq4"] Mar 13 10:24:31 crc kubenswrapper[4632]: I0313 10:24:31.538575 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9kd7r-config-d8xq4" event={"ID":"8dae62c5-076b-4a06-9d12-c955d9131ef3","Type":"ContainerStarted","Data":"572bb794023bd7d53a23050c721933f004db547126df9eaf9b5f8e767603f2d3"} Mar 13 10:24:31 crc kubenswrapper[4632]: I0313 10:24:31.538888 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9kd7r-config-d8xq4" event={"ID":"8dae62c5-076b-4a06-9d12-c955d9131ef3","Type":"ContainerStarted","Data":"f1388733facb7a844093a7b6aa30ee0201fcf9db5bf9311d8265080c24484714"} Mar 13 10:24:31 crc kubenswrapper[4632]: I0313 10:24:31.557970 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9kd7r-config-d8xq4" podStartSLOduration=2.557928129 podStartE2EDuration="2.557928129s" podCreationTimestamp="2026-03-13 10:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:24:31.555880268 +0000 UTC m=+1245.578410401" watchObservedRunningTime="2026-03-13 10:24:31.557928129 +0000 UTC m=+1245.580458262" Mar 13 10:24:32 crc kubenswrapper[4632]: I0313 10:24:32.567253 4632 generic.go:334] "Generic (PLEG): container finished" podID="8dae62c5-076b-4a06-9d12-c955d9131ef3" containerID="572bb794023bd7d53a23050c721933f004db547126df9eaf9b5f8e767603f2d3" exitCode=0 Mar 13 10:24:32 crc kubenswrapper[4632]: I0313 10:24:32.567251 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9kd7r-config-d8xq4" event={"ID":"8dae62c5-076b-4a06-9d12-c955d9131ef3","Type":"ContainerDied","Data":"572bb794023bd7d53a23050c721933f004db547126df9eaf9b5f8e767603f2d3"} Mar 13 10:24:33 crc kubenswrapper[4632]: I0313 10:24:33.968447 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.098877 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-run\") pod \"8dae62c5-076b-4a06-9d12-c955d9131ef3\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.098984 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-run" (OuterVolumeSpecName: "var-run") pod "8dae62c5-076b-4a06-9d12-c955d9131ef3" (UID: "8dae62c5-076b-4a06-9d12-c955d9131ef3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.099121 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8dae62c5-076b-4a06-9d12-c955d9131ef3-additional-scripts\") pod \"8dae62c5-076b-4a06-9d12-c955d9131ef3\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.099166 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dae62c5-076b-4a06-9d12-c955d9131ef3-scripts\") pod \"8dae62c5-076b-4a06-9d12-c955d9131ef3\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.099188 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-log-ovn\") pod \"8dae62c5-076b-4a06-9d12-c955d9131ef3\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.099213 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2fkr\" (UniqueName: \"kubernetes.io/projected/8dae62c5-076b-4a06-9d12-c955d9131ef3-kube-api-access-q2fkr\") pod \"8dae62c5-076b-4a06-9d12-c955d9131ef3\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.099303 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8dae62c5-076b-4a06-9d12-c955d9131ef3" (UID: "8dae62c5-076b-4a06-9d12-c955d9131ef3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.099349 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-run-ovn\") pod \"8dae62c5-076b-4a06-9d12-c955d9131ef3\" (UID: \"8dae62c5-076b-4a06-9d12-c955d9131ef3\") " Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.099489 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8dae62c5-076b-4a06-9d12-c955d9131ef3" (UID: "8dae62c5-076b-4a06-9d12-c955d9131ef3"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.099828 4632 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-run\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.099870 4632 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.099880 4632 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dae62c5-076b-4a06-9d12-c955d9131ef3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.100197 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dae62c5-076b-4a06-9d12-c955d9131ef3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "8dae62c5-076b-4a06-9d12-c955d9131ef3" (UID: "8dae62c5-076b-4a06-9d12-c955d9131ef3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.100521 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dae62c5-076b-4a06-9d12-c955d9131ef3-scripts" (OuterVolumeSpecName: "scripts") pod "8dae62c5-076b-4a06-9d12-c955d9131ef3" (UID: "8dae62c5-076b-4a06-9d12-c955d9131ef3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.109233 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dae62c5-076b-4a06-9d12-c955d9131ef3-kube-api-access-q2fkr" (OuterVolumeSpecName: "kube-api-access-q2fkr") pod "8dae62c5-076b-4a06-9d12-c955d9131ef3" (UID: "8dae62c5-076b-4a06-9d12-c955d9131ef3"). InnerVolumeSpecName "kube-api-access-q2fkr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.201463 4632 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8dae62c5-076b-4a06-9d12-c955d9131ef3-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.201499 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dae62c5-076b-4a06-9d12-c955d9131ef3-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.201512 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2fkr\" (UniqueName: \"kubernetes.io/projected/8dae62c5-076b-4a06-9d12-c955d9131ef3-kube-api-access-q2fkr\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.539965 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-9kd7r" Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.591488 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9kd7r-config-d8xq4" event={"ID":"8dae62c5-076b-4a06-9d12-c955d9131ef3","Type":"ContainerDied","Data":"f1388733facb7a844093a7b6aa30ee0201fcf9db5bf9311d8265080c24484714"} Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.591545 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1388733facb7a844093a7b6aa30ee0201fcf9db5bf9311d8265080c24484714" Mar 13 10:24:34 crc kubenswrapper[4632]: I0313 10:24:34.591585 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9kd7r-config-d8xq4" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.156523 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9kd7r-config-d8xq4"] Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.178151 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9kd7r-config-d8xq4"] Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.337895 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9kd7r-config-jtj4g"] Mar 13 10:24:35 crc kubenswrapper[4632]: E0313 10:24:35.338361 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dae62c5-076b-4a06-9d12-c955d9131ef3" containerName="ovn-config" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.338387 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dae62c5-076b-4a06-9d12-c955d9131ef3" containerName="ovn-config" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.338624 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dae62c5-076b-4a06-9d12-c955d9131ef3" containerName="ovn-config" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.339269 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.343662 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.370869 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9kd7r-config-jtj4g"] Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.431668 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-log-ovn\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.431715 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-run\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.431752 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4j57\" (UniqueName: \"kubernetes.io/projected/67f877b5-12d3-4b48-a9eb-9ee2629e830a-kube-api-access-n4j57\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.431776 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67f877b5-12d3-4b48-a9eb-9ee2629e830a-scripts\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.431830 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-run-ovn\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.431864 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/67f877b5-12d3-4b48-a9eb-9ee2629e830a-additional-scripts\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.533668 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/67f877b5-12d3-4b48-a9eb-9ee2629e830a-additional-scripts\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.533777 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-log-ovn\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.533807 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-run\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.533848 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4j57\" (UniqueName: \"kubernetes.io/projected/67f877b5-12d3-4b48-a9eb-9ee2629e830a-kube-api-access-n4j57\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.533880 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67f877b5-12d3-4b48-a9eb-9ee2629e830a-scripts\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.533967 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-run-ovn\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.534304 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-run-ovn\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.535428 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/67f877b5-12d3-4b48-a9eb-9ee2629e830a-additional-scripts\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.535500 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-log-ovn\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.535548 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-run\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.537786 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/67f877b5-12d3-4b48-a9eb-9ee2629e830a-scripts\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.580983 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4j57\" (UniqueName: \"kubernetes.io/projected/67f877b5-12d3-4b48-a9eb-9ee2629e830a-kube-api-access-n4j57\") pod \"ovn-controller-9kd7r-config-jtj4g\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:35 crc kubenswrapper[4632]: I0313 10:24:35.656046 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:36 crc kubenswrapper[4632]: I0313 10:24:36.056320 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dae62c5-076b-4a06-9d12-c955d9131ef3" path="/var/lib/kubelet/pods/8dae62c5-076b-4a06-9d12-c955d9131ef3/volumes" Mar 13 10:24:39 crc kubenswrapper[4632]: I0313 10:24:39.646230 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:39 crc kubenswrapper[4632]: I0313 10:24:39.654932 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e37b3d77-de2e-4be9-9984-550d4ba0f2f0-etc-swift\") pod \"swift-storage-0\" (UID: \"e37b3d77-de2e-4be9-9984-550d4ba0f2f0\") " pod="openstack/swift-storage-0" Mar 13 10:24:39 crc kubenswrapper[4632]: I0313 10:24:39.794973 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 13 10:24:40 crc kubenswrapper[4632]: I0313 10:24:40.460502 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:24:40 crc kubenswrapper[4632]: I0313 10:24:40.460568 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:24:40 crc kubenswrapper[4632]: I0313 10:24:40.460612 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:24:40 crc kubenswrapper[4632]: I0313 10:24:40.461302 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9a22f93dffae95945f5e47a3d15b0ebe11dc6b72712dcbe34fa0191ff687b27"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 10:24:40 crc kubenswrapper[4632]: I0313 10:24:40.461368 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://e9a22f93dffae95945f5e47a3d15b0ebe11dc6b72712dcbe34fa0191ff687b27" gracePeriod=600 Mar 13 10:24:40 crc kubenswrapper[4632]: I0313 10:24:40.637332 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:24:40 crc kubenswrapper[4632]: I0313 10:24:40.646780 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="e9a22f93dffae95945f5e47a3d15b0ebe11dc6b72712dcbe34fa0191ff687b27" exitCode=0 Mar 13 10:24:40 crc kubenswrapper[4632]: I0313 10:24:40.646809 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"e9a22f93dffae95945f5e47a3d15b0ebe11dc6b72712dcbe34fa0191ff687b27"} Mar 13 10:24:40 crc kubenswrapper[4632]: I0313 10:24:40.646832 4632 scope.go:117] "RemoveContainer" containerID="624a339b1e1f8b218223c2e3440b7f9925bb18567bb6def4fcf3bfc022198658" Mar 13 10:24:41 crc kubenswrapper[4632]: I0313 10:24:41.067219 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 13 10:24:42 crc kubenswrapper[4632]: I0313 10:24:42.846959 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-pnvjb"] Mar 13 10:24:42 crc kubenswrapper[4632]: I0313 10:24:42.847933 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-pnvjb" Mar 13 10:24:42 crc kubenswrapper[4632]: I0313 10:24:42.870014 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pnvjb"] Mar 13 10:24:42 crc kubenswrapper[4632]: I0313 10:24:42.921579 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb216b07-9809-4b8b-857b-ac1192747b9c-operator-scripts\") pod \"cinder-db-create-pnvjb\" (UID: \"cb216b07-9809-4b8b-857b-ac1192747b9c\") " pod="openstack/cinder-db-create-pnvjb" Mar 13 10:24:42 crc kubenswrapper[4632]: I0313 10:24:42.921670 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgncl\" (UniqueName: \"kubernetes.io/projected/cb216b07-9809-4b8b-857b-ac1192747b9c-kube-api-access-fgncl\") pod \"cinder-db-create-pnvjb\" (UID: \"cb216b07-9809-4b8b-857b-ac1192747b9c\") " pod="openstack/cinder-db-create-pnvjb" Mar 13 10:24:42 crc kubenswrapper[4632]: I0313 10:24:42.976836 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-21a0-account-create-update-4clr7"] Mar 13 10:24:42 crc kubenswrapper[4632]: I0313 10:24:42.981029 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-21a0-account-create-update-4clr7" Mar 13 10:24:42 crc kubenswrapper[4632]: I0313 10:24:42.994503 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 13 10:24:42 crc kubenswrapper[4632]: I0313 10:24:42.995717 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-21a0-account-create-update-4clr7"] Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.024058 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgncl\" (UniqueName: \"kubernetes.io/projected/cb216b07-9809-4b8b-857b-ac1192747b9c-kube-api-access-fgncl\") pod \"cinder-db-create-pnvjb\" (UID: \"cb216b07-9809-4b8b-857b-ac1192747b9c\") " pod="openstack/cinder-db-create-pnvjb" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.024185 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a03b92ea-cd2c-455d-a88e-1d57b958b138-operator-scripts\") pod \"cinder-21a0-account-create-update-4clr7\" (UID: \"a03b92ea-cd2c-455d-a88e-1d57b958b138\") " pod="openstack/cinder-21a0-account-create-update-4clr7" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.024334 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb216b07-9809-4b8b-857b-ac1192747b9c-operator-scripts\") pod \"cinder-db-create-pnvjb\" (UID: \"cb216b07-9809-4b8b-857b-ac1192747b9c\") " pod="openstack/cinder-db-create-pnvjb" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.024410 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skrpd\" (UniqueName: \"kubernetes.io/projected/a03b92ea-cd2c-455d-a88e-1d57b958b138-kube-api-access-skrpd\") pod \"cinder-21a0-account-create-update-4clr7\" (UID: \"a03b92ea-cd2c-455d-a88e-1d57b958b138\") " pod="openstack/cinder-21a0-account-create-update-4clr7" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.025687 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/cb216b07-9809-4b8b-857b-ac1192747b9c-operator-scripts\") pod \"cinder-db-create-pnvjb\" (UID: \"cb216b07-9809-4b8b-857b-ac1192747b9c\") " pod="openstack/cinder-db-create-pnvjb" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.080024 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgncl\" (UniqueName: \"kubernetes.io/projected/cb216b07-9809-4b8b-857b-ac1192747b9c-kube-api-access-fgncl\") pod \"cinder-db-create-pnvjb\" (UID: \"cb216b07-9809-4b8b-857b-ac1192747b9c\") " pod="openstack/cinder-db-create-pnvjb" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.126213 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a03b92ea-cd2c-455d-a88e-1d57b958b138-operator-scripts\") pod \"cinder-21a0-account-create-update-4clr7\" (UID: \"a03b92ea-cd2c-455d-a88e-1d57b958b138\") " pod="openstack/cinder-21a0-account-create-update-4clr7" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.126657 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skrpd\" (UniqueName: \"kubernetes.io/projected/a03b92ea-cd2c-455d-a88e-1d57b958b138-kube-api-access-skrpd\") pod \"cinder-21a0-account-create-update-4clr7\" (UID: \"a03b92ea-cd2c-455d-a88e-1d57b958b138\") " pod="openstack/cinder-21a0-account-create-update-4clr7" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.127651 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a03b92ea-cd2c-455d-a88e-1d57b958b138-operator-scripts\") pod \"cinder-21a0-account-create-update-4clr7\" (UID: \"a03b92ea-cd2c-455d-a88e-1d57b958b138\") " pod="openstack/cinder-21a0-account-create-update-4clr7" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.159089 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skrpd\" (UniqueName: \"kubernetes.io/projected/a03b92ea-cd2c-455d-a88e-1d57b958b138-kube-api-access-skrpd\") pod \"cinder-21a0-account-create-update-4clr7\" (UID: \"a03b92ea-cd2c-455d-a88e-1d57b958b138\") " pod="openstack/cinder-21a0-account-create-update-4clr7" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.171789 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pnvjb" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.306049 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-21a0-account-create-update-4clr7" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.314238 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-kp87n"] Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.315259 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-kp87n" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.322717 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-kp87n"] Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.409254 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-mq9np"] Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.410191 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-mq9np" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.415910 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.416931 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.419657 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-llpcf" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.422785 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.432130 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcdcfad1-d735-4b55-ae65-0ce16bdbc79d-operator-scripts\") pod \"heat-db-create-kp87n\" (UID: \"bcdcfad1-d735-4b55-ae65-0ce16bdbc79d\") " pod="openstack/heat-db-create-kp87n" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.432414 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdvfj\" (UniqueName: \"kubernetes.io/projected/bcdcfad1-d735-4b55-ae65-0ce16bdbc79d-kube-api-access-kdvfj\") pod \"heat-db-create-kp87n\" (UID: \"bcdcfad1-d735-4b55-ae65-0ce16bdbc79d\") " pod="openstack/heat-db-create-kp87n" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.441486 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-mq9np"] Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.474151 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-b742-account-create-update-gfdkg"] Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.475089 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-b742-account-create-update-gfdkg" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.482159 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.513432 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-dwf4b"] Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.514732 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dwf4b" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.531545 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-b742-account-create-update-gfdkg"] Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.534291 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hh8q\" (UniqueName: \"kubernetes.io/projected/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-kube-api-access-5hh8q\") pod \"keystone-db-sync-mq9np\" (UID: \"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419\") " pod="openstack/keystone-db-sync-mq9np" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.534348 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdvfj\" (UniqueName: \"kubernetes.io/projected/bcdcfad1-d735-4b55-ae65-0ce16bdbc79d-kube-api-access-kdvfj\") pod \"heat-db-create-kp87n\" (UID: \"bcdcfad1-d735-4b55-ae65-0ce16bdbc79d\") " pod="openstack/heat-db-create-kp87n" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.534380 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-combined-ca-bundle\") pod \"keystone-db-sync-mq9np\" (UID: \"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419\") " pod="openstack/keystone-db-sync-mq9np" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.534400 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee94a050-f905-44f1-a5da-16536b8cdfa7-operator-scripts\") pod \"heat-b742-account-create-update-gfdkg\" (UID: \"ee94a050-f905-44f1-a5da-16536b8cdfa7\") " pod="openstack/heat-b742-account-create-update-gfdkg" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.534433 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-config-data\") pod \"keystone-db-sync-mq9np\" (UID: \"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419\") " pod="openstack/keystone-db-sync-mq9np" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.534460 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zg7q\" (UniqueName: \"kubernetes.io/projected/ee94a050-f905-44f1-a5da-16536b8cdfa7-kube-api-access-7zg7q\") pod \"heat-b742-account-create-update-gfdkg\" (UID: \"ee94a050-f905-44f1-a5da-16536b8cdfa7\") " pod="openstack/heat-b742-account-create-update-gfdkg" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.534501 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcdcfad1-d735-4b55-ae65-0ce16bdbc79d-operator-scripts\") pod \"heat-db-create-kp87n\" (UID: \"bcdcfad1-d735-4b55-ae65-0ce16bdbc79d\") " pod="openstack/heat-db-create-kp87n" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.535128 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcdcfad1-d735-4b55-ae65-0ce16bdbc79d-operator-scripts\") pod \"heat-db-create-kp87n\" (UID: \"bcdcfad1-d735-4b55-ae65-0ce16bdbc79d\") " pod="openstack/heat-db-create-kp87n" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.554520 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-db-create-dwf4b"] Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.595121 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdvfj\" (UniqueName: \"kubernetes.io/projected/bcdcfad1-d735-4b55-ae65-0ce16bdbc79d-kube-api-access-kdvfj\") pod \"heat-db-create-kp87n\" (UID: \"bcdcfad1-d735-4b55-ae65-0ce16bdbc79d\") " pod="openstack/heat-db-create-kp87n" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.641706 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-kp87n" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.645991 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hh8q\" (UniqueName: \"kubernetes.io/projected/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-kube-api-access-5hh8q\") pod \"keystone-db-sync-mq9np\" (UID: \"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419\") " pod="openstack/keystone-db-sync-mq9np" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.646084 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239c554e-360d-4f04-86f0-b2b98974bad3-operator-scripts\") pod \"neutron-db-create-dwf4b\" (UID: \"239c554e-360d-4f04-86f0-b2b98974bad3\") " pod="openstack/neutron-db-create-dwf4b" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.646128 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hprxm\" (UniqueName: \"kubernetes.io/projected/239c554e-360d-4f04-86f0-b2b98974bad3-kube-api-access-hprxm\") pod \"neutron-db-create-dwf4b\" (UID: \"239c554e-360d-4f04-86f0-b2b98974bad3\") " pod="openstack/neutron-db-create-dwf4b" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.646154 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-combined-ca-bundle\") pod \"keystone-db-sync-mq9np\" (UID: \"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419\") " pod="openstack/keystone-db-sync-mq9np" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.646180 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee94a050-f905-44f1-a5da-16536b8cdfa7-operator-scripts\") pod \"heat-b742-account-create-update-gfdkg\" (UID: \"ee94a050-f905-44f1-a5da-16536b8cdfa7\") " pod="openstack/heat-b742-account-create-update-gfdkg" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.646235 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-config-data\") pod \"keystone-db-sync-mq9np\" (UID: \"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419\") " pod="openstack/keystone-db-sync-mq9np" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.646271 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zg7q\" (UniqueName: \"kubernetes.io/projected/ee94a050-f905-44f1-a5da-16536b8cdfa7-kube-api-access-7zg7q\") pod \"heat-b742-account-create-update-gfdkg\" (UID: \"ee94a050-f905-44f1-a5da-16536b8cdfa7\") " pod="openstack/heat-b742-account-create-update-gfdkg" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.652990 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ee94a050-f905-44f1-a5da-16536b8cdfa7-operator-scripts\") pod \"heat-b742-account-create-update-gfdkg\" (UID: \"ee94a050-f905-44f1-a5da-16536b8cdfa7\") " pod="openstack/heat-b742-account-create-update-gfdkg" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.674896 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-combined-ca-bundle\") pod \"keystone-db-sync-mq9np\" (UID: \"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419\") " pod="openstack/keystone-db-sync-mq9np" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.675632 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-config-data\") pod \"keystone-db-sync-mq9np\" (UID: \"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419\") " pod="openstack/keystone-db-sync-mq9np" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.707696 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hh8q\" (UniqueName: \"kubernetes.io/projected/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-kube-api-access-5hh8q\") pod \"keystone-db-sync-mq9np\" (UID: \"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419\") " pod="openstack/keystone-db-sync-mq9np" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.722871 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-da66-account-create-update-tk8pd"] Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.724150 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-da66-account-create-update-tk8pd" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.731662 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-mq9np" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.732655 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.749015 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239c554e-360d-4f04-86f0-b2b98974bad3-operator-scripts\") pod \"neutron-db-create-dwf4b\" (UID: \"239c554e-360d-4f04-86f0-b2b98974bad3\") " pod="openstack/neutron-db-create-dwf4b" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.749285 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hprxm\" (UniqueName: \"kubernetes.io/projected/239c554e-360d-4f04-86f0-b2b98974bad3-kube-api-access-hprxm\") pod \"neutron-db-create-dwf4b\" (UID: \"239c554e-360d-4f04-86f0-b2b98974bad3\") " pod="openstack/neutron-db-create-dwf4b" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.750155 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239c554e-360d-4f04-86f0-b2b98974bad3-operator-scripts\") pod \"neutron-db-create-dwf4b\" (UID: \"239c554e-360d-4f04-86f0-b2b98974bad3\") " pod="openstack/neutron-db-create-dwf4b" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.766966 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-da66-account-create-update-tk8pd"] Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.790034 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-g7pfc"] Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.791003 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zg7q\" (UniqueName: \"kubernetes.io/projected/ee94a050-f905-44f1-a5da-16536b8cdfa7-kube-api-access-7zg7q\") pod \"heat-b742-account-create-update-gfdkg\" (UID: \"ee94a050-f905-44f1-a5da-16536b8cdfa7\") " pod="openstack/heat-b742-account-create-update-gfdkg" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.791317 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-g7pfc" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.793528 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-g7pfc"] Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.796316 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-b742-account-create-update-gfdkg" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.843698 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hprxm\" (UniqueName: \"kubernetes.io/projected/239c554e-360d-4f04-86f0-b2b98974bad3-kube-api-access-hprxm\") pod \"neutron-db-create-dwf4b\" (UID: \"239c554e-360d-4f04-86f0-b2b98974bad3\") " pod="openstack/neutron-db-create-dwf4b" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.850485 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsxc4\" (UniqueName: \"kubernetes.io/projected/0d045bc7-38b2-46f5-8cd8-cf634003bedf-kube-api-access-fsxc4\") pod \"neutron-da66-account-create-update-tk8pd\" (UID: \"0d045bc7-38b2-46f5-8cd8-cf634003bedf\") " pod="openstack/neutron-da66-account-create-update-tk8pd" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.850570 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km2sr\" (UniqueName: \"kubernetes.io/projected/aa0000da-8f11-4e97-8ab5-1bcfea0ac894-kube-api-access-km2sr\") pod \"barbican-db-create-g7pfc\" (UID: \"aa0000da-8f11-4e97-8ab5-1bcfea0ac894\") " pod="openstack/barbican-db-create-g7pfc" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.850601 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa0000da-8f11-4e97-8ab5-1bcfea0ac894-operator-scripts\") pod \"barbican-db-create-g7pfc\" (UID: \"aa0000da-8f11-4e97-8ab5-1bcfea0ac894\") " pod="openstack/barbican-db-create-g7pfc" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.850728 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d045bc7-38b2-46f5-8cd8-cf634003bedf-operator-scripts\") pod \"neutron-da66-account-create-update-tk8pd\" (UID: \"0d045bc7-38b2-46f5-8cd8-cf634003bedf\") " pod="openstack/neutron-da66-account-create-update-tk8pd" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.952463 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsxc4\" (UniqueName: \"kubernetes.io/projected/0d045bc7-38b2-46f5-8cd8-cf634003bedf-kube-api-access-fsxc4\") pod \"neutron-da66-account-create-update-tk8pd\" (UID: \"0d045bc7-38b2-46f5-8cd8-cf634003bedf\") " pod="openstack/neutron-da66-account-create-update-tk8pd" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.952558 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km2sr\" (UniqueName: \"kubernetes.io/projected/aa0000da-8f11-4e97-8ab5-1bcfea0ac894-kube-api-access-km2sr\") pod \"barbican-db-create-g7pfc\" (UID: \"aa0000da-8f11-4e97-8ab5-1bcfea0ac894\") " pod="openstack/barbican-db-create-g7pfc" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.952602 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa0000da-8f11-4e97-8ab5-1bcfea0ac894-operator-scripts\") pod \"barbican-db-create-g7pfc\" (UID: \"aa0000da-8f11-4e97-8ab5-1bcfea0ac894\") " pod="openstack/barbican-db-create-g7pfc" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.952660 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/0d045bc7-38b2-46f5-8cd8-cf634003bedf-operator-scripts\") pod \"neutron-da66-account-create-update-tk8pd\" (UID: \"0d045bc7-38b2-46f5-8cd8-cf634003bedf\") " pod="openstack/neutron-da66-account-create-update-tk8pd" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.953533 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d045bc7-38b2-46f5-8cd8-cf634003bedf-operator-scripts\") pod \"neutron-da66-account-create-update-tk8pd\" (UID: \"0d045bc7-38b2-46f5-8cd8-cf634003bedf\") " pod="openstack/neutron-da66-account-create-update-tk8pd" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.954091 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa0000da-8f11-4e97-8ab5-1bcfea0ac894-operator-scripts\") pod \"barbican-db-create-g7pfc\" (UID: \"aa0000da-8f11-4e97-8ab5-1bcfea0ac894\") " pod="openstack/barbican-db-create-g7pfc" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.985584 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km2sr\" (UniqueName: \"kubernetes.io/projected/aa0000da-8f11-4e97-8ab5-1bcfea0ac894-kube-api-access-km2sr\") pod \"barbican-db-create-g7pfc\" (UID: \"aa0000da-8f11-4e97-8ab5-1bcfea0ac894\") " pod="openstack/barbican-db-create-g7pfc" Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.994224 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4dec-account-create-update-hfnth"] Mar 13 10:24:43 crc kubenswrapper[4632]: I0313 10:24:43.999244 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsxc4\" (UniqueName: \"kubernetes.io/projected/0d045bc7-38b2-46f5-8cd8-cf634003bedf-kube-api-access-fsxc4\") pod \"neutron-da66-account-create-update-tk8pd\" (UID: \"0d045bc7-38b2-46f5-8cd8-cf634003bedf\") " pod="openstack/neutron-da66-account-create-update-tk8pd" Mar 13 10:24:44 crc kubenswrapper[4632]: I0313 10:24:44.000732 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4dec-account-create-update-hfnth" Mar 13 10:24:44 crc kubenswrapper[4632]: I0313 10:24:44.003066 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 13 10:24:44 crc kubenswrapper[4632]: I0313 10:24:44.021139 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4dec-account-create-update-hfnth"] Mar 13 10:24:44 crc kubenswrapper[4632]: I0313 10:24:44.055449 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64q7x\" (UniqueName: \"kubernetes.io/projected/47870992-2db9-46f4-84d9-fd50fb9851eb-kube-api-access-64q7x\") pod \"barbican-4dec-account-create-update-hfnth\" (UID: \"47870992-2db9-46f4-84d9-fd50fb9851eb\") " pod="openstack/barbican-4dec-account-create-update-hfnth" Mar 13 10:24:44 crc kubenswrapper[4632]: I0313 10:24:44.055559 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47870992-2db9-46f4-84d9-fd50fb9851eb-operator-scripts\") pod \"barbican-4dec-account-create-update-hfnth\" (UID: \"47870992-2db9-46f4-84d9-fd50fb9851eb\") " pod="openstack/barbican-4dec-account-create-update-hfnth" Mar 13 10:24:44 crc kubenswrapper[4632]: I0313 10:24:44.091843 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-da66-account-create-update-tk8pd" Mar 13 10:24:44 crc kubenswrapper[4632]: I0313 10:24:44.118584 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-g7pfc" Mar 13 10:24:44 crc kubenswrapper[4632]: I0313 10:24:44.140748 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dwf4b" Mar 13 10:24:44 crc kubenswrapper[4632]: I0313 10:24:44.156728 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47870992-2db9-46f4-84d9-fd50fb9851eb-operator-scripts\") pod \"barbican-4dec-account-create-update-hfnth\" (UID: \"47870992-2db9-46f4-84d9-fd50fb9851eb\") " pod="openstack/barbican-4dec-account-create-update-hfnth" Mar 13 10:24:44 crc kubenswrapper[4632]: I0313 10:24:44.157172 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64q7x\" (UniqueName: \"kubernetes.io/projected/47870992-2db9-46f4-84d9-fd50fb9851eb-kube-api-access-64q7x\") pod \"barbican-4dec-account-create-update-hfnth\" (UID: \"47870992-2db9-46f4-84d9-fd50fb9851eb\") " pod="openstack/barbican-4dec-account-create-update-hfnth" Mar 13 10:24:44 crc kubenswrapper[4632]: I0313 10:24:44.157670 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47870992-2db9-46f4-84d9-fd50fb9851eb-operator-scripts\") pod \"barbican-4dec-account-create-update-hfnth\" (UID: \"47870992-2db9-46f4-84d9-fd50fb9851eb\") " pod="openstack/barbican-4dec-account-create-update-hfnth" Mar 13 10:24:44 crc kubenswrapper[4632]: I0313 10:24:44.185053 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64q7x\" (UniqueName: \"kubernetes.io/projected/47870992-2db9-46f4-84d9-fd50fb9851eb-kube-api-access-64q7x\") pod \"barbican-4dec-account-create-update-hfnth\" (UID: \"47870992-2db9-46f4-84d9-fd50fb9851eb\") " pod="openstack/barbican-4dec-account-create-update-hfnth" Mar 13 10:24:44 crc kubenswrapper[4632]: I0313 10:24:44.323251 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4dec-account-create-update-hfnth" Mar 13 10:24:44 crc kubenswrapper[4632]: E0313 10:24:44.730075 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-glance-api:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:24:44 crc kubenswrapper[4632]: E0313 10:24:44.730138 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-glance-api:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:24:44 crc kubenswrapper[4632]: E0313 10:24:44.730274 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-glance-api:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fjbxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-l6hpb_openstack(4f1c5663-463b-45e2-b200-64e73e6d5698): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:24:44 crc kubenswrapper[4632]: E0313 10:24:44.734066 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-l6hpb" podUID="4f1c5663-463b-45e2-b200-64e73e6d5698" Mar 13 10:24:45 crc kubenswrapper[4632]: I0313 10:24:45.384891 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-21a0-account-create-update-4clr7"] Mar 
13 10:24:45 crc kubenswrapper[4632]: I0313 10:24:45.635834 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-mq9np"] Mar 13 10:24:45 crc kubenswrapper[4632]: I0313 10:24:45.762924 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-21a0-account-create-update-4clr7" event={"ID":"a03b92ea-cd2c-455d-a88e-1d57b958b138","Type":"ContainerStarted","Data":"4f01ad0f17dc70f28110656c522ad63fcb71aa346a374ece30c18a84e29887e5"} Mar 13 10:24:45 crc kubenswrapper[4632]: W0313 10:24:45.766606 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode824ae7d_dbbd_496b_b8b0_8b5c59a4d419.slice/crio-1a3020d6e5b66dad152669406220e67cb7be099d82ff8fd4925d6504c1176fb1 WatchSource:0}: Error finding container 1a3020d6e5b66dad152669406220e67cb7be099d82ff8fd4925d6504c1176fb1: Status 404 returned error can't find the container with id 1a3020d6e5b66dad152669406220e67cb7be099d82ff8fd4925d6504c1176fb1 Mar 13 10:24:45 crc kubenswrapper[4632]: E0313 10:24:45.776087 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-glance-api:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/glance-db-sync-l6hpb" podUID="4f1c5663-463b-45e2-b200-64e73e6d5698" Mar 13 10:24:45 crc kubenswrapper[4632]: I0313 10:24:45.791604 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.371850 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-kp87n"] Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.383091 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9kd7r-config-jtj4g"] Mar 13 10:24:46 crc kubenswrapper[4632]: W0313 10:24:46.386635 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcdcfad1_d735_4b55_ae65_0ce16bdbc79d.slice/crio-e1cd870de29627e6c40862edb50e2b89e40d4c4c61f895dbabfb5c6605e37291 WatchSource:0}: Error finding container e1cd870de29627e6c40862edb50e2b89e40d4c4c61f895dbabfb5c6605e37291: Status 404 returned error can't find the container with id e1cd870de29627e6c40862edb50e2b89e40d4c4c61f895dbabfb5c6605e37291 Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.438323 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dwf4b"] Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.453012 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-da66-account-create-update-tk8pd"] Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.493744 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-g7pfc"] Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.500426 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4dec-account-create-update-hfnth"] Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.511562 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pnvjb"] Mar 13 10:24:46 crc kubenswrapper[4632]: W0313 10:24:46.565043 4632 manager.go:1169] Failed to process watch event {EventType:0 
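
The glance-db-sync-l6hpb pull above fails with ErrImagePull (the CRI copy was cancelled), and on the very next sync the same container is reported with ImagePullBackOff: kubelet does not retry the pull immediately but after an exponentially growing delay. A sketch of that schedule, assuming the commonly documented defaults of a 10s initial delay doubling to a 5m cap (an assumption, not read from this cluster's configuration):

```go
package main

import (
	"fmt"
	"time"
)

// Illustrative image-pull backoff: the first failure surfaces as
// ErrImagePull, and retries attempted inside the backoff window
// surface as ImagePullBackOff. Delay values are assumptions.
func main() {
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("pull attempt %d failed: ErrImagePull; backing off %s (ImagePullBackOff)\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```
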
Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.654106 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-b742-account-create-update-gfdkg"]
Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.679856 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.792830 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-da66-account-create-update-tk8pd" event={"ID":"0d045bc7-38b2-46f5-8cd8-cf634003bedf","Type":"ContainerStarted","Data":"6cabee2602a6d9e4308c8db70b1d7f8643862ae4eef1ae7803777760563d87cb"}
Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.794887 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dwf4b" event={"ID":"239c554e-360d-4f04-86f0-b2b98974bad3","Type":"ContainerStarted","Data":"d57d142d20024782bc299e1b548d02139291fdbb43f3a8108c7af8762342c79e"}
Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.797987 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pnvjb" event={"ID":"cb216b07-9809-4b8b-857b-ac1192747b9c","Type":"ContainerStarted","Data":"417ef1960b2d4ea70aed07efa739673778171eebc874e98d5bdf429380cac86f"}
Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.814337 4632 generic.go:334] "Generic (PLEG): container finished" podID="a03b92ea-cd2c-455d-a88e-1d57b958b138" containerID="24cb5f7263654577bea6ec83ce575dcb325e9b55c8adac840790cd7a29363013" exitCode=0
Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.814424 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-21a0-account-create-update-4clr7" event={"ID":"a03b92ea-cd2c-455d-a88e-1d57b958b138","Type":"ContainerDied","Data":"24cb5f7263654577bea6ec83ce575dcb325e9b55c8adac840790cd7a29363013"}
Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.845989 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9kd7r-config-jtj4g" event={"ID":"67f877b5-12d3-4b48-a9eb-9ee2629e830a","Type":"ContainerStarted","Data":"ce1ab79e8690eb9d1f16d7b7f5d9ff52729195fe7c60d80326b0842004a4a53d"}
Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.852752 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"a148dfa9ef48de458189e9fda19ce88937bedd25c3ec76e22d14f43a4745805f"}
Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.862265 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4dec-account-create-update-hfnth" event={"ID":"47870992-2db9-46f4-84d9-fd50fb9851eb","Type":"ContainerStarted","Data":"664b8a378c78fddcf14389393b1cc3a53fe85b08aab7f467156058884d9c4350"}
Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.864194 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-b742-account-create-update-gfdkg" event={"ID":"ee94a050-f905-44f1-a5da-16536b8cdfa7","Type":"ContainerStarted","Data":"c71c108608caeb76931caff20f7b0c7e5d8d5c389c0440d31bb543598d28dfb8"}
Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.865504 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e37b3d77-de2e-4be9-9984-550d4ba0f2f0","Type":"ContainerStarted","Data":"0cec076a2f38e51ffee08b1302a0f7ffd219d4ce1350c5fb23f8c33fa0bdbf2d"}
Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.871708 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mq9np" event={"ID":"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419","Type":"ContainerStarted","Data":"1a3020d6e5b66dad152669406220e67cb7be099d82ff8fd4925d6504c1176fb1"}
Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.879399 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-g7pfc" event={"ID":"aa0000da-8f11-4e97-8ab5-1bcfea0ac894","Type":"ContainerStarted","Data":"64833dd5dcca1cf0d9b0e7fcfd2870e2d6c8b40bb49e5e7b154150c2bf852051"}
Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.882313 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-kp87n" event={"ID":"bcdcfad1-d735-4b55-ae65-0ce16bdbc79d","Type":"ContainerStarted","Data":"e1cd870de29627e6c40862edb50e2b89e40d4c4c61f895dbabfb5c6605e37291"}
Mar 13 10:24:46 crc kubenswrapper[4632]: I0313 10:24:46.916553 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-kp87n" podStartSLOduration=3.916530163 podStartE2EDuration="3.916530163s" podCreationTimestamp="2026-03-13 10:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:24:46.904050406 +0000 UTC m=+1260.926580549" watchObservedRunningTime="2026-03-13 10:24:46.916530163 +0000 UTC m=+1260.939060306"
Mar 13 10:24:47 crc kubenswrapper[4632]: I0313 10:24:47.894975 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-da66-account-create-update-tk8pd" event={"ID":"0d045bc7-38b2-46f5-8cd8-cf634003bedf","Type":"ContainerStarted","Data":"f13e115025698b8daa562f4881b31bb57b43cf222144f35c644ca079c94f546c"}
Mar 13 10:24:47 crc kubenswrapper[4632]: I0313 10:24:47.897862 4632 generic.go:334] "Generic (PLEG): container finished" podID="239c554e-360d-4f04-86f0-b2b98974bad3" containerID="209b78ccf3afd3b3582d4d4eae9056be2d6d19f860431a427d43f1899c69be92" exitCode=0
Mar 13 10:24:47 crc kubenswrapper[4632]: I0313 10:24:47.897978 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dwf4b" event={"ID":"239c554e-360d-4f04-86f0-b2b98974bad3","Type":"ContainerDied","Data":"209b78ccf3afd3b3582d4d4eae9056be2d6d19f860431a427d43f1899c69be92"}
Mar 13 10:24:47 crc kubenswrapper[4632]: I0313 10:24:47.901476 4632 generic.go:334] "Generic (PLEG): container finished" podID="bcdcfad1-d735-4b55-ae65-0ce16bdbc79d" containerID="1ead25cb79a035bd17ce1b8995cb1c20666089312b5c266ebcbccc7e66e7c0cc" exitCode=0
Mar 13 10:24:47 crc kubenswrapper[4632]: I0313 10:24:47.901575 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-kp87n" event={"ID":"bcdcfad1-d735-4b55-ae65-0ce16bdbc79d","Type":"ContainerDied","Data":"1ead25cb79a035bd17ce1b8995cb1c20666089312b5c266ebcbccc7e66e7c0cc"}
Mar 13 10:24:47 crc kubenswrapper[4632]: I0313 10:24:47.904636 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4dec-account-create-update-hfnth" event={"ID":"47870992-2db9-46f4-84d9-fd50fb9851eb","Type":"ContainerStarted","Data":"dc07b5437ef3867ede6e9debff7196fad98555045e8df8dafdb4a11a7fb9808e"}
pod="openstack/barbican-4dec-account-create-update-hfnth" event={"ID":"47870992-2db9-46f4-84d9-fd50fb9851eb","Type":"ContainerStarted","Data":"dc07b5437ef3867ede6e9debff7196fad98555045e8df8dafdb4a11a7fb9808e"} Mar 13 10:24:47 crc kubenswrapper[4632]: I0313 10:24:47.934714 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-4dec-account-create-update-hfnth" podStartSLOduration=4.934692032 podStartE2EDuration="4.934692032s" podCreationTimestamp="2026-03-13 10:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:24:47.934481697 +0000 UTC m=+1261.957011840" watchObservedRunningTime="2026-03-13 10:24:47.934692032 +0000 UTC m=+1261.957222165" Mar 13 10:24:47 crc kubenswrapper[4632]: I0313 10:24:47.937136 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-da66-account-create-update-tk8pd" podStartSLOduration=4.937117261 podStartE2EDuration="4.937117261s" podCreationTimestamp="2026-03-13 10:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:24:47.920324439 +0000 UTC m=+1261.942854572" watchObservedRunningTime="2026-03-13 10:24:47.937117261 +0000 UTC m=+1261.959647394" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.351723 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-21a0-account-create-update-4clr7" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.517900 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skrpd\" (UniqueName: \"kubernetes.io/projected/a03b92ea-cd2c-455d-a88e-1d57b958b138-kube-api-access-skrpd\") pod \"a03b92ea-cd2c-455d-a88e-1d57b958b138\" (UID: \"a03b92ea-cd2c-455d-a88e-1d57b958b138\") " Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.518023 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a03b92ea-cd2c-455d-a88e-1d57b958b138-operator-scripts\") pod \"a03b92ea-cd2c-455d-a88e-1d57b958b138\" (UID: \"a03b92ea-cd2c-455d-a88e-1d57b958b138\") " Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.518983 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a03b92ea-cd2c-455d-a88e-1d57b958b138-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a03b92ea-cd2c-455d-a88e-1d57b958b138" (UID: "a03b92ea-cd2c-455d-a88e-1d57b958b138"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.534220 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a03b92ea-cd2c-455d-a88e-1d57b958b138-kube-api-access-skrpd" (OuterVolumeSpecName: "kube-api-access-skrpd") pod "a03b92ea-cd2c-455d-a88e-1d57b958b138" (UID: "a03b92ea-cd2c-455d-a88e-1d57b958b138"). InnerVolumeSpecName "kube-api-access-skrpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.625008 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a03b92ea-cd2c-455d-a88e-1d57b958b138-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.625033 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skrpd\" (UniqueName: \"kubernetes.io/projected/a03b92ea-cd2c-455d-a88e-1d57b958b138-kube-api-access-skrpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.922242 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pnvjb" event={"ID":"cb216b07-9809-4b8b-857b-ac1192747b9c","Type":"ContainerStarted","Data":"207587c5bdcbf92f71ab5aedfecf2486734ea587705753fb95e8790e674e977d"} Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.926762 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-21a0-account-create-update-4clr7" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.934636 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-21a0-account-create-update-4clr7" event={"ID":"a03b92ea-cd2c-455d-a88e-1d57b958b138","Type":"ContainerDied","Data":"4f01ad0f17dc70f28110656c522ad63fcb71aa346a374ece30c18a84e29887e5"} Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.934785 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f01ad0f17dc70f28110656c522ad63fcb71aa346a374ece30c18a84e29887e5" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.953610 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-pnvjb" podStartSLOduration=6.953590799 podStartE2EDuration="6.953590799s" podCreationTimestamp="2026-03-13 10:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:24:48.945656064 +0000 UTC m=+1262.968186197" watchObservedRunningTime="2026-03-13 10:24:48.953590799 +0000 UTC m=+1262.976120932" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.955099 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-g7pfc" event={"ID":"aa0000da-8f11-4e97-8ab5-1bcfea0ac894","Type":"ContainerStarted","Data":"f79fdacee095a4d2c557179a3aeeb0eea1874c7280d8a656f2dd9779cf567f1e"} Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.975142 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9kd7r-config-jtj4g" event={"ID":"67f877b5-12d3-4b48-a9eb-9ee2629e830a","Type":"ContainerStarted","Data":"c6b6fdf02c5b942ff5eb86fa09449efd1927d429db47c31ad2d68c9602235d4f"} Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.991261 4632 generic.go:334] "Generic (PLEG): container finished" podID="47870992-2db9-46f4-84d9-fd50fb9851eb" containerID="dc07b5437ef3867ede6e9debff7196fad98555045e8df8dafdb4a11a7fb9808e" exitCode=0 Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.991345 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4dec-account-create-update-hfnth" event={"ID":"47870992-2db9-46f4-84d9-fd50fb9851eb","Type":"ContainerDied","Data":"dc07b5437ef3867ede6e9debff7196fad98555045e8df8dafdb4a11a7fb9808e"} Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:48.991817 4632 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/barbican-db-create-g7pfc" podStartSLOduration=5.991803587 podStartE2EDuration="5.991803587s" podCreationTimestamp="2026-03-13 10:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:24:48.977089746 +0000 UTC m=+1262.999619879" watchObservedRunningTime="2026-03-13 10:24:48.991803587 +0000 UTC m=+1263.014333720" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.001301 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-b742-account-create-update-gfdkg" event={"ID":"ee94a050-f905-44f1-a5da-16536b8cdfa7","Type":"ContainerStarted","Data":"d92125a86d78e277913519dc023b0643c481c49ac75357c10f1cb11e638c36a3"} Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.004981 4632 generic.go:334] "Generic (PLEG): container finished" podID="0d045bc7-38b2-46f5-8cd8-cf634003bedf" containerID="f13e115025698b8daa562f4881b31bb57b43cf222144f35c644ca079c94f546c" exitCode=0 Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.005241 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-da66-account-create-update-tk8pd" event={"ID":"0d045bc7-38b2-46f5-8cd8-cf634003bedf","Type":"ContainerDied","Data":"f13e115025698b8daa562f4881b31bb57b43cf222144f35c644ca079c94f546c"} Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.007912 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9kd7r-config-jtj4g" podStartSLOduration=14.007897792 podStartE2EDuration="14.007897792s" podCreationTimestamp="2026-03-13 10:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:24:49.002497439 +0000 UTC m=+1263.025027572" watchObservedRunningTime="2026-03-13 10:24:49.007897792 +0000 UTC m=+1263.030427935" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.063826 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-b742-account-create-update-gfdkg" podStartSLOduration=6.063802904 podStartE2EDuration="6.063802904s" podCreationTimestamp="2026-03-13 10:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:24:49.029992043 +0000 UTC m=+1263.052522196" watchObservedRunningTime="2026-03-13 10:24:49.063802904 +0000 UTC m=+1263.086333037" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.841361 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-kp87n" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.850278 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dwf4b" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.892506 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hprxm\" (UniqueName: \"kubernetes.io/projected/239c554e-360d-4f04-86f0-b2b98974bad3-kube-api-access-hprxm\") pod \"239c554e-360d-4f04-86f0-b2b98974bad3\" (UID: \"239c554e-360d-4f04-86f0-b2b98974bad3\") " Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.892582 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdvfj\" (UniqueName: \"kubernetes.io/projected/bcdcfad1-d735-4b55-ae65-0ce16bdbc79d-kube-api-access-kdvfj\") pod \"bcdcfad1-d735-4b55-ae65-0ce16bdbc79d\" (UID: \"bcdcfad1-d735-4b55-ae65-0ce16bdbc79d\") " Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.892629 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcdcfad1-d735-4b55-ae65-0ce16bdbc79d-operator-scripts\") pod \"bcdcfad1-d735-4b55-ae65-0ce16bdbc79d\" (UID: \"bcdcfad1-d735-4b55-ae65-0ce16bdbc79d\") " Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.892698 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239c554e-360d-4f04-86f0-b2b98974bad3-operator-scripts\") pod \"239c554e-360d-4f04-86f0-b2b98974bad3\" (UID: \"239c554e-360d-4f04-86f0-b2b98974bad3\") " Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.894608 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/239c554e-360d-4f04-86f0-b2b98974bad3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "239c554e-360d-4f04-86f0-b2b98974bad3" (UID: "239c554e-360d-4f04-86f0-b2b98974bad3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.895300 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcdcfad1-d735-4b55-ae65-0ce16bdbc79d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcdcfad1-d735-4b55-ae65-0ce16bdbc79d" (UID: "bcdcfad1-d735-4b55-ae65-0ce16bdbc79d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.899732 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcdcfad1-d735-4b55-ae65-0ce16bdbc79d-kube-api-access-kdvfj" (OuterVolumeSpecName: "kube-api-access-kdvfj") pod "bcdcfad1-d735-4b55-ae65-0ce16bdbc79d" (UID: "bcdcfad1-d735-4b55-ae65-0ce16bdbc79d"). InnerVolumeSpecName "kube-api-access-kdvfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.899877 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/239c554e-360d-4f04-86f0-b2b98974bad3-kube-api-access-hprxm" (OuterVolumeSpecName: "kube-api-access-hprxm") pod "239c554e-360d-4f04-86f0-b2b98974bad3" (UID: "239c554e-360d-4f04-86f0-b2b98974bad3"). InnerVolumeSpecName "kube-api-access-hprxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.995050 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hprxm\" (UniqueName: \"kubernetes.io/projected/239c554e-360d-4f04-86f0-b2b98974bad3-kube-api-access-hprxm\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.995100 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdvfj\" (UniqueName: \"kubernetes.io/projected/bcdcfad1-d735-4b55-ae65-0ce16bdbc79d-kube-api-access-kdvfj\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.995115 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcdcfad1-d735-4b55-ae65-0ce16bdbc79d-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:49 crc kubenswrapper[4632]: I0313 10:24:49.995127 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239c554e-360d-4f04-86f0-b2b98974bad3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:50 crc kubenswrapper[4632]: I0313 10:24:50.019932 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-kp87n" event={"ID":"bcdcfad1-d735-4b55-ae65-0ce16bdbc79d","Type":"ContainerDied","Data":"e1cd870de29627e6c40862edb50e2b89e40d4c4c61f895dbabfb5c6605e37291"} Mar 13 10:24:50 crc kubenswrapper[4632]: I0313 10:24:50.019993 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1cd870de29627e6c40862edb50e2b89e40d4c4c61f895dbabfb5c6605e37291" Mar 13 10:24:50 crc kubenswrapper[4632]: I0313 10:24:50.020004 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-kp87n" Mar 13 10:24:50 crc kubenswrapper[4632]: I0313 10:24:50.021787 4632 generic.go:334] "Generic (PLEG): container finished" podID="67f877b5-12d3-4b48-a9eb-9ee2629e830a" containerID="c6b6fdf02c5b942ff5eb86fa09449efd1927d429db47c31ad2d68c9602235d4f" exitCode=0 Mar 13 10:24:50 crc kubenswrapper[4632]: I0313 10:24:50.021863 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9kd7r-config-jtj4g" event={"ID":"67f877b5-12d3-4b48-a9eb-9ee2629e830a","Type":"ContainerDied","Data":"c6b6fdf02c5b942ff5eb86fa09449efd1927d429db47c31ad2d68c9602235d4f"} Mar 13 10:24:50 crc kubenswrapper[4632]: I0313 10:24:50.025560 4632 generic.go:334] "Generic (PLEG): container finished" podID="ee94a050-f905-44f1-a5da-16536b8cdfa7" containerID="d92125a86d78e277913519dc023b0643c481c49ac75357c10f1cb11e638c36a3" exitCode=0 Mar 13 10:24:50 crc kubenswrapper[4632]: I0313 10:24:50.025674 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-b742-account-create-update-gfdkg" event={"ID":"ee94a050-f905-44f1-a5da-16536b8cdfa7","Type":"ContainerDied","Data":"d92125a86d78e277913519dc023b0643c481c49ac75357c10f1cb11e638c36a3"} Mar 13 10:24:50 crc kubenswrapper[4632]: I0313 10:24:50.027143 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e37b3d77-de2e-4be9-9984-550d4ba0f2f0","Type":"ContainerStarted","Data":"00ea6bda0abc557d34319366ebb47ed2d4d334b085146aa961d521324d378058"} Mar 13 10:24:50 crc kubenswrapper[4632]: I0313 10:24:50.028585 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dwf4b" event={"ID":"239c554e-360d-4f04-86f0-b2b98974bad3","Type":"ContainerDied","Data":"d57d142d20024782bc299e1b548d02139291fdbb43f3a8108c7af8762342c79e"} Mar 13 10:24:50 crc kubenswrapper[4632]: I0313 10:24:50.028628 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d57d142d20024782bc299e1b548d02139291fdbb43f3a8108c7af8762342c79e" Mar 13 10:24:50 crc kubenswrapper[4632]: I0313 10:24:50.028690 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dwf4b" Mar 13 10:24:50 crc kubenswrapper[4632]: I0313 10:24:50.032988 4632 generic.go:334] "Generic (PLEG): container finished" podID="cb216b07-9809-4b8b-857b-ac1192747b9c" containerID="207587c5bdcbf92f71ab5aedfecf2486734ea587705753fb95e8790e674e977d" exitCode=0 Mar 13 10:24:50 crc kubenswrapper[4632]: I0313 10:24:50.033043 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pnvjb" event={"ID":"cb216b07-9809-4b8b-857b-ac1192747b9c","Type":"ContainerDied","Data":"207587c5bdcbf92f71ab5aedfecf2486734ea587705753fb95e8790e674e977d"} Mar 13 10:24:50 crc kubenswrapper[4632]: I0313 10:24:50.034547 4632 generic.go:334] "Generic (PLEG): container finished" podID="aa0000da-8f11-4e97-8ab5-1bcfea0ac894" containerID="f79fdacee095a4d2c557179a3aeeb0eea1874c7280d8a656f2dd9779cf567f1e" exitCode=0 Mar 13 10:24:50 crc kubenswrapper[4632]: I0313 10:24:50.034586 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-g7pfc" event={"ID":"aa0000da-8f11-4e97-8ab5-1bcfea0ac894","Type":"ContainerDied","Data":"f79fdacee095a4d2c557179a3aeeb0eea1874c7280d8a656f2dd9779cf567f1e"} Mar 13 10:24:51 crc kubenswrapper[4632]: I0313 10:24:51.068714 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e37b3d77-de2e-4be9-9984-550d4ba0f2f0","Type":"ContainerStarted","Data":"68efd16a592a98149fb78b8c3ca36bfb289a67545bf9bdb5079bbfc32d02d606"} Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.654280 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-g7pfc" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.678592 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pnvjb" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.686453 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4dec-account-create-update-hfnth" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.705392 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.716217 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-b742-account-create-update-gfdkg" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.724002 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-da66-account-create-update-tk8pd" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.797863 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa0000da-8f11-4e97-8ab5-1bcfea0ac894-operator-scripts\") pod \"aa0000da-8f11-4e97-8ab5-1bcfea0ac894\" (UID: \"aa0000da-8f11-4e97-8ab5-1bcfea0ac894\") " Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.797962 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb216b07-9809-4b8b-857b-ac1192747b9c-operator-scripts\") pod \"cb216b07-9809-4b8b-857b-ac1192747b9c\" (UID: \"cb216b07-9809-4b8b-857b-ac1192747b9c\") " Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.798026 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47870992-2db9-46f4-84d9-fd50fb9851eb-operator-scripts\") pod \"47870992-2db9-46f4-84d9-fd50fb9851eb\" (UID: \"47870992-2db9-46f4-84d9-fd50fb9851eb\") " Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.798062 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64q7x\" (UniqueName: \"kubernetes.io/projected/47870992-2db9-46f4-84d9-fd50fb9851eb-kube-api-access-64q7x\") pod \"47870992-2db9-46f4-84d9-fd50fb9851eb\" (UID: \"47870992-2db9-46f4-84d9-fd50fb9851eb\") " Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.798092 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km2sr\" (UniqueName: \"kubernetes.io/projected/aa0000da-8f11-4e97-8ab5-1bcfea0ac894-kube-api-access-km2sr\") pod \"aa0000da-8f11-4e97-8ab5-1bcfea0ac894\" (UID: \"aa0000da-8f11-4e97-8ab5-1bcfea0ac894\") " Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.798195 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgncl\" (UniqueName: \"kubernetes.io/projected/cb216b07-9809-4b8b-857b-ac1192747b9c-kube-api-access-fgncl\") pod \"cb216b07-9809-4b8b-857b-ac1192747b9c\" (UID: \"cb216b07-9809-4b8b-857b-ac1192747b9c\") " Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.801738 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa0000da-8f11-4e97-8ab5-1bcfea0ac894-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa0000da-8f11-4e97-8ab5-1bcfea0ac894" (UID: "aa0000da-8f11-4e97-8ab5-1bcfea0ac894"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.802239 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb216b07-9809-4b8b-857b-ac1192747b9c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb216b07-9809-4b8b-857b-ac1192747b9c" (UID: "cb216b07-9809-4b8b-857b-ac1192747b9c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.802671 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47870992-2db9-46f4-84d9-fd50fb9851eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47870992-2db9-46f4-84d9-fd50fb9851eb" (UID: "47870992-2db9-46f4-84d9-fd50fb9851eb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.805574 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb216b07-9809-4b8b-857b-ac1192747b9c-kube-api-access-fgncl" (OuterVolumeSpecName: "kube-api-access-fgncl") pod "cb216b07-9809-4b8b-857b-ac1192747b9c" (UID: "cb216b07-9809-4b8b-857b-ac1192747b9c"). InnerVolumeSpecName "kube-api-access-fgncl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.810438 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa0000da-8f11-4e97-8ab5-1bcfea0ac894-kube-api-access-km2sr" (OuterVolumeSpecName: "kube-api-access-km2sr") pod "aa0000da-8f11-4e97-8ab5-1bcfea0ac894" (UID: "aa0000da-8f11-4e97-8ab5-1bcfea0ac894"). InnerVolumeSpecName "kube-api-access-km2sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.825194 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47870992-2db9-46f4-84d9-fd50fb9851eb-kube-api-access-64q7x" (OuterVolumeSpecName: "kube-api-access-64q7x") pod "47870992-2db9-46f4-84d9-fd50fb9851eb" (UID: "47870992-2db9-46f4-84d9-fd50fb9851eb"). InnerVolumeSpecName "kube-api-access-64q7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.899783 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zg7q\" (UniqueName: \"kubernetes.io/projected/ee94a050-f905-44f1-a5da-16536b8cdfa7-kube-api-access-7zg7q\") pod \"ee94a050-f905-44f1-a5da-16536b8cdfa7\" (UID: \"ee94a050-f905-44f1-a5da-16536b8cdfa7\") " Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.899842 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-run-ovn\") pod \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.899881 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d045bc7-38b2-46f5-8cd8-cf634003bedf-operator-scripts\") pod \"0d045bc7-38b2-46f5-8cd8-cf634003bedf\" (UID: \"0d045bc7-38b2-46f5-8cd8-cf634003bedf\") " Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.899923 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee94a050-f905-44f1-a5da-16536b8cdfa7-operator-scripts\") pod \"ee94a050-f905-44f1-a5da-16536b8cdfa7\" (UID: \"ee94a050-f905-44f1-a5da-16536b8cdfa7\") " Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.899979 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-log-ovn\") pod \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.900037 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/67f877b5-12d3-4b48-a9eb-9ee2629e830a-additional-scripts\") pod \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\" (UID: 
\"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.900073 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsxc4\" (UniqueName: \"kubernetes.io/projected/0d045bc7-38b2-46f5-8cd8-cf634003bedf-kube-api-access-fsxc4\") pod \"0d045bc7-38b2-46f5-8cd8-cf634003bedf\" (UID: \"0d045bc7-38b2-46f5-8cd8-cf634003bedf\") " Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.900178 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67f877b5-12d3-4b48-a9eb-9ee2629e830a-scripts\") pod \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.900288 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-run\") pod \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.900322 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4j57\" (UniqueName: \"kubernetes.io/projected/67f877b5-12d3-4b48-a9eb-9ee2629e830a-kube-api-access-n4j57\") pod \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\" (UID: \"67f877b5-12d3-4b48-a9eb-9ee2629e830a\") " Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.900388 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "67f877b5-12d3-4b48-a9eb-9ee2629e830a" (UID: "67f877b5-12d3-4b48-a9eb-9ee2629e830a"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.900749 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgncl\" (UniqueName: \"kubernetes.io/projected/cb216b07-9809-4b8b-857b-ac1192747b9c-kube-api-access-fgncl\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.900771 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa0000da-8f11-4e97-8ab5-1bcfea0ac894-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.900784 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb216b07-9809-4b8b-857b-ac1192747b9c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.900797 4632 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.900809 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47870992-2db9-46f4-84d9-fd50fb9851eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.900821 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64q7x\" (UniqueName: \"kubernetes.io/projected/47870992-2db9-46f4-84d9-fd50fb9851eb-kube-api-access-64q7x\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.900832 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km2sr\" (UniqueName: \"kubernetes.io/projected/aa0000da-8f11-4e97-8ab5-1bcfea0ac894-kube-api-access-km2sr\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.900875 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee94a050-f905-44f1-a5da-16536b8cdfa7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee94a050-f905-44f1-a5da-16536b8cdfa7" (UID: "ee94a050-f905-44f1-a5da-16536b8cdfa7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.901023 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "67f877b5-12d3-4b48-a9eb-9ee2629e830a" (UID: "67f877b5-12d3-4b48-a9eb-9ee2629e830a"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.901497 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-run" (OuterVolumeSpecName: "var-run") pod "67f877b5-12d3-4b48-a9eb-9ee2629e830a" (UID: "67f877b5-12d3-4b48-a9eb-9ee2629e830a"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.901685 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d045bc7-38b2-46f5-8cd8-cf634003bedf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d045bc7-38b2-46f5-8cd8-cf634003bedf" (UID: "0d045bc7-38b2-46f5-8cd8-cf634003bedf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.901730 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f877b5-12d3-4b48-a9eb-9ee2629e830a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "67f877b5-12d3-4b48-a9eb-9ee2629e830a" (UID: "67f877b5-12d3-4b48-a9eb-9ee2629e830a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.902430 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f877b5-12d3-4b48-a9eb-9ee2629e830a-scripts" (OuterVolumeSpecName: "scripts") pod "67f877b5-12d3-4b48-a9eb-9ee2629e830a" (UID: "67f877b5-12d3-4b48-a9eb-9ee2629e830a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.904581 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee94a050-f905-44f1-a5da-16536b8cdfa7-kube-api-access-7zg7q" (OuterVolumeSpecName: "kube-api-access-7zg7q") pod "ee94a050-f905-44f1-a5da-16536b8cdfa7" (UID: "ee94a050-f905-44f1-a5da-16536b8cdfa7"). InnerVolumeSpecName "kube-api-access-7zg7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.905425 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f877b5-12d3-4b48-a9eb-9ee2629e830a-kube-api-access-n4j57" (OuterVolumeSpecName: "kube-api-access-n4j57") pod "67f877b5-12d3-4b48-a9eb-9ee2629e830a" (UID: "67f877b5-12d3-4b48-a9eb-9ee2629e830a"). InnerVolumeSpecName "kube-api-access-n4j57". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:24:54 crc kubenswrapper[4632]: I0313 10:24:54.908396 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d045bc7-38b2-46f5-8cd8-cf634003bedf-kube-api-access-fsxc4" (OuterVolumeSpecName: "kube-api-access-fsxc4") pod "0d045bc7-38b2-46f5-8cd8-cf634003bedf" (UID: "0d045bc7-38b2-46f5-8cd8-cf634003bedf"). InnerVolumeSpecName "kube-api-access-fsxc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.002972 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4j57\" (UniqueName: \"kubernetes.io/projected/67f877b5-12d3-4b48-a9eb-9ee2629e830a-kube-api-access-n4j57\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.003333 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zg7q\" (UniqueName: \"kubernetes.io/projected/ee94a050-f905-44f1-a5da-16536b8cdfa7-kube-api-access-7zg7q\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.003353 4632 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.003368 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d045bc7-38b2-46f5-8cd8-cf634003bedf-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.003379 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee94a050-f905-44f1-a5da-16536b8cdfa7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.003390 4632 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/67f877b5-12d3-4b48-a9eb-9ee2629e830a-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.003404 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsxc4\" (UniqueName: \"kubernetes.io/projected/0d045bc7-38b2-46f5-8cd8-cf634003bedf-kube-api-access-fsxc4\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.003417 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67f877b5-12d3-4b48-a9eb-9ee2629e830a-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.003429 4632 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/67f877b5-12d3-4b48-a9eb-9ee2629e830a-var-run\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.127105 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-g7pfc" event={"ID":"aa0000da-8f11-4e97-8ab5-1bcfea0ac894","Type":"ContainerDied","Data":"64833dd5dcca1cf0d9b0e7fcfd2870e2d6c8b40bb49e5e7b154150c2bf852051"} Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.127146 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64833dd5dcca1cf0d9b0e7fcfd2870e2d6c8b40bb49e5e7b154150c2bf852051" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.127144 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-g7pfc" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.132191 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9kd7r-config-jtj4g" event={"ID":"67f877b5-12d3-4b48-a9eb-9ee2629e830a","Type":"ContainerDied","Data":"ce1ab79e8690eb9d1f16d7b7f5d9ff52729195fe7c60d80326b0842004a4a53d"} Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.132237 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce1ab79e8690eb9d1f16d7b7f5d9ff52729195fe7c60d80326b0842004a4a53d" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.132318 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9kd7r-config-jtj4g" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.140243 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4dec-account-create-update-hfnth" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.140253 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4dec-account-create-update-hfnth" event={"ID":"47870992-2db9-46f4-84d9-fd50fb9851eb","Type":"ContainerDied","Data":"664b8a378c78fddcf14389393b1cc3a53fe85b08aab7f467156058884d9c4350"} Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.140277 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="664b8a378c78fddcf14389393b1cc3a53fe85b08aab7f467156058884d9c4350" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.142296 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-b742-account-create-update-gfdkg" event={"ID":"ee94a050-f905-44f1-a5da-16536b8cdfa7","Type":"ContainerDied","Data":"c71c108608caeb76931caff20f7b0c7e5d8d5c389c0440d31bb543598d28dfb8"} Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.142346 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c71c108608caeb76931caff20f7b0c7e5d8d5c389c0440d31bb543598d28dfb8" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.142469 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-b742-account-create-update-gfdkg" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.144917 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-da66-account-create-update-tk8pd" event={"ID":"0d045bc7-38b2-46f5-8cd8-cf634003bedf","Type":"ContainerDied","Data":"6cabee2602a6d9e4308c8db70b1d7f8643862ae4eef1ae7803777760563d87cb"} Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.144976 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cabee2602a6d9e4308c8db70b1d7f8643862ae4eef1ae7803777760563d87cb" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.145143 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-da66-account-create-update-tk8pd" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.156014 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e37b3d77-de2e-4be9-9984-550d4ba0f2f0","Type":"ContainerStarted","Data":"5fbf766730c5e0b50f415ecb57ded1219f87c9fea289a1de721175fa49897a02"} Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.156068 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e37b3d77-de2e-4be9-9984-550d4ba0f2f0","Type":"ContainerStarted","Data":"350411d685010b4776bb5fa669ae387802d1836dc40c6f341061fd360acd8211"} Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.163060 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mq9np" event={"ID":"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419","Type":"ContainerStarted","Data":"53c212eae0f18baff6fdcd0d88db82f3271a3997b68292e7fdae508ea7808719"} Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.167994 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pnvjb" event={"ID":"cb216b07-9809-4b8b-857b-ac1192747b9c","Type":"ContainerDied","Data":"417ef1960b2d4ea70aed07efa739673778171eebc874e98d5bdf429380cac86f"} Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.168186 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="417ef1960b2d4ea70aed07efa739673778171eebc874e98d5bdf429380cac86f" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.168270 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pnvjb" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.194544 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-mq9np" podStartSLOduration=3.516220301 podStartE2EDuration="12.19441146s" podCreationTimestamp="2026-03-13 10:24:43 +0000 UTC" firstStartedPulling="2026-03-13 10:24:45.790838775 +0000 UTC m=+1259.813368908" lastFinishedPulling="2026-03-13 10:24:54.469029934 +0000 UTC m=+1268.491560067" observedRunningTime="2026-03-13 10:24:55.184359924 +0000 UTC m=+1269.206890057" watchObservedRunningTime="2026-03-13 10:24:55.19441146 +0000 UTC m=+1269.216941593" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.826091 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9kd7r-config-jtj4g"] Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.833933 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9kd7r-config-jtj4g"] Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.950659 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9kd7r-config-hlhk2"] Mar 13 10:24:55 crc kubenswrapper[4632]: E0313 10:24:55.954109 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcdcfad1-d735-4b55-ae65-0ce16bdbc79d" containerName="mariadb-database-create" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954144 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcdcfad1-d735-4b55-ae65-0ce16bdbc79d" containerName="mariadb-database-create" Mar 13 10:24:55 crc kubenswrapper[4632]: E0313 10:24:55.954166 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a03b92ea-cd2c-455d-a88e-1d57b958b138" containerName="mariadb-account-create-update" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954174 4632 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="a03b92ea-cd2c-455d-a88e-1d57b958b138" containerName="mariadb-account-create-update" Mar 13 10:24:55 crc kubenswrapper[4632]: E0313 10:24:55.954192 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d045bc7-38b2-46f5-8cd8-cf634003bedf" containerName="mariadb-account-create-update" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954198 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d045bc7-38b2-46f5-8cd8-cf634003bedf" containerName="mariadb-account-create-update" Mar 13 10:24:55 crc kubenswrapper[4632]: E0313 10:24:55.954210 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f877b5-12d3-4b48-a9eb-9ee2629e830a" containerName="ovn-config" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954216 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f877b5-12d3-4b48-a9eb-9ee2629e830a" containerName="ovn-config" Mar 13 10:24:55 crc kubenswrapper[4632]: E0313 10:24:55.954226 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0000da-8f11-4e97-8ab5-1bcfea0ac894" containerName="mariadb-database-create" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954232 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0000da-8f11-4e97-8ab5-1bcfea0ac894" containerName="mariadb-database-create" Mar 13 10:24:55 crc kubenswrapper[4632]: E0313 10:24:55.954245 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee94a050-f905-44f1-a5da-16536b8cdfa7" containerName="mariadb-account-create-update" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954253 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee94a050-f905-44f1-a5da-16536b8cdfa7" containerName="mariadb-account-create-update" Mar 13 10:24:55 crc kubenswrapper[4632]: E0313 10:24:55.954259 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb216b07-9809-4b8b-857b-ac1192747b9c" containerName="mariadb-database-create" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954265 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb216b07-9809-4b8b-857b-ac1192747b9c" containerName="mariadb-database-create" Mar 13 10:24:55 crc kubenswrapper[4632]: E0313 10:24:55.954278 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239c554e-360d-4f04-86f0-b2b98974bad3" containerName="mariadb-database-create" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954283 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="239c554e-360d-4f04-86f0-b2b98974bad3" containerName="mariadb-database-create" Mar 13 10:24:55 crc kubenswrapper[4632]: E0313 10:24:55.954295 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47870992-2db9-46f4-84d9-fd50fb9851eb" containerName="mariadb-account-create-update" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954303 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="47870992-2db9-46f4-84d9-fd50fb9851eb" containerName="mariadb-account-create-update" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954517 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa0000da-8f11-4e97-8ab5-1bcfea0ac894" containerName="mariadb-database-create" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954530 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="a03b92ea-cd2c-455d-a88e-1d57b958b138" containerName="mariadb-account-create-update" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954542 4632 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ee94a050-f905-44f1-a5da-16536b8cdfa7" containerName="mariadb-account-create-update" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954551 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb216b07-9809-4b8b-857b-ac1192747b9c" containerName="mariadb-database-create" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954562 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d045bc7-38b2-46f5-8cd8-cf634003bedf" containerName="mariadb-account-create-update" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954570 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f877b5-12d3-4b48-a9eb-9ee2629e830a" containerName="ovn-config" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954580 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="239c554e-360d-4f04-86f0-b2b98974bad3" containerName="mariadb-database-create" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954588 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcdcfad1-d735-4b55-ae65-0ce16bdbc79d" containerName="mariadb-database-create" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.954600 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="47870992-2db9-46f4-84d9-fd50fb9851eb" containerName="mariadb-account-create-update" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.955221 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.970531 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9kd7r-config-hlhk2"] Mar 13 10:24:55 crc kubenswrapper[4632]: I0313 10:24:55.970774 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.070138 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f877b5-12d3-4b48-a9eb-9ee2629e830a" path="/var/lib/kubelet/pods/67f877b5-12d3-4b48-a9eb-9ee2629e830a/volumes" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.127688 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-run-ovn\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.127762 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8874c236-ccb3-45c3-9838-42542e1483fb-scripts\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.127849 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-run\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.127878 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-log-ovn\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.127901 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mklx\" (UniqueName: \"kubernetes.io/projected/8874c236-ccb3-45c3-9838-42542e1483fb-kube-api-access-2mklx\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.127982 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8874c236-ccb3-45c3-9838-42542e1483fb-additional-scripts\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.229784 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8874c236-ccb3-45c3-9838-42542e1483fb-additional-scripts\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.229864 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-run-ovn\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.229917 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8874c236-ccb3-45c3-9838-42542e1483fb-scripts\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.230012 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-run\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.230034 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-log-ovn\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.230053 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mklx\" (UniqueName: \"kubernetes.io/projected/8874c236-ccb3-45c3-9838-42542e1483fb-kube-api-access-2mklx\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.230186 4632 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-run-ovn\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.230255 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-run\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.230587 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8874c236-ccb3-45c3-9838-42542e1483fb-additional-scripts\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.230661 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-log-ovn\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.232082 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8874c236-ccb3-45c3-9838-42542e1483fb-scripts\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.251733 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mklx\" (UniqueName: \"kubernetes.io/projected/8874c236-ccb3-45c3-9838-42542e1483fb-kube-api-access-2mklx\") pod \"ovn-controller-9kd7r-config-hlhk2\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.327858 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:56 crc kubenswrapper[4632]: I0313 10:24:56.973737 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9kd7r-config-hlhk2"] Mar 13 10:24:57 crc kubenswrapper[4632]: I0313 10:24:57.194358 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9kd7r-config-hlhk2" event={"ID":"8874c236-ccb3-45c3-9838-42542e1483fb","Type":"ContainerStarted","Data":"de5e563237b6f5d5a900bfa4db200770d531a773a59fc045ca36a99783ac23c3"} Mar 13 10:24:57 crc kubenswrapper[4632]: I0313 10:24:57.200211 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e37b3d77-de2e-4be9-9984-550d4ba0f2f0","Type":"ContainerStarted","Data":"6410c58da9c7335dcc151a74496460faab6690d4d868622c9f53e1ac95af015d"} Mar 13 10:24:58 crc kubenswrapper[4632]: I0313 10:24:58.212891 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e37b3d77-de2e-4be9-9984-550d4ba0f2f0","Type":"ContainerStarted","Data":"2ad862ed96fbf3f56e9fff7d04adafeceb7e022528ca4c0cfd19fd10939e9b7a"} Mar 13 10:24:58 crc kubenswrapper[4632]: I0313 10:24:58.214019 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e37b3d77-de2e-4be9-9984-550d4ba0f2f0","Type":"ContainerStarted","Data":"deed0762a2707d9fc7e47fb11fd2a976ec5f0a219b2515f95c27456e30196111"} Mar 13 10:24:58 crc kubenswrapper[4632]: I0313 10:24:58.214099 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e37b3d77-de2e-4be9-9984-550d4ba0f2f0","Type":"ContainerStarted","Data":"75f72d4258d1b9a4742e65c0ab7da327b79d9d4d43591f14e17852aa410cacc0"} Mar 13 10:24:58 crc kubenswrapper[4632]: I0313 10:24:58.215386 4632 generic.go:334] "Generic (PLEG): container finished" podID="8874c236-ccb3-45c3-9838-42542e1483fb" containerID="98a44d8e524895de3db65a2da91c25a6875681d7e31dfa6eb205635df601d593" exitCode=0 Mar 13 10:24:58 crc kubenswrapper[4632]: I0313 10:24:58.215468 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9kd7r-config-hlhk2" event={"ID":"8874c236-ccb3-45c3-9838-42542e1483fb","Type":"ContainerDied","Data":"98a44d8e524895de3db65a2da91c25a6875681d7e31dfa6eb205635df601d593"} Mar 13 10:24:58 crc kubenswrapper[4632]: I0313 10:24:58.931230 4632 scope.go:117] "RemoveContainer" containerID="971cfa2ec11ce234b8c8c574daddb17b130773fddba410f62dd84c800e0f4023" Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.239358 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e37b3d77-de2e-4be9-9984-550d4ba0f2f0","Type":"ContainerStarted","Data":"c974bded2de6ee2b905b68670d69ac1e80116509a459a79b905c7f37bdacff58"} Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.626635 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.801915 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-log-ovn\") pod \"8874c236-ccb3-45c3-9838-42542e1483fb\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.802304 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-run\") pod \"8874c236-ccb3-45c3-9838-42542e1483fb\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.802354 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8874c236-ccb3-45c3-9838-42542e1483fb-additional-scripts\") pod \"8874c236-ccb3-45c3-9838-42542e1483fb\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.802373 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8874c236-ccb3-45c3-9838-42542e1483fb-scripts\") pod \"8874c236-ccb3-45c3-9838-42542e1483fb\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.802056 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8874c236-ccb3-45c3-9838-42542e1483fb" (UID: "8874c236-ccb3-45c3-9838-42542e1483fb"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.802329 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-run" (OuterVolumeSpecName: "var-run") pod "8874c236-ccb3-45c3-9838-42542e1483fb" (UID: "8874c236-ccb3-45c3-9838-42542e1483fb"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.802441 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mklx\" (UniqueName: \"kubernetes.io/projected/8874c236-ccb3-45c3-9838-42542e1483fb-kube-api-access-2mklx\") pod \"8874c236-ccb3-45c3-9838-42542e1483fb\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.802471 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-run-ovn\") pod \"8874c236-ccb3-45c3-9838-42542e1483fb\" (UID: \"8874c236-ccb3-45c3-9838-42542e1483fb\") " Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.802767 4632 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.802780 4632 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-run\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.802816 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8874c236-ccb3-45c3-9838-42542e1483fb" (UID: "8874c236-ccb3-45c3-9838-42542e1483fb"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.803663 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8874c236-ccb3-45c3-9838-42542e1483fb-scripts" (OuterVolumeSpecName: "scripts") pod "8874c236-ccb3-45c3-9838-42542e1483fb" (UID: "8874c236-ccb3-45c3-9838-42542e1483fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.803968 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8874c236-ccb3-45c3-9838-42542e1483fb-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "8874c236-ccb3-45c3-9838-42542e1483fb" (UID: "8874c236-ccb3-45c3-9838-42542e1483fb"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.811624 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8874c236-ccb3-45c3-9838-42542e1483fb-kube-api-access-2mklx" (OuterVolumeSpecName: "kube-api-access-2mklx") pod "8874c236-ccb3-45c3-9838-42542e1483fb" (UID: "8874c236-ccb3-45c3-9838-42542e1483fb"). InnerVolumeSpecName "kube-api-access-2mklx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.904251 4632 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8874c236-ccb3-45c3-9838-42542e1483fb-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.904298 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8874c236-ccb3-45c3-9838-42542e1483fb-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.904312 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mklx\" (UniqueName: \"kubernetes.io/projected/8874c236-ccb3-45c3-9838-42542e1483fb-kube-api-access-2mklx\") on node \"crc\" DevicePath \"\"" Mar 13 10:24:59 crc kubenswrapper[4632]: I0313 10:24:59.904327 4632 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8874c236-ccb3-45c3-9838-42542e1483fb-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:00 crc kubenswrapper[4632]: I0313 10:25:00.261933 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l6hpb" event={"ID":"4f1c5663-463b-45e2-b200-64e73e6d5698","Type":"ContainerStarted","Data":"bf8d93edd68f1cf79021467ff9910419baf75397a4140fb3d25bca7f97abbf70"} Mar 13 10:25:00 crc kubenswrapper[4632]: I0313 10:25:00.266797 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9kd7r-config-hlhk2" Mar 13 10:25:00 crc kubenswrapper[4632]: I0313 10:25:00.267083 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9kd7r-config-hlhk2" event={"ID":"8874c236-ccb3-45c3-9838-42542e1483fb","Type":"ContainerDied","Data":"de5e563237b6f5d5a900bfa4db200770d531a773a59fc045ca36a99783ac23c3"} Mar 13 10:25:00 crc kubenswrapper[4632]: I0313 10:25:00.267112 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de5e563237b6f5d5a900bfa4db200770d531a773a59fc045ca36a99783ac23c3" Mar 13 10:25:00 crc kubenswrapper[4632]: I0313 10:25:00.297767 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e37b3d77-de2e-4be9-9984-550d4ba0f2f0","Type":"ContainerStarted","Data":"992aa88321f15a34fdab3b0ce20b6c83dd648c4655d479b207bb5da20558c851"} Mar 13 10:25:00 crc kubenswrapper[4632]: I0313 10:25:00.297829 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e37b3d77-de2e-4be9-9984-550d4ba0f2f0","Type":"ContainerStarted","Data":"24f32d07c7b51462bf0a0a775221d99b9ece41557d0064e4a499eefe18ba7dfc"} Mar 13 10:25:00 crc kubenswrapper[4632]: I0313 10:25:00.297848 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e37b3d77-de2e-4be9-9984-550d4ba0f2f0","Type":"ContainerStarted","Data":"d5e01d4eefcd8e0fb049c2a6a11ed0324853c68edd619187e50b5676b13f1103"} Mar 13 10:25:00 crc kubenswrapper[4632]: I0313 10:25:00.299995 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-l6hpb" podStartSLOduration=2.853085272 podStartE2EDuration="36.299971059s" podCreationTimestamp="2026-03-13 10:24:24 +0000 UTC" firstStartedPulling="2026-03-13 10:24:25.287597404 +0000 UTC m=+1239.310127527" lastFinishedPulling="2026-03-13 10:24:58.734483181 +0000 UTC m=+1272.757013314" 
observedRunningTime="2026-03-13 10:25:00.279911376 +0000 UTC m=+1274.302441519" watchObservedRunningTime="2026-03-13 10:25:00.299971059 +0000 UTC m=+1274.322501192" Mar 13 10:25:00 crc kubenswrapper[4632]: I0313 10:25:00.739043 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9kd7r-config-hlhk2"] Mar 13 10:25:00 crc kubenswrapper[4632]: I0313 10:25:00.773456 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9kd7r-config-hlhk2"] Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.313511 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e37b3d77-de2e-4be9-9984-550d4ba0f2f0","Type":"ContainerStarted","Data":"0af8435ee83cb5cebf4c2cf61b496b0ccc8ad30e3126d780d20bab35af32a80b"} Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.313560 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e37b3d77-de2e-4be9-9984-550d4ba0f2f0","Type":"ContainerStarted","Data":"4bed88103529c366d530f8c0b3586e87d09239d52f2dc963ad771a5ba7b1873e"} Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.313572 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e37b3d77-de2e-4be9-9984-550d4ba0f2f0","Type":"ContainerStarted","Data":"de216b68315e4d4fe5c2c01408fd20a4ef13238294cf3f3a9267a8193c8962b4"} Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.315323 4632 generic.go:334] "Generic (PLEG): container finished" podID="e824ae7d-dbbd-496b-b8b0-8b5c59a4d419" containerID="53c212eae0f18baff6fdcd0d88db82f3271a3997b68292e7fdae508ea7808719" exitCode=0 Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.315358 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mq9np" event={"ID":"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419","Type":"ContainerDied","Data":"53c212eae0f18baff6fdcd0d88db82f3271a3997b68292e7fdae508ea7808719"} Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.357566 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=43.438449855 podStartE2EDuration="55.357543615s" podCreationTimestamp="2026-03-13 10:24:06 +0000 UTC" firstStartedPulling="2026-03-13 10:24:46.813789012 +0000 UTC m=+1260.836319145" lastFinishedPulling="2026-03-13 10:24:58.732882772 +0000 UTC m=+1272.755412905" observedRunningTime="2026-03-13 10:25:01.353407143 +0000 UTC m=+1275.375937276" watchObservedRunningTime="2026-03-13 10:25:01.357543615 +0000 UTC m=+1275.380073758" Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.784666 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-756d4f5c49-ng8tb"] Mar 13 10:25:01 crc kubenswrapper[4632]: E0313 10:25:01.785625 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8874c236-ccb3-45c3-9838-42542e1483fb" containerName="ovn-config" Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.785750 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8874c236-ccb3-45c3-9838-42542e1483fb" containerName="ovn-config" Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.786061 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8874c236-ccb3-45c3-9838-42542e1483fb" containerName="ovn-config" Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.787279 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.790292 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.796398 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-756d4f5c49-ng8tb"] Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.946013 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-ovsdbserver-nb\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.946075 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-dns-svc\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.946110 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-ovsdbserver-sb\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.946351 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-config\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.946532 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5jtw\" (UniqueName: \"kubernetes.io/projected/875ab9dc-abac-45c4-86b9-b0bfccdfb240-kube-api-access-t5jtw\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:01 crc kubenswrapper[4632]: I0313 10:25:01.946836 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-dns-swift-storage-0\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.048113 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-dns-swift-storage-0\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.048415 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-ovsdbserver-nb\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: 
\"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.048495 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-dns-svc\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.048565 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-ovsdbserver-sb\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.048658 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-config\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.048748 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5jtw\" (UniqueName: \"kubernetes.io/projected/875ab9dc-abac-45c4-86b9-b0bfccdfb240-kube-api-access-t5jtw\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.049137 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-dns-swift-storage-0\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.049221 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-dns-svc\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.049803 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-ovsdbserver-sb\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.049817 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-config\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.050341 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-ovsdbserver-nb\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:02 crc kubenswrapper[4632]: 
I0313 10:25:02.055984 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8874c236-ccb3-45c3-9838-42542e1483fb" path="/var/lib/kubelet/pods/8874c236-ccb3-45c3-9838-42542e1483fb/volumes" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.079756 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5jtw\" (UniqueName: \"kubernetes.io/projected/875ab9dc-abac-45c4-86b9-b0bfccdfb240-kube-api-access-t5jtw\") pod \"dnsmasq-dns-756d4f5c49-ng8tb\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.103053 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.569329 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-756d4f5c49-ng8tb"] Mar 13 10:25:02 crc kubenswrapper[4632]: W0313 10:25:02.588716 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod875ab9dc_abac_45c4_86b9_b0bfccdfb240.slice/crio-0260417d40f7e52a2270d809525583f83a0891dc6b82c6c6e13d0d522dd4a9b4 WatchSource:0}: Error finding container 0260417d40f7e52a2270d809525583f83a0891dc6b82c6c6e13d0d522dd4a9b4: Status 404 returned error can't find the container with id 0260417d40f7e52a2270d809525583f83a0891dc6b82c6c6e13d0d522dd4a9b4 Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.686899 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mq9np" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.760506 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-config-data\") pod \"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419\" (UID: \"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419\") " Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.760560 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hh8q\" (UniqueName: \"kubernetes.io/projected/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-kube-api-access-5hh8q\") pod \"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419\" (UID: \"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419\") " Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.760631 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-combined-ca-bundle\") pod \"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419\" (UID: \"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419\") " Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.765192 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-kube-api-access-5hh8q" (OuterVolumeSpecName: "kube-api-access-5hh8q") pod "e824ae7d-dbbd-496b-b8b0-8b5c59a4d419" (UID: "e824ae7d-dbbd-496b-b8b0-8b5c59a4d419"). InnerVolumeSpecName "kube-api-access-5hh8q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.802880 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e824ae7d-dbbd-496b-b8b0-8b5c59a4d419" (UID: "e824ae7d-dbbd-496b-b8b0-8b5c59a4d419"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.829835 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-config-data" (OuterVolumeSpecName: "config-data") pod "e824ae7d-dbbd-496b-b8b0-8b5c59a4d419" (UID: "e824ae7d-dbbd-496b-b8b0-8b5c59a4d419"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.862189 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.862224 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hh8q\" (UniqueName: \"kubernetes.io/projected/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-kube-api-access-5hh8q\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:02 crc kubenswrapper[4632]: I0313 10:25:02.862234 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.338111 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mq9np" event={"ID":"e824ae7d-dbbd-496b-b8b0-8b5c59a4d419","Type":"ContainerDied","Data":"1a3020d6e5b66dad152669406220e67cb7be099d82ff8fd4925d6504c1176fb1"} Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.338340 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a3020d6e5b66dad152669406220e67cb7be099d82ff8fd4925d6504c1176fb1" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.339069 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-mq9np" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.339739 4632 generic.go:334] "Generic (PLEG): container finished" podID="875ab9dc-abac-45c4-86b9-b0bfccdfb240" containerID="6cc0261f52d1f9ff6e82214738332406f7144515ee6e99809f0b2f51974a5801" exitCode=0 Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.339797 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" event={"ID":"875ab9dc-abac-45c4-86b9-b0bfccdfb240","Type":"ContainerDied","Data":"6cc0261f52d1f9ff6e82214738332406f7144515ee6e99809f0b2f51974a5801"} Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.339823 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" event={"ID":"875ab9dc-abac-45c4-86b9-b0bfccdfb240","Type":"ContainerStarted","Data":"0260417d40f7e52a2270d809525583f83a0891dc6b82c6c6e13d0d522dd4a9b4"} Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.811048 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-756d4f5c49-ng8tb"] Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.821277 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6wd56"] Mar 13 10:25:03 crc kubenswrapper[4632]: E0313 10:25:03.821746 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e824ae7d-dbbd-496b-b8b0-8b5c59a4d419" containerName="keystone-db-sync" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.821762 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="e824ae7d-dbbd-496b-b8b0-8b5c59a4d419" containerName="keystone-db-sync" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.821968 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="e824ae7d-dbbd-496b-b8b0-8b5c59a4d419" containerName="keystone-db-sync" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.822553 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.828894 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6wd56"] Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.838767 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.839113 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.839302 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-llpcf" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.839464 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.851803 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.883449 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-combined-ca-bundle\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.883572 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-scripts\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.883601 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-config-data\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.883635 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-credential-keys\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.883657 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-fernet-keys\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.883677 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9qt5\" (UniqueName: \"kubernetes.io/projected/6dd01e75-b01a-439d-953a-a7b35aefaccf-kube-api-access-q9qt5\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.949871 4632 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-6844bbffb5-6qbh8"] Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.951838 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.985029 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-credential-keys\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.985089 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-fernet-keys\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.985116 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9qt5\" (UniqueName: \"kubernetes.io/projected/6dd01e75-b01a-439d-953a-a7b35aefaccf-kube-api-access-q9qt5\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.985179 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-combined-ca-bundle\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.985295 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-scripts\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:03 crc kubenswrapper[4632]: I0313 10:25:03.985335 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-config-data\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.021630 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-credential-keys\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.025026 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-scripts\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.025740 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-fernet-keys\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " 
pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.026662 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-config-data\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.052234 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-combined-ca-bundle\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.083703 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6844bbffb5-6qbh8"] Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.087277 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-dns-swift-storage-0\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.087339 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-config\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.087399 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-dns-svc\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.087476 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-ovsdbserver-nb\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.087501 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slg5l\" (UniqueName: \"kubernetes.io/projected/8e8562e5-7677-460c-864c-c0f1dcd2ac41-kube-api-access-slg5l\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.087528 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-ovsdbserver-sb\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.091417 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9qt5\" 
(UniqueName: \"kubernetes.io/projected/6dd01e75-b01a-439d-953a-a7b35aefaccf-kube-api-access-q9qt5\") pod \"keystone-bootstrap-6wd56\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.158241 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.189111 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-dns-svc\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.189184 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-ovsdbserver-nb\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.189205 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slg5l\" (UniqueName: \"kubernetes.io/projected/8e8562e5-7677-460c-864c-c0f1dcd2ac41-kube-api-access-slg5l\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.189239 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-ovsdbserver-sb\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.189279 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-dns-swift-storage-0\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.189321 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-config\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.191749 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-dns-svc\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.192391 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-ovsdbserver-nb\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.193478 4632 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-ovsdbserver-sb\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.194107 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-config\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.194726 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-dns-swift-storage-0\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.206924 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-7fvlk"] Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.207931 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-7fvlk" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.223665 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-vbbdq" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.223998 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.265776 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slg5l\" (UniqueName: \"kubernetes.io/projected/8e8562e5-7677-460c-864c-c0f1dcd2ac41-kube-api-access-slg5l\") pod \"dnsmasq-dns-6844bbffb5-6qbh8\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.274053 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-7fvlk"] Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.290362 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-combined-ca-bundle\") pod \"heat-db-sync-7fvlk\" (UID: \"d722ddd7-e65d-44f7-a02d-18ddf126ccf5\") " pod="openstack/heat-db-sync-7fvlk" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.290420 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-config-data\") pod \"heat-db-sync-7fvlk\" (UID: \"d722ddd7-e65d-44f7-a02d-18ddf126ccf5\") " pod="openstack/heat-db-sync-7fvlk" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.290465 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5rzh\" (UniqueName: \"kubernetes.io/projected/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-kube-api-access-n5rzh\") pod \"heat-db-sync-7fvlk\" (UID: \"d722ddd7-e65d-44f7-a02d-18ddf126ccf5\") " pod="openstack/heat-db-sync-7fvlk" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.295398 4632 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.391503 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" event={"ID":"875ab9dc-abac-45c4-86b9-b0bfccdfb240","Type":"ContainerStarted","Data":"d5fd04632add169f284e136d8cc0cc1ed2dafea7a7d420e5a55cd81556345dd4"} Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.391637 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-combined-ca-bundle\") pod \"heat-db-sync-7fvlk\" (UID: \"d722ddd7-e65d-44f7-a02d-18ddf126ccf5\") " pod="openstack/heat-db-sync-7fvlk" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.391679 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-config-data\") pod \"heat-db-sync-7fvlk\" (UID: \"d722ddd7-e65d-44f7-a02d-18ddf126ccf5\") " pod="openstack/heat-db-sync-7fvlk" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.391688 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" podUID="875ab9dc-abac-45c4-86b9-b0bfccdfb240" containerName="dnsmasq-dns" containerID="cri-o://d5fd04632add169f284e136d8cc0cc1ed2dafea7a7d420e5a55cd81556345dd4" gracePeriod=10 Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.391724 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5rzh\" (UniqueName: \"kubernetes.io/projected/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-kube-api-access-n5rzh\") pod \"heat-db-sync-7fvlk\" (UID: \"d722ddd7-e65d-44f7-a02d-18ddf126ccf5\") " pod="openstack/heat-db-sync-7fvlk" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.392329 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.396787 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-combined-ca-bundle\") pod \"heat-db-sync-7fvlk\" (UID: \"d722ddd7-e65d-44f7-a02d-18ddf126ccf5\") " pod="openstack/heat-db-sync-7fvlk" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.405840 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-config-data\") pod \"heat-db-sync-7fvlk\" (UID: \"d722ddd7-e65d-44f7-a02d-18ddf126ccf5\") " pod="openstack/heat-db-sync-7fvlk" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.513423 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.516149 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.551402 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.551584 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.597161 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270ebc10-986f-4473-8a5e-9094de34ae98-log-httpd\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.597242 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.597270 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-config-data\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.597335 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270ebc10-986f-4473-8a5e-9094de34ae98-run-httpd\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.597467 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.597504 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-scripts\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.597618 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz5nk\" (UniqueName: \"kubernetes.io/projected/270ebc10-986f-4473-8a5e-9094de34ae98-kube-api-access-jz5nk\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.615466 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-hlsnz"] Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.616840 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hlsnz" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.628633 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.628871 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r2t7p" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.629631 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.643640 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.701078 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7221b50-7231-4ade-917e-b10f177cb539-config\") pod \"neutron-db-sync-hlsnz\" (UID: \"b7221b50-7231-4ade-917e-b10f177cb539\") " pod="openstack/neutron-db-sync-hlsnz" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.701136 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz5nk\" (UniqueName: \"kubernetes.io/projected/270ebc10-986f-4473-8a5e-9094de34ae98-kube-api-access-jz5nk\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.701174 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7221b50-7231-4ade-917e-b10f177cb539-combined-ca-bundle\") pod \"neutron-db-sync-hlsnz\" (UID: \"b7221b50-7231-4ade-917e-b10f177cb539\") " pod="openstack/neutron-db-sync-hlsnz" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.701209 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270ebc10-986f-4473-8a5e-9094de34ae98-log-httpd\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.701229 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.701249 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngb4t\" (UniqueName: \"kubernetes.io/projected/b7221b50-7231-4ade-917e-b10f177cb539-kube-api-access-ngb4t\") pod \"neutron-db-sync-hlsnz\" (UID: \"b7221b50-7231-4ade-917e-b10f177cb539\") " pod="openstack/neutron-db-sync-hlsnz" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.701269 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-config-data\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.701298 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/270ebc10-986f-4473-8a5e-9094de34ae98-run-httpd\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.701364 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.701391 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-scripts\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.702776 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270ebc10-986f-4473-8a5e-9094de34ae98-run-httpd\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.703302 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270ebc10-986f-4473-8a5e-9094de34ae98-log-httpd\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.706115 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" podStartSLOduration=3.706095517 podStartE2EDuration="3.706095517s" podCreationTimestamp="2026-03-13 10:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:25:04.696166944 +0000 UTC m=+1278.718697087" watchObservedRunningTime="2026-03-13 10:25:04.706095517 +0000 UTC m=+1278.728625660" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.728273 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-config-data\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.728910 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.729646 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-scripts\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.754040 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 
10:25:04.809821 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hlsnz"] Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.810742 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7221b50-7231-4ade-917e-b10f177cb539-combined-ca-bundle\") pod \"neutron-db-sync-hlsnz\" (UID: \"b7221b50-7231-4ade-917e-b10f177cb539\") " pod="openstack/neutron-db-sync-hlsnz" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.810781 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngb4t\" (UniqueName: \"kubernetes.io/projected/b7221b50-7231-4ade-917e-b10f177cb539-kube-api-access-ngb4t\") pod \"neutron-db-sync-hlsnz\" (UID: \"b7221b50-7231-4ade-917e-b10f177cb539\") " pod="openstack/neutron-db-sync-hlsnz" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.810903 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7221b50-7231-4ade-917e-b10f177cb539-config\") pod \"neutron-db-sync-hlsnz\" (UID: \"b7221b50-7231-4ade-917e-b10f177cb539\") " pod="openstack/neutron-db-sync-hlsnz" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.815265 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz5nk\" (UniqueName: \"kubernetes.io/projected/270ebc10-986f-4473-8a5e-9094de34ae98-kube-api-access-jz5nk\") pod \"ceilometer-0\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.849881 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="1761ca69-46fd-4375-af60-22b3e77c19a2" containerName="galera" probeResult="failure" output="command timed out" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.867965 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7221b50-7231-4ade-917e-b10f177cb539-config\") pod \"neutron-db-sync-hlsnz\" (UID: \"b7221b50-7231-4ade-917e-b10f177cb539\") " pod="openstack/neutron-db-sync-hlsnz" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.868465 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7221b50-7231-4ade-917e-b10f177cb539-combined-ca-bundle\") pod \"neutron-db-sync-hlsnz\" (UID: \"b7221b50-7231-4ade-917e-b10f177cb539\") " pod="openstack/neutron-db-sync-hlsnz" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.869235 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.901194 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6fb489b64f-prckv"] Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.913017 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.921164 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngb4t\" (UniqueName: \"kubernetes.io/projected/b7221b50-7231-4ade-917e-b10f177cb539-kube-api-access-ngb4t\") pod \"neutron-db-sync-hlsnz\" (UID: \"b7221b50-7231-4ade-917e-b10f177cb539\") " pod="openstack/neutron-db-sync-hlsnz" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.928303 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.944901 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.945118 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-59mgb" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.945220 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.948341 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6fb489b64f-prckv"] Mar 13 10:25:04 crc kubenswrapper[4632]: I0313 10:25:04.969474 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hlsnz" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.013442 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00c4bdc6-a22c-4ab6-b898-cf591b92756b-scripts\") pod \"horizon-6fb489b64f-prckv\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.013726 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00c4bdc6-a22c-4ab6-b898-cf591b92756b-config-data\") pod \"horizon-6fb489b64f-prckv\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.013837 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00c4bdc6-a22c-4ab6-b898-cf591b92756b-horizon-secret-key\") pod \"horizon-6fb489b64f-prckv\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.013922 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c4bdc6-a22c-4ab6-b898-cf591b92756b-logs\") pod \"horizon-6fb489b64f-prckv\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.014553 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwx9r\" (UniqueName: \"kubernetes.io/projected/00c4bdc6-a22c-4ab6-b898-cf591b92756b-kube-api-access-gwx9r\") pod \"horizon-6fb489b64f-prckv\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.025862 4632 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-db-sync-kq8lc"] Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.027033 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.046325 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-j7c52" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.046559 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.046699 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.119509 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-combined-ca-bundle\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.119574 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00c4bdc6-a22c-4ab6-b898-cf591b92756b-config-data\") pod \"horizon-6fb489b64f-prckv\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.119599 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-scripts\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.119640 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-db-sync-config-data\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.119676 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00c4bdc6-a22c-4ab6-b898-cf591b92756b-horizon-secret-key\") pod \"horizon-6fb489b64f-prckv\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.119702 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c4bdc6-a22c-4ab6-b898-cf591b92756b-logs\") pod \"horizon-6fb489b64f-prckv\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.119743 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m5bn\" (UniqueName: \"kubernetes.io/projected/8f916c05-f172-42b6-9b13-0c8d2058bfb1-kube-api-access-5m5bn\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.119785 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gwx9r\" (UniqueName: \"kubernetes.io/projected/00c4bdc6-a22c-4ab6-b898-cf591b92756b-kube-api-access-gwx9r\") pod \"horizon-6fb489b64f-prckv\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.119829 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-config-data\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.119895 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f916c05-f172-42b6-9b13-0c8d2058bfb1-etc-machine-id\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.119999 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00c4bdc6-a22c-4ab6-b898-cf591b92756b-scripts\") pod \"horizon-6fb489b64f-prckv\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.120783 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00c4bdc6-a22c-4ab6-b898-cf591b92756b-scripts\") pod \"horizon-6fb489b64f-prckv\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.129865 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00c4bdc6-a22c-4ab6-b898-cf591b92756b-config-data\") pod \"horizon-6fb489b64f-prckv\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.130192 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c4bdc6-a22c-4ab6-b898-cf591b92756b-logs\") pod \"horizon-6fb489b64f-prckv\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.211022 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kq8lc"] Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.223041 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f916c05-f172-42b6-9b13-0c8d2058bfb1-etc-machine-id\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.223175 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-combined-ca-bundle\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.223217 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-scripts\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.223266 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-db-sync-config-data\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.223338 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m5bn\" (UniqueName: \"kubernetes.io/projected/8f916c05-f172-42b6-9b13-0c8d2058bfb1-kube-api-access-5m5bn\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.223425 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-config-data\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.233794 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-scripts\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.233886 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f916c05-f172-42b6-9b13-0c8d2058bfb1-etc-machine-id\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.239709 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-combined-ca-bundle\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.245443 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwx9r\" (UniqueName: \"kubernetes.io/projected/00c4bdc6-a22c-4ab6-b898-cf591b92756b-kube-api-access-gwx9r\") pod \"horizon-6fb489b64f-prckv\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.246175 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-db-sync-config-data\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.246199 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-config-data\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc 
kubenswrapper[4632]: I0313 10:25:05.258413 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00c4bdc6-a22c-4ab6-b898-cf591b92756b-horizon-secret-key\") pod \"horizon-6fb489b64f-prckv\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.296748 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.389722 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m5bn\" (UniqueName: \"kubernetes.io/projected/8f916c05-f172-42b6-9b13-0c8d2058bfb1-kube-api-access-5m5bn\") pod \"cinder-db-sync-kq8lc\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") " pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.432346 4632 generic.go:334] "Generic (PLEG): container finished" podID="875ab9dc-abac-45c4-86b9-b0bfccdfb240" containerID="d5fd04632add169f284e136d8cc0cc1ed2dafea7a7d420e5a55cd81556345dd4" exitCode=0 Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.432395 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" event={"ID":"875ab9dc-abac-45c4-86b9-b0bfccdfb240","Type":"ContainerDied","Data":"d5fd04632add169f284e136d8cc0cc1ed2dafea7a7d420e5a55cd81556345dd4"} Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.452212 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-zdgpw"] Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.453978 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zdgpw" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.467726 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.467917 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-m45mn" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.483099 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zdgpw"] Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.542124 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-htnd9"] Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.543431 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.545171 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/418cb883-abd1-46b4-957f-0a40f3e62297-db-sync-config-data\") pod \"barbican-db-sync-zdgpw\" (UID: \"418cb883-abd1-46b4-957f-0a40f3e62297\") " pod="openstack/barbican-db-sync-zdgpw" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.545298 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgn44\" (UniqueName: \"kubernetes.io/projected/418cb883-abd1-46b4-957f-0a40f3e62297-kube-api-access-zgn44\") pod \"barbican-db-sync-zdgpw\" (UID: \"418cb883-abd1-46b4-957f-0a40f3e62297\") " pod="openstack/barbican-db-sync-zdgpw" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.545334 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418cb883-abd1-46b4-957f-0a40f3e62297-combined-ca-bundle\") pod \"barbican-db-sync-zdgpw\" (UID: \"418cb883-abd1-46b4-957f-0a40f3e62297\") " pod="openstack/barbican-db-sync-zdgpw" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.570615 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.574181 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6tvl4" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.587501 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.620753 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5rzh\" (UniqueName: \"kubernetes.io/projected/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-kube-api-access-n5rzh\") pod \"heat-db-sync-7fvlk\" (UID: \"d722ddd7-e65d-44f7-a02d-18ddf126ccf5\") " pod="openstack/heat-db-sync-7fvlk" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.631072 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6844bbffb5-6qbh8"] Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.647239 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-scripts\") pod \"placement-db-sync-htnd9\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.647331 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-combined-ca-bundle\") pod \"placement-db-sync-htnd9\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.647384 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgn44\" (UniqueName: \"kubernetes.io/projected/418cb883-abd1-46b4-957f-0a40f3e62297-kube-api-access-zgn44\") pod \"barbican-db-sync-zdgpw\" (UID: \"418cb883-abd1-46b4-957f-0a40f3e62297\") " pod="openstack/barbican-db-sync-zdgpw" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 
10:25:05.647410 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e92afa62-9c75-4e0e-92f4-76e57328d7a0-logs\") pod \"placement-db-sync-htnd9\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.647458 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418cb883-abd1-46b4-957f-0a40f3e62297-combined-ca-bundle\") pod \"barbican-db-sync-zdgpw\" (UID: \"418cb883-abd1-46b4-957f-0a40f3e62297\") " pod="openstack/barbican-db-sync-zdgpw" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.647521 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/418cb883-abd1-46b4-957f-0a40f3e62297-db-sync-config-data\") pod \"barbican-db-sync-zdgpw\" (UID: \"418cb883-abd1-46b4-957f-0a40f3e62297\") " pod="openstack/barbican-db-sync-zdgpw" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.647562 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqp5h\" (UniqueName: \"kubernetes.io/projected/e92afa62-9c75-4e0e-92f4-76e57328d7a0-kube-api-access-mqp5h\") pod \"placement-db-sync-htnd9\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.647611 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-config-data\") pod \"placement-db-sync-htnd9\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.652071 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-htnd9"] Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.680069 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418cb883-abd1-46b4-957f-0a40f3e62297-combined-ca-bundle\") pod \"barbican-db-sync-zdgpw\" (UID: \"418cb883-abd1-46b4-957f-0a40f3e62297\") " pod="openstack/barbican-db-sync-zdgpw" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.680477 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.688919 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgn44\" (UniqueName: \"kubernetes.io/projected/418cb883-abd1-46b4-957f-0a40f3e62297-kube-api-access-zgn44\") pod \"barbican-db-sync-zdgpw\" (UID: \"418cb883-abd1-46b4-957f-0a40f3e62297\") " pod="openstack/barbican-db-sync-zdgpw" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.698295 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/418cb883-abd1-46b4-957f-0a40f3e62297-db-sync-config-data\") pod \"barbican-db-sync-zdgpw\" (UID: \"418cb883-abd1-46b4-957f-0a40f3e62297\") " pod="openstack/barbican-db-sync-zdgpw" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.699067 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d8f9dd5cc-6nktg"] Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.711069 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.748801 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e92afa62-9c75-4e0e-92f4-76e57328d7a0-logs\") pod \"placement-db-sync-htnd9\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.749535 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e92afa62-9c75-4e0e-92f4-76e57328d7a0-logs\") pod \"placement-db-sync-htnd9\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.749689 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqp5h\" (UniqueName: \"kubernetes.io/projected/e92afa62-9c75-4e0e-92f4-76e57328d7a0-kube-api-access-mqp5h\") pod \"placement-db-sync-htnd9\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.749733 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-config-data\") pod \"placement-db-sync-htnd9\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.750086 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-scripts\") pod \"placement-db-sync-htnd9\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.766610 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-config-data\") pod \"placement-db-sync-htnd9\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.766776 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-combined-ca-bundle\") pod \"placement-db-sync-htnd9\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.768836 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d8f9dd5cc-6nktg"] Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.796246 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-scripts\") pod \"placement-db-sync-htnd9\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.809186 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-combined-ca-bundle\") pod \"placement-db-sync-htnd9\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.820219 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zdgpw" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.830768 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqp5h\" (UniqueName: \"kubernetes.io/projected/e92afa62-9c75-4e0e-92f4-76e57328d7a0-kube-api-access-mqp5h\") pod \"placement-db-sync-htnd9\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.872495 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-ovsdbserver-nb\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.872623 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-dns-swift-storage-0\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.872793 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-config\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.872845 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29zt6\" (UniqueName: \"kubernetes.io/projected/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-kube-api-access-29zt6\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.872993 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-ovsdbserver-sb\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.873099 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-dns-svc\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.889885 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-htnd9" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.893630 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-7fvlk" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.937398 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-67d6b4b8f7-nrxn8"] Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.939211 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.956537 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6wd56"] Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.980172 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-ovsdbserver-sb\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.980237 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-dns-svc\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.980361 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-ovsdbserver-nb\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.980384 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-dns-swift-storage-0\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.980457 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-config\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.980477 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29zt6\" 
(UniqueName: \"kubernetes.io/projected/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-kube-api-access-29zt6\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.981853 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-ovsdbserver-nb\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.982425 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-ovsdbserver-sb\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.982735 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-dns-swift-storage-0\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.983055 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-dns-svc\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.983307 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-config\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:05 crc kubenswrapper[4632]: I0313 10:25:05.975925 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67d6b4b8f7-nrxn8"] Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.039400 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6844bbffb5-6qbh8"] Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.091324 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxkl9\" (UniqueName: \"kubernetes.io/projected/95fe9a38-2b32-411e-9121-ad4cc32f159e-kube-api-access-zxkl9\") pod \"horizon-67d6b4b8f7-nrxn8\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") " pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.091423 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95fe9a38-2b32-411e-9121-ad4cc32f159e-scripts\") pod \"horizon-67d6b4b8f7-nrxn8\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") " pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.091455 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95fe9a38-2b32-411e-9121-ad4cc32f159e-config-data\") pod \"horizon-67d6b4b8f7-nrxn8\" (UID: 
\"95fe9a38-2b32-411e-9121-ad4cc32f159e\") " pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.091487 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95fe9a38-2b32-411e-9121-ad4cc32f159e-logs\") pod \"horizon-67d6b4b8f7-nrxn8\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") " pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.091534 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/95fe9a38-2b32-411e-9121-ad4cc32f159e-horizon-secret-key\") pod \"horizon-67d6b4b8f7-nrxn8\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") " pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.148519 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29zt6\" (UniqueName: \"kubernetes.io/projected/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-kube-api-access-29zt6\") pod \"dnsmasq-dns-7d8f9dd5cc-6nktg\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") " pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.203628 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95fe9a38-2b32-411e-9121-ad4cc32f159e-scripts\") pod \"horizon-67d6b4b8f7-nrxn8\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") " pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.203704 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95fe9a38-2b32-411e-9121-ad4cc32f159e-config-data\") pod \"horizon-67d6b4b8f7-nrxn8\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") " pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.204659 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95fe9a38-2b32-411e-9121-ad4cc32f159e-scripts\") pod \"horizon-67d6b4b8f7-nrxn8\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") " pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.207275 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95fe9a38-2b32-411e-9121-ad4cc32f159e-config-data\") pod \"horizon-67d6b4b8f7-nrxn8\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") " pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.215224 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95fe9a38-2b32-411e-9121-ad4cc32f159e-logs\") pod \"horizon-67d6b4b8f7-nrxn8\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") " pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.215367 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/95fe9a38-2b32-411e-9121-ad4cc32f159e-horizon-secret-key\") pod \"horizon-67d6b4b8f7-nrxn8\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") " pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.215513 4632 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxkl9\" (UniqueName: \"kubernetes.io/projected/95fe9a38-2b32-411e-9121-ad4cc32f159e-kube-api-access-zxkl9\") pod \"horizon-67d6b4b8f7-nrxn8\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") " pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.224795 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95fe9a38-2b32-411e-9121-ad4cc32f159e-logs\") pod \"horizon-67d6b4b8f7-nrxn8\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") " pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.232411 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/95fe9a38-2b32-411e-9121-ad4cc32f159e-horizon-secret-key\") pod \"horizon-67d6b4b8f7-nrxn8\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") " pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:06 crc kubenswrapper[4632]: W0313 10:25:06.255776 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6dd01e75_b01a_439d_953a_a7b35aefaccf.slice/crio-e5726b2fe96111fb5381a471a1eb71f1473219059f290427afb5975ccc268d97 WatchSource:0}: Error finding container e5726b2fe96111fb5381a471a1eb71f1473219059f290427afb5975ccc268d97: Status 404 returned error can't find the container with id e5726b2fe96111fb5381a471a1eb71f1473219059f290427afb5975ccc268d97 Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.318601 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hlsnz"] Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.345969 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxkl9\" (UniqueName: \"kubernetes.io/projected/95fe9a38-2b32-411e-9121-ad4cc32f159e-kube-api-access-zxkl9\") pod \"horizon-67d6b4b8f7-nrxn8\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") " pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.384548 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.454826 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.466164 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" event={"ID":"8e8562e5-7677-460c-864c-c0f1dcd2ac41","Type":"ContainerStarted","Data":"b4b78a7ce795d6bbb416eb8a947d7920ad2bf057be657ab85e7df8a0742013f4"} Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.470415 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6wd56" event={"ID":"6dd01e75-b01a-439d-953a-a7b35aefaccf","Type":"ContainerStarted","Data":"e5726b2fe96111fb5381a471a1eb71f1473219059f290427afb5975ccc268d97"} Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.531236 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.531079 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756d4f5c49-ng8tb" event={"ID":"875ab9dc-abac-45c4-86b9-b0bfccdfb240","Type":"ContainerDied","Data":"0260417d40f7e52a2270d809525583f83a0891dc6b82c6c6e13d0d522dd4a9b4"} Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.531540 4632 scope.go:117] "RemoveContainer" containerID="d5fd04632add169f284e136d8cc0cc1ed2dafea7a7d420e5a55cd81556345dd4" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.544584 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hlsnz" event={"ID":"b7221b50-7231-4ade-917e-b10f177cb539","Type":"ContainerStarted","Data":"20f645b899e167ff59a24d843990ef38d86d73ef7009bca8f9190936862bedaf"} Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.616557 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.625737 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-ovsdbserver-sb\") pod \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.625799 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5jtw\" (UniqueName: \"kubernetes.io/projected/875ab9dc-abac-45c4-86b9-b0bfccdfb240-kube-api-access-t5jtw\") pod \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.625857 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-config\") pod \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.625920 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-dns-svc\") pod \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.626006 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-ovsdbserver-nb\") pod \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.626228 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-dns-swift-storage-0\") pod \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\" (UID: \"875ab9dc-abac-45c4-86b9-b0bfccdfb240\") " Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.637492 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/875ab9dc-abac-45c4-86b9-b0bfccdfb240-kube-api-access-t5jtw" (OuterVolumeSpecName: "kube-api-access-t5jtw") pod "875ab9dc-abac-45c4-86b9-b0bfccdfb240" (UID: "875ab9dc-abac-45c4-86b9-b0bfccdfb240"). 
InnerVolumeSpecName "kube-api-access-t5jtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.666921 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.808119 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5jtw\" (UniqueName: \"kubernetes.io/projected/875ab9dc-abac-45c4-86b9-b0bfccdfb240-kube-api-access-t5jtw\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:06 crc kubenswrapper[4632]: I0313 10:25:06.829373 4632 scope.go:117] "RemoveContainer" containerID="6cc0261f52d1f9ff6e82214738332406f7144515ee6e99809f0b2f51974a5801" Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.044190 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6fb489b64f-prckv"] Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.066386 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "875ab9dc-abac-45c4-86b9-b0bfccdfb240" (UID: "875ab9dc-abac-45c4-86b9-b0bfccdfb240"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.151150 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.158297 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zdgpw"] Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.172851 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "875ab9dc-abac-45c4-86b9-b0bfccdfb240" (UID: "875ab9dc-abac-45c4-86b9-b0bfccdfb240"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.173931 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-config" (OuterVolumeSpecName: "config") pod "875ab9dc-abac-45c4-86b9-b0bfccdfb240" (UID: "875ab9dc-abac-45c4-86b9-b0bfccdfb240"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.197990 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kq8lc"] Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.209279 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "875ab9dc-abac-45c4-86b9-b0bfccdfb240" (UID: "875ab9dc-abac-45c4-86b9-b0bfccdfb240"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:07 crc kubenswrapper[4632]: W0313 10:25:07.249713 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f916c05_f172_42b6_9b13_0c8d2058bfb1.slice/crio-8ac8055b0e5fc8cb1135e4ae559dd9794358a9f9dfb68fd20402b62c57115f00 WatchSource:0}: Error finding container 8ac8055b0e5fc8cb1135e4ae559dd9794358a9f9dfb68fd20402b62c57115f00: Status 404 returned error can't find the container with id 8ac8055b0e5fc8cb1135e4ae559dd9794358a9f9dfb68fd20402b62c57115f00 Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.254403 4632 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.254557 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.254614 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.259357 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "875ab9dc-abac-45c4-86b9-b0bfccdfb240" (UID: "875ab9dc-abac-45c4-86b9-b0bfccdfb240"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.365572 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/875ab9dc-abac-45c4-86b9-b0bfccdfb240-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.375391 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-htnd9"] Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.464978 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-7fvlk"] Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.552210 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d8f9dd5cc-6nktg"] Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.612955 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270ebc10-986f-4473-8a5e-9094de34ae98","Type":"ContainerStarted","Data":"c4fcb786f7a33daa32bea87a76b7b56e9f86402051990ca301fe80823cca805f"} Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.643640 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-756d4f5c49-ng8tb"] Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.649973 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-756d4f5c49-ng8tb"] Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.650386 4632 generic.go:334] "Generic (PLEG): container finished" podID="8e8562e5-7677-460c-864c-c0f1dcd2ac41" containerID="33cfae1bedfa94ab3959b48b4e6591f364a299ec2695a359dd153fc66fde7615" exitCode=0 Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.650662 4632 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" event={"ID":"8e8562e5-7677-460c-864c-c0f1dcd2ac41","Type":"ContainerDied","Data":"33cfae1bedfa94ab3959b48b4e6591f364a299ec2695a359dd153fc66fde7615"} Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.682513 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kq8lc" event={"ID":"8f916c05-f172-42b6-9b13-0c8d2058bfb1","Type":"ContainerStarted","Data":"8ac8055b0e5fc8cb1135e4ae559dd9794358a9f9dfb68fd20402b62c57115f00"} Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.696200 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67d6b4b8f7-nrxn8"] Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.696356 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-htnd9" event={"ID":"e92afa62-9c75-4e0e-92f4-76e57328d7a0","Type":"ContainerStarted","Data":"fa8253910988ff0dbee81a3230f0ff84637c4204c805ed0e40f0cc26f23d5381"} Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.696465 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zdgpw" event={"ID":"418cb883-abd1-46b4-957f-0a40f3e62297","Type":"ContainerStarted","Data":"47e1c2b826ae3f1aaa52b7a4210b405df85537a8c7de35fb1657923a6d754982"} Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.702026 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7fvlk" event={"ID":"d722ddd7-e65d-44f7-a02d-18ddf126ccf5","Type":"ContainerStarted","Data":"76d57552a9eced6e283cb6dee93cf8db23032b8fbb20e4a910d615de236f52d7"} Mar 13 10:25:07 crc kubenswrapper[4632]: I0313 10:25:07.722918 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fb489b64f-prckv" event={"ID":"00c4bdc6-a22c-4ab6-b898-cf591b92756b","Type":"ContainerStarted","Data":"1e0d86fdbf39635fdea4aed078faa89b9573bea1f02b182e9ea0c1a965b0c550"} Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.062636 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="875ab9dc-abac-45c4-86b9-b0bfccdfb240" path="/var/lib/kubelet/pods/875ab9dc-abac-45c4-86b9-b0bfccdfb240/volumes" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.397542 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.455070 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-ovsdbserver-nb\") pod \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.455125 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-dns-svc\") pod \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.455176 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slg5l\" (UniqueName: \"kubernetes.io/projected/8e8562e5-7677-460c-864c-c0f1dcd2ac41-kube-api-access-slg5l\") pod \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.455220 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-config\") pod \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.455287 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-dns-swift-storage-0\") pod \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.455315 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-ovsdbserver-sb\") pod \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.501060 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8e8562e5-7677-460c-864c-c0f1dcd2ac41" (UID: "8e8562e5-7677-460c-864c-c0f1dcd2ac41"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.518156 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8562e5-7677-460c-864c-c0f1dcd2ac41-kube-api-access-slg5l" (OuterVolumeSpecName: "kube-api-access-slg5l") pod "8e8562e5-7677-460c-864c-c0f1dcd2ac41" (UID: "8e8562e5-7677-460c-864c-c0f1dcd2ac41"). InnerVolumeSpecName "kube-api-access-slg5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.537339 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e8562e5-7677-460c-864c-c0f1dcd2ac41" (UID: "8e8562e5-7677-460c-864c-c0f1dcd2ac41"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.558470 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8e8562e5-7677-460c-864c-c0f1dcd2ac41" (UID: "8e8562e5-7677-460c-864c-c0f1dcd2ac41"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.559398 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-ovsdbserver-sb\") pod \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\" (UID: \"8e8562e5-7677-460c-864c-c0f1dcd2ac41\") " Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.559881 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.559896 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slg5l\" (UniqueName: \"kubernetes.io/projected/8e8562e5-7677-460c-864c-c0f1dcd2ac41-kube-api-access-slg5l\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.559907 4632 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:08 crc kubenswrapper[4632]: W0313 10:25:08.560027 4632 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8e8562e5-7677-460c-864c-c0f1dcd2ac41/volumes/kubernetes.io~configmap/ovsdbserver-sb Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.560042 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8e8562e5-7677-460c-864c-c0f1dcd2ac41" (UID: "8e8562e5-7677-460c-864c-c0f1dcd2ac41"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.596567 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-config" (OuterVolumeSpecName: "config") pod "8e8562e5-7677-460c-864c-c0f1dcd2ac41" (UID: "8e8562e5-7677-460c-864c-c0f1dcd2ac41"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.629314 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8e8562e5-7677-460c-864c-c0f1dcd2ac41" (UID: "8e8562e5-7677-460c-864c-c0f1dcd2ac41"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.662963 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.662992 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.663002 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e8562e5-7677-460c-864c-c0f1dcd2ac41-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.768418 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hlsnz" event={"ID":"b7221b50-7231-4ade-917e-b10f177cb539","Type":"ContainerStarted","Data":"a4f9bd4f877455829b998ee69c6d5f9dd7fb999a6d06fe2960e4af1bfddc1eb0"} Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.776469 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67d6b4b8f7-nrxn8" event={"ID":"95fe9a38-2b32-411e-9121-ad4cc32f159e","Type":"ContainerStarted","Data":"23d0d6f6bc6174b2a86ec905a9477b2974881387bec66374cfa55dca37114aec"} Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.784582 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" event={"ID":"8e8562e5-7677-460c-864c-c0f1dcd2ac41","Type":"ContainerDied","Data":"b4b78a7ce795d6bbb416eb8a947d7920ad2bf057be657ab85e7df8a0742013f4"} Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.784899 4632 scope.go:117] "RemoveContainer" containerID="33cfae1bedfa94ab3959b48b4e6591f364a299ec2695a359dd153fc66fde7615" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.785022 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6844bbffb5-6qbh8" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.920167 4632 generic.go:334] "Generic (PLEG): container finished" podID="78e29b83-b50e-46db-a8d6-bba0ecfb5c08" containerID="80dc69ee9ec968911aeb73d01429c087d3144584fba19b07c0a8b37e75187f19" exitCode=0 Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.920638 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" event={"ID":"78e29b83-b50e-46db-a8d6-bba0ecfb5c08","Type":"ContainerDied","Data":"80dc69ee9ec968911aeb73d01429c087d3144584fba19b07c0a8b37e75187f19"} Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.920748 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" event={"ID":"78e29b83-b50e-46db-a8d6-bba0ecfb5c08","Type":"ContainerStarted","Data":"5187e6e9a0835d8922aa8452723fd7620bf5222c8a96f16a5be9778d8386494d"} Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.928358 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-hlsnz" podStartSLOduration=4.928337615 podStartE2EDuration="4.928337615s" podCreationTimestamp="2026-03-13 10:25:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:25:08.838313727 +0000 UTC m=+1282.860843870" watchObservedRunningTime="2026-03-13 10:25:08.928337615 +0000 UTC m=+1282.950867748" Mar 13 10:25:08 crc kubenswrapper[4632]: I0313 10:25:08.991247 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6wd56" event={"ID":"6dd01e75-b01a-439d-953a-a7b35aefaccf","Type":"ContainerStarted","Data":"19adb417107921a77df964ab1bd8c8cf0029e40afcac705a66952307655b68b9"} Mar 13 10:25:09 crc kubenswrapper[4632]: I0313 10:25:09.057250 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6844bbffb5-6qbh8"] Mar 13 10:25:09 crc kubenswrapper[4632]: I0313 10:25:09.094549 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6844bbffb5-6qbh8"] Mar 13 10:25:09 crc kubenswrapper[4632]: I0313 10:25:09.164046 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6wd56" podStartSLOduration=6.164022857 podStartE2EDuration="6.164022857s" podCreationTimestamp="2026-03-13 10:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:25:09.125710938 +0000 UTC m=+1283.148241071" watchObservedRunningTime="2026-03-13 10:25:09.164022857 +0000 UTC m=+1283.186552990" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.027778 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" event={"ID":"78e29b83-b50e-46db-a8d6-bba0ecfb5c08","Type":"ContainerStarted","Data":"0f3b47ae46ac068badd9fe9f0befa9613632a7901ef641ad38d1419cf04cc4af"} Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.027829 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.086213 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" podStartSLOduration=5.086145511 podStartE2EDuration="5.086145511s" podCreationTimestamp="2026-03-13 10:25:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:25:10.084129672 +0000 UTC m=+1284.106659815" watchObservedRunningTime="2026-03-13 10:25:10.086145511 +0000 UTC m=+1284.108675654" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.089894 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8562e5-7677-460c-864c-c0f1dcd2ac41" path="/var/lib/kubelet/pods/8e8562e5-7677-460c-864c-c0f1dcd2ac41/volumes" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.091240 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6fb489b64f-prckv"] Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.139152 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.170459 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7ff9d5cddf-cz85p"] Mar 13 10:25:10 crc kubenswrapper[4632]: E0313 10:25:10.170882 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="875ab9dc-abac-45c4-86b9-b0bfccdfb240" containerName="init" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.170899 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="875ab9dc-abac-45c4-86b9-b0bfccdfb240" containerName="init" Mar 13 10:25:10 crc kubenswrapper[4632]: E0313 10:25:10.170922 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="875ab9dc-abac-45c4-86b9-b0bfccdfb240" containerName="dnsmasq-dns" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.170930 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="875ab9dc-abac-45c4-86b9-b0bfccdfb240" containerName="dnsmasq-dns" Mar 13 10:25:10 crc kubenswrapper[4632]: E0313 10:25:10.170970 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8562e5-7677-460c-864c-c0f1dcd2ac41" containerName="init" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.170986 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8562e5-7677-460c-864c-c0f1dcd2ac41" containerName="init" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.171206 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="875ab9dc-abac-45c4-86b9-b0bfccdfb240" containerName="dnsmasq-dns" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.171224 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8562e5-7677-460c-864c-c0f1dcd2ac41" containerName="init" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.172309 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.199133 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7ff9d5cddf-cz85p"] Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.329912 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgt2s\" (UniqueName: \"kubernetes.io/projected/930f1246-53c8-4970-af1f-a7ef0ae42648-kube-api-access-rgt2s\") pod \"horizon-7ff9d5cddf-cz85p\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.329997 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/930f1246-53c8-4970-af1f-a7ef0ae42648-scripts\") pod \"horizon-7ff9d5cddf-cz85p\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.330061 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/930f1246-53c8-4970-af1f-a7ef0ae42648-config-data\") pod \"horizon-7ff9d5cddf-cz85p\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.330121 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930f1246-53c8-4970-af1f-a7ef0ae42648-logs\") pod \"horizon-7ff9d5cddf-cz85p\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.330140 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/930f1246-53c8-4970-af1f-a7ef0ae42648-horizon-secret-key\") pod \"horizon-7ff9d5cddf-cz85p\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.431166 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930f1246-53c8-4970-af1f-a7ef0ae42648-logs\") pod \"horizon-7ff9d5cddf-cz85p\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.431205 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/930f1246-53c8-4970-af1f-a7ef0ae42648-horizon-secret-key\") pod \"horizon-7ff9d5cddf-cz85p\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.431256 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgt2s\" (UniqueName: \"kubernetes.io/projected/930f1246-53c8-4970-af1f-a7ef0ae42648-kube-api-access-rgt2s\") pod \"horizon-7ff9d5cddf-cz85p\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.431286 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/930f1246-53c8-4970-af1f-a7ef0ae42648-scripts\") pod \"horizon-7ff9d5cddf-cz85p\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.431337 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/930f1246-53c8-4970-af1f-a7ef0ae42648-config-data\") pod \"horizon-7ff9d5cddf-cz85p\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.431656 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930f1246-53c8-4970-af1f-a7ef0ae42648-logs\") pod \"horizon-7ff9d5cddf-cz85p\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.432464 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/930f1246-53c8-4970-af1f-a7ef0ae42648-scripts\") pod \"horizon-7ff9d5cddf-cz85p\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.433753 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/930f1246-53c8-4970-af1f-a7ef0ae42648-config-data\") pod \"horizon-7ff9d5cddf-cz85p\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.445354 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/930f1246-53c8-4970-af1f-a7ef0ae42648-horizon-secret-key\") pod \"horizon-7ff9d5cddf-cz85p\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.465558 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgt2s\" (UniqueName: \"kubernetes.io/projected/930f1246-53c8-4970-af1f-a7ef0ae42648-kube-api-access-rgt2s\") pod \"horizon-7ff9d5cddf-cz85p\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:10 crc kubenswrapper[4632]: I0313 10:25:10.517783 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:11 crc kubenswrapper[4632]: I0313 10:25:11.280006 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7ff9d5cddf-cz85p"] Mar 13 10:25:12 crc kubenswrapper[4632]: I0313 10:25:12.096052 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ff9d5cddf-cz85p" event={"ID":"930f1246-53c8-4970-af1f-a7ef0ae42648","Type":"ContainerStarted","Data":"adf64e4e5f85756a4d7fe85854309c14415a387724ea67892ccde00c8e6a4b0e"} Mar 13 10:25:14 crc kubenswrapper[4632]: I0313 10:25:14.141065 4632 generic.go:334] "Generic (PLEG): container finished" podID="6dd01e75-b01a-439d-953a-a7b35aefaccf" containerID="19adb417107921a77df964ab1bd8c8cf0029e40afcac705a66952307655b68b9" exitCode=0 Mar 13 10:25:14 crc kubenswrapper[4632]: I0313 10:25:14.141295 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6wd56" event={"ID":"6dd01e75-b01a-439d-953a-a7b35aefaccf","Type":"ContainerDied","Data":"19adb417107921a77df964ab1bd8c8cf0029e40afcac705a66952307655b68b9"} Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.009445 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67d6b4b8f7-nrxn8"] Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.029723 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7bdb5f7878-ng2k2"] Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.031235 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.035621 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.078018 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bdb5f7878-ng2k2"] Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.138221 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7ff9d5cddf-cz85p"] Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.149998 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e4afb91-ce26-4325-89c9-2542da2ec48a-scripts\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.150085 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-horizon-secret-key\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.150121 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e4afb91-ce26-4325-89c9-2542da2ec48a-logs\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.150234 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntccx\" (UniqueName: \"kubernetes.io/projected/3e4afb91-ce26-4325-89c9-2542da2ec48a-kube-api-access-ntccx\") pod 
\"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.150274 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-combined-ca-bundle\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.150300 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e4afb91-ce26-4325-89c9-2542da2ec48a-config-data\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.150434 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-horizon-tls-certs\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.213461 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-689764498d-rg7vt"] Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.217425 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.254209 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-combined-ca-bundle\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.254267 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e4afb91-ce26-4325-89c9-2542da2ec48a-config-data\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.254358 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-horizon-tls-certs\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.254413 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e4afb91-ce26-4325-89c9-2542da2ec48a-scripts\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.254446 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-horizon-secret-key\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " 
pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.254466 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e4afb91-ce26-4325-89c9-2542da2ec48a-logs\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.254515 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntccx\" (UniqueName: \"kubernetes.io/projected/3e4afb91-ce26-4325-89c9-2542da2ec48a-kube-api-access-ntccx\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.256012 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e4afb91-ce26-4325-89c9-2542da2ec48a-scripts\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.257224 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e4afb91-ce26-4325-89c9-2542da2ec48a-config-data\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.258754 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-689764498d-rg7vt"] Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.259737 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e4afb91-ce26-4325-89c9-2542da2ec48a-logs\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.280246 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-horizon-tls-certs\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.301928 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-horizon-secret-key\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.308802 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-combined-ca-bundle\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.327755 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntccx\" (UniqueName: \"kubernetes.io/projected/3e4afb91-ce26-4325-89c9-2542da2ec48a-kube-api-access-ntccx\") pod \"horizon-7bdb5f7878-ng2k2\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " pod="openstack/horizon-7bdb5f7878-ng2k2" 
Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.356974 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-combined-ca-bundle\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.357044 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-scripts\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.357094 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-config-data\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.357136 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-logs\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.357154 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-horizon-tls-certs\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.357175 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbnrh\" (UniqueName: \"kubernetes.io/projected/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-kube-api-access-xbnrh\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.357216 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-horizon-secret-key\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.392968 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.468438 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-combined-ca-bundle\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.468508 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-scripts\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.468567 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-config-data\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.468612 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-logs\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.468639 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-horizon-tls-certs\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.468663 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbnrh\" (UniqueName: \"kubernetes.io/projected/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-kube-api-access-xbnrh\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.468685 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-horizon-secret-key\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.473039 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-config-data\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.481261 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-scripts\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.489481 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-logs\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.564901 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-combined-ca-bundle\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.566956 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-horizon-secret-key\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.572983 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbnrh\" (UniqueName: \"kubernetes.io/projected/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-kube-api-access-xbnrh\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.618689 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c-horizon-tls-certs\") pod \"horizon-689764498d-rg7vt\" (UID: \"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c\") " pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:15 crc kubenswrapper[4632]: I0313 10:25:15.855787 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:25:16 crc kubenswrapper[4632]: I0313 10:25:16.387072 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" Mar 13 10:25:16 crc kubenswrapper[4632]: I0313 10:25:16.475957 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b59dbc87f-7zwrj"] Mar 13 10:25:16 crc kubenswrapper[4632]: I0313 10:25:16.476372 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" podUID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" containerName="dnsmasq-dns" containerID="cri-o://f1255f2b0d97d7bcc13a7045fc5d8e4778eece89f9f6f1d468ae8c05e428c6f7" gracePeriod=10 Mar 13 10:25:16 crc kubenswrapper[4632]: I0313 10:25:16.719333 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" podUID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Mar 13 10:25:17 crc kubenswrapper[4632]: I0313 10:25:17.187239 4632 generic.go:334] "Generic (PLEG): container finished" podID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" containerID="f1255f2b0d97d7bcc13a7045fc5d8e4778eece89f9f6f1d468ae8c05e428c6f7" exitCode=0 Mar 13 10:25:17 crc kubenswrapper[4632]: I0313 10:25:17.187279 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" event={"ID":"7203640d-964c-4c28-8cc2-6a7ae27cdab3","Type":"ContainerDied","Data":"f1255f2b0d97d7bcc13a7045fc5d8e4778eece89f9f6f1d468ae8c05e428c6f7"} Mar 13 10:25:21 crc kubenswrapper[4632]: I0313 10:25:21.224986 4632 generic.go:334] "Generic (PLEG): container finished" podID="4f1c5663-463b-45e2-b200-64e73e6d5698" containerID="bf8d93edd68f1cf79021467ff9910419baf75397a4140fb3d25bca7f97abbf70" exitCode=0 Mar 13 10:25:21 crc kubenswrapper[4632]: I0313 10:25:21.225711 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l6hpb" event={"ID":"4f1c5663-463b-45e2-b200-64e73e6d5698","Type":"ContainerDied","Data":"bf8d93edd68f1cf79021467ff9910419baf75397a4140fb3d25bca7f97abbf70"} Mar 13 10:25:21 crc kubenswrapper[4632]: I0313 10:25:21.719349 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" podUID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Mar 13 10:25:26 crc kubenswrapper[4632]: I0313 10:25:26.719552 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" podUID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Mar 13 10:25:26 crc kubenswrapper[4632]: I0313 10:25:26.720800 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" Mar 13 10:25:31 crc kubenswrapper[4632]: I0313 10:25:31.718774 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" podUID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.325899 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.379660 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6wd56" event={"ID":"6dd01e75-b01a-439d-953a-a7b35aefaccf","Type":"ContainerDied","Data":"e5726b2fe96111fb5381a471a1eb71f1473219059f290427afb5975ccc268d97"} Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.379730 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5726b2fe96111fb5381a471a1eb71f1473219059f290427afb5975ccc268d97" Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.379745 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6wd56" Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.460358 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9qt5\" (UniqueName: \"kubernetes.io/projected/6dd01e75-b01a-439d-953a-a7b35aefaccf-kube-api-access-q9qt5\") pod \"6dd01e75-b01a-439d-953a-a7b35aefaccf\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.460840 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-scripts\") pod \"6dd01e75-b01a-439d-953a-a7b35aefaccf\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.460953 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-fernet-keys\") pod \"6dd01e75-b01a-439d-953a-a7b35aefaccf\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.460987 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-credential-keys\") pod \"6dd01e75-b01a-439d-953a-a7b35aefaccf\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.461032 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-config-data\") pod \"6dd01e75-b01a-439d-953a-a7b35aefaccf\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.461081 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-combined-ca-bundle\") pod \"6dd01e75-b01a-439d-953a-a7b35aefaccf\" (UID: \"6dd01e75-b01a-439d-953a-a7b35aefaccf\") " Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.470086 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6dd01e75-b01a-439d-953a-a7b35aefaccf" (UID: "6dd01e75-b01a-439d-953a-a7b35aefaccf"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.470129 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6dd01e75-b01a-439d-953a-a7b35aefaccf" (UID: "6dd01e75-b01a-439d-953a-a7b35aefaccf"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.470177 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-scripts" (OuterVolumeSpecName: "scripts") pod "6dd01e75-b01a-439d-953a-a7b35aefaccf" (UID: "6dd01e75-b01a-439d-953a-a7b35aefaccf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.473656 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd01e75-b01a-439d-953a-a7b35aefaccf-kube-api-access-q9qt5" (OuterVolumeSpecName: "kube-api-access-q9qt5") pod "6dd01e75-b01a-439d-953a-a7b35aefaccf" (UID: "6dd01e75-b01a-439d-953a-a7b35aefaccf"). InnerVolumeSpecName "kube-api-access-q9qt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.503339 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-config-data" (OuterVolumeSpecName: "config-data") pod "6dd01e75-b01a-439d-953a-a7b35aefaccf" (UID: "6dd01e75-b01a-439d-953a-a7b35aefaccf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.503551 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dd01e75-b01a-439d-953a-a7b35aefaccf" (UID: "6dd01e75-b01a-439d-953a-a7b35aefaccf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.563650 4632 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.563694 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.563706 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.563718 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9qt5\" (UniqueName: \"kubernetes.io/projected/6dd01e75-b01a-439d-953a-a7b35aefaccf-kube-api-access-q9qt5\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.563732 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:33 crc kubenswrapper[4632]: I0313 10:25:33.563743 4632 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dd01e75-b01a-439d-953a-a7b35aefaccf-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:33 crc kubenswrapper[4632]: E0313 10:25:33.720270 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:25:33 crc kubenswrapper[4632]: E0313 10:25:33.720631 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:25:33 crc kubenswrapper[4632]: E0313 10:25:33.720778 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5rzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7fvlk_openstack(d722ddd7-e65d-44f7-a02d-18ddf126ccf5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:25:33 crc kubenswrapper[4632]: E0313 10:25:33.722599 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-7fvlk" podUID="d722ddd7-e65d-44f7-a02d-18ddf126ccf5" Mar 13 10:25:33 crc kubenswrapper[4632]: E0313 10:25:33.729979 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:25:33 crc kubenswrapper[4632]: E0313 10:25:33.730034 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:25:33 crc kubenswrapper[4632]: E0313 10:25:33.730150 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65dh55dh9fh567h567h7ch66ch6ch5d6hfbh658hfchbch5dch59h55fh94hbch5d5hbch685h649h87h5h56bh5bh56fh588h9h5c6h54dh696q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwx9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6fb489b64f-prckv_openstack(00c4bdc6-a22c-4ab6-b898-cf591b92756b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:25:33 crc kubenswrapper[4632]: E0313 10:25:33.732311 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9\\\"\"]" pod="openstack/horizon-6fb489b64f-prckv" podUID="00c4bdc6-a22c-4ab6-b898-cf591b92756b" Mar 13 10:25:34 crc kubenswrapper[4632]: E0313 10:25:34.402967 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/heat-db-sync-7fvlk" podUID="d722ddd7-e65d-44f7-a02d-18ddf126ccf5" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.471144 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6wd56"] Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.481109 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6wd56"] Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.537880 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-x8tq8"] Mar 13 10:25:34 crc kubenswrapper[4632]: E0313 10:25:34.539541 4632 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6dd01e75-b01a-439d-953a-a7b35aefaccf" containerName="keystone-bootstrap" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.539653 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd01e75-b01a-439d-953a-a7b35aefaccf" containerName="keystone-bootstrap" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.539987 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd01e75-b01a-439d-953a-a7b35aefaccf" containerName="keystone-bootstrap" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.541323 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.543848 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.544701 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.545496 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-llpcf" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.546104 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.546888 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.552204 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x8tq8"] Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.694293 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-scripts\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.694368 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln474\" (UniqueName: \"kubernetes.io/projected/d8d0f662-d180-4137-8107-e465c5fb0621-kube-api-access-ln474\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.694517 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-combined-ca-bundle\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.694551 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-fernet-keys\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.694620 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-config-data\") pod \"keystone-bootstrap-x8tq8\" (UID: 
\"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.695431 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-credential-keys\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.798668 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-credential-keys\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.798774 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-scripts\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.798801 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln474\" (UniqueName: \"kubernetes.io/projected/d8d0f662-d180-4137-8107-e465c5fb0621-kube-api-access-ln474\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.798839 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-combined-ca-bundle\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.798863 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-fernet-keys\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.798880 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-config-data\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.813138 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-scripts\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.814302 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-fernet-keys\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.814378 4632 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-credential-keys\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.821496 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-combined-ca-bundle\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.828847 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-config-data\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.832330 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln474\" (UniqueName: \"kubernetes.io/projected/d8d0f662-d180-4137-8107-e465c5fb0621-kube-api-access-ln474\") pod \"keystone-bootstrap-x8tq8\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:34 crc kubenswrapper[4632]: I0313 10:25:34.936654 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.057346 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dd01e75-b01a-439d-953a-a7b35aefaccf" path="/var/lib/kubelet/pods/6dd01e75-b01a-439d-953a-a7b35aefaccf/volumes" Mar 13 10:25:36 crc kubenswrapper[4632]: E0313 10:25:36.211020 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-placement-api:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:25:36 crc kubenswrapper[4632]: E0313 10:25:36.211082 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-placement-api:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:25:36 crc kubenswrapper[4632]: E0313 10:25:36.211206 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-placement-api:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqp5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-htnd9_openstack(e92afa62-9c75-4e0e-92f4-76e57328d7a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:25:36 crc kubenswrapper[4632]: E0313 10:25:36.212399 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-htnd9" podUID="e92afa62-9c75-4e0e-92f4-76e57328d7a0" Mar 13 10:25:36 crc kubenswrapper[4632]: E0313 10:25:36.223051 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:25:36 crc kubenswrapper[4632]: E0313 10:25:36.223113 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:25:36 crc kubenswrapper[4632]: E0313 10:25:36.223245 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56dhbh68dhfbh569h676h64fh69h669h65fh64fh85h5c7h588h5bdh76h65chd7h5b8h5ch677hc7h64ch8bh5fdh75h7dh5cbhf9h76h688h5d4q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgt2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7ff9d5cddf-cz85p_openstack(930f1246-53c8-4970-af1f-a7ef0ae42648): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:25:36 crc kubenswrapper[4632]: E0313 10:25:36.226586 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9\\\"\"]" pod="openstack/horizon-7ff9d5cddf-cz85p" podUID="930f1246-53c8-4970-af1f-a7ef0ae42648" Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.307482 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-l6hpb" Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.429876 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-combined-ca-bundle\") pod \"4f1c5663-463b-45e2-b200-64e73e6d5698\" (UID: \"4f1c5663-463b-45e2-b200-64e73e6d5698\") " Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.429927 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-config-data\") pod \"4f1c5663-463b-45e2-b200-64e73e6d5698\" (UID: \"4f1c5663-463b-45e2-b200-64e73e6d5698\") " Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.430085 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-db-sync-config-data\") pod \"4f1c5663-463b-45e2-b200-64e73e6d5698\" (UID: \"4f1c5663-463b-45e2-b200-64e73e6d5698\") " Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.430147 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjbxx\" (UniqueName: \"kubernetes.io/projected/4f1c5663-463b-45e2-b200-64e73e6d5698-kube-api-access-fjbxx\") pod \"4f1c5663-463b-45e2-b200-64e73e6d5698\" (UID: \"4f1c5663-463b-45e2-b200-64e73e6d5698\") " Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.457052 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f1c5663-463b-45e2-b200-64e73e6d5698-kube-api-access-fjbxx" (OuterVolumeSpecName: "kube-api-access-fjbxx") pod "4f1c5663-463b-45e2-b200-64e73e6d5698" (UID: "4f1c5663-463b-45e2-b200-64e73e6d5698"). InnerVolumeSpecName "kube-api-access-fjbxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.464232 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4f1c5663-463b-45e2-b200-64e73e6d5698" (UID: "4f1c5663-463b-45e2-b200-64e73e6d5698"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.514455 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-l6hpb" Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.514933 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l6hpb" event={"ID":"4f1c5663-463b-45e2-b200-64e73e6d5698","Type":"ContainerDied","Data":"0a5d62eda0a21b4de62c912c034c3914a852ed117fa1d5a908a4b0e7b70dc6a3"} Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.514990 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a5d62eda0a21b4de62c912c034c3914a852ed117fa1d5a908a4b0e7b70dc6a3" Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.521067 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f1c5663-463b-45e2-b200-64e73e6d5698" (UID: "4f1c5663-463b-45e2-b200-64e73e6d5698"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:25:36 crc kubenswrapper[4632]: E0313 10:25:36.521240 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-placement-api:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/placement-db-sync-htnd9" podUID="e92afa62-9c75-4e0e-92f4-76e57328d7a0" Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.532440 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.532480 4632 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.532494 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjbxx\" (UniqueName: \"kubernetes.io/projected/4f1c5663-463b-45e2-b200-64e73e6d5698-kube-api-access-fjbxx\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.609130 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-config-data" (OuterVolumeSpecName: "config-data") pod "4f1c5663-463b-45e2-b200-64e73e6d5698" (UID: "4f1c5663-463b-45e2-b200-64e73e6d5698"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:25:36 crc kubenswrapper[4632]: I0313 10:25:36.636227 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1c5663-463b-45e2-b200-64e73e6d5698-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.168286 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5db978f585-jtbcw"] Mar 13 10:25:38 crc kubenswrapper[4632]: E0313 10:25:38.169472 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f1c5663-463b-45e2-b200-64e73e6d5698" containerName="glance-db-sync" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.169495 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1c5663-463b-45e2-b200-64e73e6d5698" containerName="glance-db-sync" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.169931 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f1c5663-463b-45e2-b200-64e73e6d5698" containerName="glance-db-sync" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.171992 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.175869 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5db978f585-jtbcw"] Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.238632 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-ovsdbserver-sb\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.238768 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsqxq\" (UniqueName: \"kubernetes.io/projected/4cf1d659-89cc-471b-8089-bc85f7ab3578-kube-api-access-gsqxq\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.238845 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-dns-swift-storage-0\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.238880 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-ovsdbserver-nb\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.238910 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-config\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.239081 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-dns-svc\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.340880 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-dns-swift-storage-0\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.342133 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-ovsdbserver-nb\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.342265 4632 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-dns-swift-storage-0\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.342571 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-config\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.342652 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-ovsdbserver-nb\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.342808 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-dns-svc\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.342905 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-ovsdbserver-sb\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.343166 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsqxq\" (UniqueName: \"kubernetes.io/projected/4cf1d659-89cc-471b-8089-bc85f7ab3578-kube-api-access-gsqxq\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.343784 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-ovsdbserver-sb\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.344019 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-dns-svc\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.344849 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-config\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.366833 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsqxq\" (UniqueName: 
\"kubernetes.io/projected/4cf1d659-89cc-471b-8089-bc85f7ab3578-kube-api-access-gsqxq\") pod \"dnsmasq-dns-5db978f585-jtbcw\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.550799 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.777250 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.790815 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.793629 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.793897 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qpd5p" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.795265 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.799504 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.855222 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37dc6e5d-eb14-4cef-9451-7c567c6c9068-logs\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.855417 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-config-data\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.855529 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37dc6e5d-eb14-4cef-9451-7c567c6c9068-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.855593 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-scripts\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.856138 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.856364 4632 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.856398 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rqnd\" (UniqueName: \"kubernetes.io/projected/37dc6e5d-eb14-4cef-9451-7c567c6c9068-kube-api-access-5rqnd\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.958037 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37dc6e5d-eb14-4cef-9451-7c567c6c9068-logs\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.958102 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-config-data\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.958147 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37dc6e5d-eb14-4cef-9451-7c567c6c9068-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.958167 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-scripts\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.958196 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.958289 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.958761 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.959116 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5rqnd\" (UniqueName: \"kubernetes.io/projected/37dc6e5d-eb14-4cef-9451-7c567c6c9068-kube-api-access-5rqnd\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.959332 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37dc6e5d-eb14-4cef-9451-7c567c6c9068-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.959412 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37dc6e5d-eb14-4cef-9451-7c567c6c9068-logs\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.963921 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-scripts\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.964759 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:38 crc kubenswrapper[4632]: I0313 10:25:38.974388 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-config-data\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.023196 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.023847 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rqnd\" (UniqueName: \"kubernetes.io/projected/37dc6e5d-eb14-4cef-9451-7c567c6c9068-kube-api-access-5rqnd\") pod \"glance-default-external-api-0\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") " pod="openstack/glance-default-external-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: E0313 10:25:39.113144 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-barbican-api:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:25:39 crc kubenswrapper[4632]: E0313 10:25:39.113203 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-barbican-api:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:25:39 crc kubenswrapper[4632]: E0313 10:25:39.113380 4632 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-barbican-api:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zgn44,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-zdgpw_openstack(418cb883-abd1-46b4-957f-0a40f3e62297): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:25:39 crc kubenswrapper[4632]: E0313 10:25:39.114758 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-zdgpw" podUID="418cb883-abd1-46b4-957f-0a40f3e62297" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.121763 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.299433 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.301264 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.303675 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.323033 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.366024 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1756bbdc-3e6c-4815-96a7-0620f7400cb7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.366093 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.366168 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.366264 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.366349 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.366382 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1756bbdc-3e6c-4815-96a7-0620f7400cb7-logs\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.366419 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jn72\" (UniqueName: \"kubernetes.io/projected/1756bbdc-3e6c-4815-96a7-0620f7400cb7-kube-api-access-7jn72\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.467805 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " 
pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.467882 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.467908 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1756bbdc-3e6c-4815-96a7-0620f7400cb7-logs\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.467935 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jn72\" (UniqueName: \"kubernetes.io/projected/1756bbdc-3e6c-4815-96a7-0620f7400cb7-kube-api-access-7jn72\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.468031 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1756bbdc-3e6c-4815-96a7-0620f7400cb7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.468062 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.468107 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.468403 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.469299 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1756bbdc-3e6c-4815-96a7-0620f7400cb7-logs\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.469439 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1756bbdc-3e6c-4815-96a7-0620f7400cb7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 
10:25:39.474819 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.474985 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.475889 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.490038 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jn72\" (UniqueName: \"kubernetes.io/projected/1756bbdc-3e6c-4815-96a7-0620f7400cb7-kube-api-access-7jn72\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.503236 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") " pod="openstack/glance-default-internal-api-0" Mar 13 10:25:39 crc kubenswrapper[4632]: E0313 10:25:39.548254 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-barbican-api:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/barbican-db-sync-zdgpw" podUID="418cb883-abd1-46b4-957f-0a40f3e62297" Mar 13 10:25:39 crc kubenswrapper[4632]: I0313 10:25:39.628775 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 13 10:25:41 crc kubenswrapper[4632]: I0313 10:25:41.104578 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 10:25:41 crc kubenswrapper[4632]: I0313 10:25:41.220578 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 10:25:41 crc kubenswrapper[4632]: I0313 10:25:41.566357 4632 generic.go:334] "Generic (PLEG): container finished" podID="b7221b50-7231-4ade-917e-b10f177cb539" containerID="a4f9bd4f877455829b998ee69c6d5f9dd7fb999a6d06fe2960e4af1bfddc1eb0" exitCode=0 Mar 13 10:25:41 crc kubenswrapper[4632]: I0313 10:25:41.566415 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hlsnz" event={"ID":"b7221b50-7231-4ade-917e-b10f177cb539","Type":"ContainerDied","Data":"a4f9bd4f877455829b998ee69c6d5f9dd7fb999a6d06fe2960e4af1bfddc1eb0"} Mar 13 10:25:41 crc kubenswrapper[4632]: I0313 10:25:41.719073 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" podUID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Mar 13 10:25:46 crc kubenswrapper[4632]: I0313 10:25:46.719411 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" podUID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Mar 13 10:25:51 crc kubenswrapper[4632]: I0313 10:25:51.720179 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" podUID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.430668 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.435395 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.445869 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.461763 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hlsnz" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.542261 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930f1246-53c8-4970-af1f-a7ef0ae42648-logs\") pod \"930f1246-53c8-4970-af1f-a7ef0ae42648\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.542362 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwx9r\" (UniqueName: \"kubernetes.io/projected/00c4bdc6-a22c-4ab6-b898-cf591b92756b-kube-api-access-gwx9r\") pod \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.542411 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r95hn\" (UniqueName: \"kubernetes.io/projected/7203640d-964c-4c28-8cc2-6a7ae27cdab3-kube-api-access-r95hn\") pod \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.542444 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-ovsdbserver-sb\") pod \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.542476 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00c4bdc6-a22c-4ab6-b898-cf591b92756b-config-data\") pod \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.542566 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c4bdc6-a22c-4ab6-b898-cf591b92756b-logs\") pod \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.542598 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgt2s\" (UniqueName: \"kubernetes.io/projected/930f1246-53c8-4970-af1f-a7ef0ae42648-kube-api-access-rgt2s\") pod \"930f1246-53c8-4970-af1f-a7ef0ae42648\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.542635 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/930f1246-53c8-4970-af1f-a7ef0ae42648-scripts\") pod \"930f1246-53c8-4970-af1f-a7ef0ae42648\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.542662 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-dns-svc\") pod \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.542699 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-config\") pod \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " 
Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.542743 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00c4bdc6-a22c-4ab6-b898-cf591b92756b-horizon-secret-key\") pod \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.543457 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/930f1246-53c8-4970-af1f-a7ef0ae42648-config-data\") pod \"930f1246-53c8-4970-af1f-a7ef0ae42648\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.543554 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00c4bdc6-a22c-4ab6-b898-cf591b92756b-scripts\") pod \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\" (UID: \"00c4bdc6-a22c-4ab6-b898-cf591b92756b\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.543625 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/930f1246-53c8-4970-af1f-a7ef0ae42648-horizon-secret-key\") pod \"930f1246-53c8-4970-af1f-a7ef0ae42648\" (UID: \"930f1246-53c8-4970-af1f-a7ef0ae42648\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.543674 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-ovsdbserver-nb\") pod \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\" (UID: \"7203640d-964c-4c28-8cc2-6a7ae27cdab3\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.543454 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/930f1246-53c8-4970-af1f-a7ef0ae42648-logs" (OuterVolumeSpecName: "logs") pod "930f1246-53c8-4970-af1f-a7ef0ae42648" (UID: "930f1246-53c8-4970-af1f-a7ef0ae42648"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.545732 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00c4bdc6-a22c-4ab6-b898-cf591b92756b-logs" (OuterVolumeSpecName: "logs") pod "00c4bdc6-a22c-4ab6-b898-cf591b92756b" (UID: "00c4bdc6-a22c-4ab6-b898-cf591b92756b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.547072 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c4bdc6-a22c-4ab6-b898-cf591b92756b-config-data" (OuterVolumeSpecName: "config-data") pod "00c4bdc6-a22c-4ab6-b898-cf591b92756b" (UID: "00c4bdc6-a22c-4ab6-b898-cf591b92756b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.580844 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/930f1246-53c8-4970-af1f-a7ef0ae42648-config-data" (OuterVolumeSpecName: "config-data") pod "930f1246-53c8-4970-af1f-a7ef0ae42648" (UID: "930f1246-53c8-4970-af1f-a7ef0ae42648"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.581754 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c4bdc6-a22c-4ab6-b898-cf591b92756b-scripts" (OuterVolumeSpecName: "scripts") pod "00c4bdc6-a22c-4ab6-b898-cf591b92756b" (UID: "00c4bdc6-a22c-4ab6-b898-cf591b92756b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.582193 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/930f1246-53c8-4970-af1f-a7ef0ae42648-kube-api-access-rgt2s" (OuterVolumeSpecName: "kube-api-access-rgt2s") pod "930f1246-53c8-4970-af1f-a7ef0ae42648" (UID: "930f1246-53c8-4970-af1f-a7ef0ae42648"). InnerVolumeSpecName "kube-api-access-rgt2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.582743 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/930f1246-53c8-4970-af1f-a7ef0ae42648-scripts" (OuterVolumeSpecName: "scripts") pod "930f1246-53c8-4970-af1f-a7ef0ae42648" (UID: "930f1246-53c8-4970-af1f-a7ef0ae42648"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.608409 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7203640d-964c-4c28-8cc2-6a7ae27cdab3-kube-api-access-r95hn" (OuterVolumeSpecName: "kube-api-access-r95hn") pod "7203640d-964c-4c28-8cc2-6a7ae27cdab3" (UID: "7203640d-964c-4c28-8cc2-6a7ae27cdab3"). InnerVolumeSpecName "kube-api-access-r95hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.608583 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c4bdc6-a22c-4ab6-b898-cf591b92756b-kube-api-access-gwx9r" (OuterVolumeSpecName: "kube-api-access-gwx9r") pod "00c4bdc6-a22c-4ab6-b898-cf591b92756b" (UID: "00c4bdc6-a22c-4ab6-b898-cf591b92756b"). InnerVolumeSpecName "kube-api-access-gwx9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.608680 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930f1246-53c8-4970-af1f-a7ef0ae42648-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "930f1246-53c8-4970-af1f-a7ef0ae42648" (UID: "930f1246-53c8-4970-af1f-a7ef0ae42648"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.614969 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c4bdc6-a22c-4ab6-b898-cf591b92756b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "00c4bdc6-a22c-4ab6-b898-cf591b92756b" (UID: "00c4bdc6-a22c-4ab6-b898-cf591b92756b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.634154 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7203640d-964c-4c28-8cc2-6a7ae27cdab3" (UID: "7203640d-964c-4c28-8cc2-6a7ae27cdab3"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.652609 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngb4t\" (UniqueName: \"kubernetes.io/projected/b7221b50-7231-4ade-917e-b10f177cb539-kube-api-access-ngb4t\") pod \"b7221b50-7231-4ade-917e-b10f177cb539\" (UID: \"b7221b50-7231-4ade-917e-b10f177cb539\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.652741 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7221b50-7231-4ade-917e-b10f177cb539-combined-ca-bundle\") pod \"b7221b50-7231-4ade-917e-b10f177cb539\" (UID: \"b7221b50-7231-4ade-917e-b10f177cb539\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.652879 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7221b50-7231-4ade-917e-b10f177cb539-config\") pod \"b7221b50-7231-4ade-917e-b10f177cb539\" (UID: \"b7221b50-7231-4ade-917e-b10f177cb539\") " Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.654909 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgt2s\" (UniqueName: \"kubernetes.io/projected/930f1246-53c8-4970-af1f-a7ef0ae42648-kube-api-access-rgt2s\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.654930 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/930f1246-53c8-4970-af1f-a7ef0ae42648-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.654981 4632 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00c4bdc6-a22c-4ab6-b898-cf591b92756b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.654991 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/930f1246-53c8-4970-af1f-a7ef0ae42648-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.654999 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00c4bdc6-a22c-4ab6-b898-cf591b92756b-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.655008 4632 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/930f1246-53c8-4970-af1f-a7ef0ae42648-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.655016 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.655024 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930f1246-53c8-4970-af1f-a7ef0ae42648-logs\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.655054 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwx9r\" (UniqueName: \"kubernetes.io/projected/00c4bdc6-a22c-4ab6-b898-cf591b92756b-kube-api-access-gwx9r\") on node \"crc\" DevicePath \"\"" 
Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.655063 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r95hn\" (UniqueName: \"kubernetes.io/projected/7203640d-964c-4c28-8cc2-6a7ae27cdab3-kube-api-access-r95hn\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.655072 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00c4bdc6-a22c-4ab6-b898-cf591b92756b-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.655081 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c4bdc6-a22c-4ab6-b898-cf591b92756b-logs\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.656761 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7221b50-7231-4ade-917e-b10f177cb539-kube-api-access-ngb4t" (OuterVolumeSpecName: "kube-api-access-ngb4t") pod "b7221b50-7231-4ade-917e-b10f177cb539" (UID: "b7221b50-7231-4ade-917e-b10f177cb539"). InnerVolumeSpecName "kube-api-access-ngb4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.683153 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7203640d-964c-4c28-8cc2-6a7ae27cdab3" (UID: "7203640d-964c-4c28-8cc2-6a7ae27cdab3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.688786 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.688797 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" event={"ID":"7203640d-964c-4c28-8cc2-6a7ae27cdab3","Type":"ContainerDied","Data":"501cdcd9d1f38a4b8b82ad7d76e2b6765f391cfadd65ee750e8254d78d76de84"} Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.689099 4632 scope.go:117] "RemoveContainer" containerID="f1255f2b0d97d7bcc13a7045fc5d8e4778eece89f9f6f1d468ae8c05e428c6f7" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.692025 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hlsnz" event={"ID":"b7221b50-7231-4ade-917e-b10f177cb539","Type":"ContainerDied","Data":"20f645b899e167ff59a24d843990ef38d86d73ef7009bca8f9190936862bedaf"} Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.692194 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20f645b899e167ff59a24d843990ef38d86d73ef7009bca8f9190936862bedaf" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.692442 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hlsnz" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.694176 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-config" (OuterVolumeSpecName: "config") pod "7203640d-964c-4c28-8cc2-6a7ae27cdab3" (UID: "7203640d-964c-4c28-8cc2-6a7ae27cdab3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.695250 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fb489b64f-prckv" event={"ID":"00c4bdc6-a22c-4ab6-b898-cf591b92756b","Type":"ContainerDied","Data":"1e0d86fdbf39635fdea4aed078faa89b9573bea1f02b182e9ea0c1a965b0c550"} Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.695414 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6fb489b64f-prckv" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.697739 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ff9d5cddf-cz85p" event={"ID":"930f1246-53c8-4970-af1f-a7ef0ae42648","Type":"ContainerDied","Data":"adf64e4e5f85756a4d7fe85854309c14415a387724ea67892ccde00c8e6a4b0e"} Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.697910 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7ff9d5cddf-cz85p" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.711904 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7221b50-7231-4ade-917e-b10f177cb539-config" (OuterVolumeSpecName: "config") pod "b7221b50-7231-4ade-917e-b10f177cb539" (UID: "b7221b50-7231-4ade-917e-b10f177cb539"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.714903 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7203640d-964c-4c28-8cc2-6a7ae27cdab3" (UID: "7203640d-964c-4c28-8cc2-6a7ae27cdab3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.715091 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7221b50-7231-4ade-917e-b10f177cb539-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7221b50-7231-4ade-917e-b10f177cb539" (UID: "b7221b50-7231-4ade-917e-b10f177cb539"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.756645 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7221b50-7231-4ade-917e-b10f177cb539-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.756687 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.756701 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngb4t\" (UniqueName: \"kubernetes.io/projected/b7221b50-7231-4ade-917e-b10f177cb539-kube-api-access-ngb4t\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.756712 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.756722 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7203640d-964c-4c28-8cc2-6a7ae27cdab3-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.756732 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7221b50-7231-4ade-917e-b10f177cb539-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.773930 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7ff9d5cddf-cz85p"] Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.797384 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7ff9d5cddf-cz85p"] Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.817813 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6fb489b64f-prckv"] Mar 13 10:25:52 crc kubenswrapper[4632]: I0313 10:25:52.829487 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6fb489b64f-prckv"] Mar 13 10:25:53 crc kubenswrapper[4632]: I0313 10:25:53.030075 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b59dbc87f-7zwrj"] Mar 13 10:25:53 crc kubenswrapper[4632]: I0313 10:25:53.037418 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b59dbc87f-7zwrj"] Mar 13 10:25:53 crc kubenswrapper[4632]: I0313 10:25:53.904511 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db978f585-jtbcw"] Mar 13 10:25:53 crc kubenswrapper[4632]: I0313 10:25:53.940100 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c86b4b888-l9574"] Mar 13 10:25:53 crc kubenswrapper[4632]: E0313 10:25:53.941300 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" containerName="dnsmasq-dns" Mar 13 10:25:53 crc kubenswrapper[4632]: I0313 10:25:53.941321 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" containerName="dnsmasq-dns" Mar 13 10:25:53 crc kubenswrapper[4632]: E0313 10:25:53.941332 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7221b50-7231-4ade-917e-b10f177cb539" containerName="neutron-db-sync" Mar 13 10:25:53 crc kubenswrapper[4632]: 
I0313 10:25:53.941338 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7221b50-7231-4ade-917e-b10f177cb539" containerName="neutron-db-sync" Mar 13 10:25:53 crc kubenswrapper[4632]: E0313 10:25:53.941361 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" containerName="init" Mar 13 10:25:53 crc kubenswrapper[4632]: I0313 10:25:53.941369 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" containerName="init" Mar 13 10:25:53 crc kubenswrapper[4632]: I0313 10:25:53.941528 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" containerName="dnsmasq-dns" Mar 13 10:25:53 crc kubenswrapper[4632]: I0313 10:25:53.941550 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7221b50-7231-4ade-917e-b10f177cb539" containerName="neutron-db-sync" Mar 13 10:25:53 crc kubenswrapper[4632]: I0313 10:25:53.942402 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:53 crc kubenswrapper[4632]: I0313 10:25:53.952825 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r2t7p" Mar 13 10:25:53 crc kubenswrapper[4632]: I0313 10:25:53.953053 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 13 10:25:53 crc kubenswrapper[4632]: I0313 10:25:53.953198 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 13 10:25:53 crc kubenswrapper[4632]: I0313 10:25:53.953328 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 13 10:25:53 crc kubenswrapper[4632]: I0313 10:25:53.961910 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c86b4b888-l9574"] Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.066470 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00c4bdc6-a22c-4ab6-b898-cf591b92756b" path="/var/lib/kubelet/pods/00c4bdc6-a22c-4ab6-b898-cf591b92756b/volumes" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.066900 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" path="/var/lib/kubelet/pods/7203640d-964c-4c28-8cc2-6a7ae27cdab3/volumes" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.067775 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="930f1246-53c8-4970-af1f-a7ef0ae42648" path="/var/lib/kubelet/pods/930f1246-53c8-4970-af1f-a7ef0ae42648/volumes" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.087896 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5776d95bfc-hl9dv"] Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.092813 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lsp7\" (UniqueName: \"kubernetes.io/projected/6d73a499-d334-4a7a-9783-640b98760672-kube-api-access-6lsp7\") pod \"neutron-5c86b4b888-l9574\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.092886 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-httpd-config\") pod 
\"neutron-5c86b4b888-l9574\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.093029 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-ovndb-tls-certs\") pod \"neutron-5c86b4b888-l9574\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.093070 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-combined-ca-bundle\") pod \"neutron-5c86b4b888-l9574\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.093102 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-config\") pod \"neutron-5c86b4b888-l9574\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.093267 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.102950 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5776d95bfc-hl9dv"] Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.199052 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-ovsdbserver-nb\") pod \"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.199495 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-dns-swift-storage-0\") pod \"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.199555 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-ovsdbserver-sb\") pod \"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.199611 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-ovndb-tls-certs\") pod \"neutron-5c86b4b888-l9574\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.199645 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-dns-svc\") pod 
\"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.199671 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-combined-ca-bundle\") pod \"neutron-5c86b4b888-l9574\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.199702 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-config\") pod \"neutron-5c86b4b888-l9574\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.199749 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4rzv\" (UniqueName: \"kubernetes.io/projected/ff547198-2736-4059-8e66-e63ea9ce7345-kube-api-access-v4rzv\") pod \"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.199796 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-config\") pod \"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.199825 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lsp7\" (UniqueName: \"kubernetes.io/projected/6d73a499-d334-4a7a-9783-640b98760672-kube-api-access-6lsp7\") pod \"neutron-5c86b4b888-l9574\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.199871 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-httpd-config\") pod \"neutron-5c86b4b888-l9574\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.208290 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-config\") pod \"neutron-5c86b4b888-l9574\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.208752 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-httpd-config\") pod \"neutron-5c86b4b888-l9574\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.212733 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-combined-ca-bundle\") pod \"neutron-5c86b4b888-l9574\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " 
pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.222486 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-ovndb-tls-certs\") pod \"neutron-5c86b4b888-l9574\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.223837 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lsp7\" (UniqueName: \"kubernetes.io/projected/6d73a499-d334-4a7a-9783-640b98760672-kube-api-access-6lsp7\") pod \"neutron-5c86b4b888-l9574\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.276268 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.301337 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-config\") pod \"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.301671 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-ovsdbserver-nb\") pod \"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.301815 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-dns-swift-storage-0\") pod \"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.301925 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-ovsdbserver-sb\") pod \"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.302101 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-dns-svc\") pod \"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.302314 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-config\") pod \"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.302466 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4rzv\" (UniqueName: \"kubernetes.io/projected/ff547198-2736-4059-8e66-e63ea9ce7345-kube-api-access-v4rzv\") pod 
\"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.302876 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-dns-swift-storage-0\") pod \"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.303489 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-ovsdbserver-nb\") pod \"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.304299 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-dns-svc\") pod \"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.304789 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-ovsdbserver-sb\") pod \"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.344656 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4rzv\" (UniqueName: \"kubernetes.io/projected/ff547198-2736-4059-8e66-e63ea9ce7345-kube-api-access-v4rzv\") pod \"dnsmasq-dns-5776d95bfc-hl9dv\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:54 crc kubenswrapper[4632]: I0313 10:25:54.446605 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:55 crc kubenswrapper[4632]: E0313 10:25:55.073319 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:25:55 crc kubenswrapper[4632]: E0313 10:25:55.073382 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:25:55 crc kubenswrapper[4632]: E0313 10:25:55.073507 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5m5bn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-kq8lc_openstack(8f916c05-f172-42b6-9b13-0c8d2058bfb1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:25:55 crc kubenswrapper[4632]: E0313 10:25:55.074634 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-kq8lc" podUID="8f916c05-f172-42b6-9b13-0c8d2058bfb1" Mar 13 10:25:55 crc kubenswrapper[4632]: I0313 10:25:55.348024 4632 scope.go:117] "RemoveContainer" containerID="f74cf11731f4fec2422112ef6bdd1e43cc133692a8363ef95d5bb5847ffb0fd1" Mar 13 10:25:55 crc kubenswrapper[4632]: I0313 10:25:55.548445 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bdb5f7878-ng2k2"] Mar 13 10:25:55 crc kubenswrapper[4632]: I0313 10:25:55.700925 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-689764498d-rg7vt"] Mar 13 10:25:55 crc kubenswrapper[4632]: I0313 10:25:55.876605 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb5f7878-ng2k2" event={"ID":"3e4afb91-ce26-4325-89c9-2542da2ec48a","Type":"ContainerStarted","Data":"aaad122938f426786c8baabdc4555594b0ba0e55f0c39302b9bf84230f06cfd1"} Mar 13 10:25:55 crc kubenswrapper[4632]: E0313 10:25:55.944420 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/cinder-db-sync-kq8lc" podUID="8f916c05-f172-42b6-9b13-0c8d2058bfb1" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.091086 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x8tq8"] Mar 13 10:25:56 crc kubenswrapper[4632]: W0313 10:25:56.095746 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8d0f662_d180_4137_8107_e465c5fb0621.slice/crio-58b9dfc8050bba09291b639a8d3d5cc84a9643af0afb63feda9e26973d06a678 WatchSource:0}: Error finding container 58b9dfc8050bba09291b639a8d3d5cc84a9643af0afb63feda9e26973d06a678: Status 404 returned error can't find the container with id 58b9dfc8050bba09291b639a8d3d5cc84a9643af0afb63feda9e26973d06a678 Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.205441 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.343808 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db978f585-jtbcw"] Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.471166 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64bdffbb5c-mpfvf"] Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.473028 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.492416 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.492701 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.505803 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64bdffbb5c-mpfvf"] Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.631081 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-ovndb-tls-certs\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.636895 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-public-tls-certs\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.636965 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-httpd-config\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.637645 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-config\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.637684 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-internal-tls-certs\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.637745 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-combined-ca-bundle\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.637781 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbbzk\" (UniqueName: \"kubernetes.io/projected/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-kube-api-access-rbbzk\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.723147 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b59dbc87f-7zwrj" 
podUID="7203640d-964c-4c28-8cc2-6a7ae27cdab3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.753360 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-config\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.766324 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-config\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.766736 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-internal-tls-certs\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.766899 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-combined-ca-bundle\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.766986 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbbzk\" (UniqueName: \"kubernetes.io/projected/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-kube-api-access-rbbzk\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.767116 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-ovndb-tls-certs\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.767337 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-public-tls-certs\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.767441 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-httpd-config\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.789968 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-public-tls-certs\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 
crc kubenswrapper[4632]: I0313 10:25:56.795028 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c86b4b888-l9574"] Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.803136 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-ovndb-tls-certs\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.820568 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5776d95bfc-hl9dv"] Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.833677 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-httpd-config\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.845767 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-combined-ca-bundle\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.846831 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-internal-tls-certs\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.927956 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbbzk\" (UniqueName: \"kubernetes.io/projected/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-kube-api-access-rbbzk\") pod \"neutron-64bdffbb5c-mpfvf\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.956212 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67d6b4b8f7-nrxn8" event={"ID":"95fe9a38-2b32-411e-9121-ad4cc32f159e","Type":"ContainerStarted","Data":"d9db78843b825b24c0eab6345b91a7657d2b3f0bb64d65b5dcc125b1edeeb022"} Mar 13 10:25:56 crc kubenswrapper[4632]: I0313 10:25:56.990256 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x8tq8" event={"ID":"d8d0f662-d180-4137-8107-e465c5fb0621","Type":"ContainerStarted","Data":"58b9dfc8050bba09291b639a8d3d5cc84a9643af0afb63feda9e26973d06a678"} Mar 13 10:25:57 crc kubenswrapper[4632]: I0313 10:25:57.029397 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-htnd9" event={"ID":"e92afa62-9c75-4e0e-92f4-76e57328d7a0","Type":"ContainerStarted","Data":"68a82ec143a93c9f66b6d5e73e70ead182bba11acadf06a0bc0700ee8971357d"} Mar 13 10:25:57 crc kubenswrapper[4632]: I0313 10:25:57.052793 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db978f585-jtbcw" event={"ID":"4cf1d659-89cc-471b-8089-bc85f7ab3578","Type":"ContainerStarted","Data":"4e0560a7572c040fe38f67491b2340496562a7d42aebf98395ce318dd739fcfb"} Mar 13 10:25:57 crc kubenswrapper[4632]: I0313 10:25:57.065390 4632 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/placement-db-sync-htnd9" podStartSLOduration=4.126363025 podStartE2EDuration="52.065376503s" podCreationTimestamp="2026-03-13 10:25:05 +0000 UTC" firstStartedPulling="2026-03-13 10:25:07.408847427 +0000 UTC m=+1281.431377560" lastFinishedPulling="2026-03-13 10:25:55.347860905 +0000 UTC m=+1329.370391038" observedRunningTime="2026-03-13 10:25:57.060789674 +0000 UTC m=+1331.083319807" watchObservedRunningTime="2026-03-13 10:25:57.065376503 +0000 UTC m=+1331.087906636" Mar 13 10:25:57 crc kubenswrapper[4632]: I0313 10:25:57.078299 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-689764498d-rg7vt" event={"ID":"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c","Type":"ContainerStarted","Data":"3e7612a846cadd8420f2057181569ce83941e65dffcec61def9bcce804b35eef"} Mar 13 10:25:57 crc kubenswrapper[4632]: I0313 10:25:57.102804 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7fvlk" event={"ID":"d722ddd7-e65d-44f7-a02d-18ddf126ccf5","Type":"ContainerStarted","Data":"3ef3ce34ce4d2a0d8d000d31874aca20b10c953ddde87f68a0b04979e69b8bae"} Mar 13 10:25:57 crc kubenswrapper[4632]: I0313 10:25:57.133494 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270ebc10-986f-4473-8a5e-9094de34ae98","Type":"ContainerStarted","Data":"27c121915dbbdfc336d1bc55bed50eb5edaf76e1bc92f4f6b5e249f4ffe5098a"} Mar 13 10:25:57 crc kubenswrapper[4632]: I0313 10:25:57.151581 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-7fvlk" podStartSLOduration=5.195070397 podStartE2EDuration="53.15155934s" podCreationTimestamp="2026-03-13 10:25:04 +0000 UTC" firstStartedPulling="2026-03-13 10:25:07.529247871 +0000 UTC m=+1281.551778004" lastFinishedPulling="2026-03-13 10:25:55.485736814 +0000 UTC m=+1329.508266947" observedRunningTime="2026-03-13 10:25:57.13380559 +0000 UTC m=+1331.156335723" watchObservedRunningTime="2026-03-13 10:25:57.15155934 +0000 UTC m=+1331.174089473" Mar 13 10:25:57 crc kubenswrapper[4632]: I0313 10:25:57.303715 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:25:57 crc kubenswrapper[4632]: I0313 10:25:57.335986 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.137100 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.223520 4632 generic.go:334] "Generic (PLEG): container finished" podID="ff547198-2736-4059-8e66-e63ea9ce7345" containerID="dec2a00b325c16f4a1d001f23d5e8b1ffdb30f4c935f90c479b4c2928a1f9cbd" exitCode=0 Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.224485 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" event={"ID":"ff547198-2736-4059-8e66-e63ea9ce7345","Type":"ContainerDied","Data":"dec2a00b325c16f4a1d001f23d5e8b1ffdb30f4c935f90c479b4c2928a1f9cbd"} Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.224520 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" event={"ID":"ff547198-2736-4059-8e66-e63ea9ce7345","Type":"ContainerStarted","Data":"8db6fac31f3928e6490a77faa8cf72ab51791153ec4fce9dafd1cd9fb950c31f"} Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.244987 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64bdffbb5c-mpfvf"] Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.296015 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-689764498d-rg7vt" event={"ID":"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c","Type":"ContainerStarted","Data":"b6cdbfe3937cc8607d510b86100785b83eab81056229525992fbe23bfebc3c39"} Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.334021 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c86b4b888-l9574" event={"ID":"6d73a499-d334-4a7a-9783-640b98760672","Type":"ContainerStarted","Data":"8c839401b1db62da93454588496b8ab534c9e6313aa3bcb0003cb9137b63b2ca"} Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.334070 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c86b4b888-l9574" event={"ID":"6d73a499-d334-4a7a-9783-640b98760672","Type":"ContainerStarted","Data":"c201c6ed0f734df3747387db31697b083007f33831a2be5b5b4d93d97a61d2c9"} Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.336686 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"37dc6e5d-eb14-4cef-9451-7c567c6c9068","Type":"ContainerStarted","Data":"064fad76db398b97a6a04386f16fbe17c9bebd9b23d2f3264f42bd5bbfc7916f"} Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.348311 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb5f7878-ng2k2" event={"ID":"3e4afb91-ce26-4325-89c9-2542da2ec48a","Type":"ContainerStarted","Data":"0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589"} Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.400406 4632 generic.go:334] "Generic (PLEG): container finished" podID="4cf1d659-89cc-471b-8089-bc85f7ab3578" containerID="259722717f7860244e919e91ec8af7531cfacc106ccbd95a7ce0bb8509700a95" exitCode=0 Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.400504 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db978f585-jtbcw" 
event={"ID":"4cf1d659-89cc-471b-8089-bc85f7ab3578","Type":"ContainerDied","Data":"259722717f7860244e919e91ec8af7531cfacc106ccbd95a7ce0bb8509700a95"} Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.435585 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zdgpw" event={"ID":"418cb883-abd1-46b4-957f-0a40f3e62297","Type":"ContainerStarted","Data":"3672f721f5cc963fe48f19a0fe26275ae0f1cbd82fd44ed2d6b14dcbb240be1d"} Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.474425 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-zdgpw" podStartSLOduration=4.941126353 podStartE2EDuration="53.47440765s" podCreationTimestamp="2026-03-13 10:25:05 +0000 UTC" firstStartedPulling="2026-03-13 10:25:07.184145893 +0000 UTC m=+1281.206676026" lastFinishedPulling="2026-03-13 10:25:55.71742719 +0000 UTC m=+1329.739957323" observedRunningTime="2026-03-13 10:25:58.470693111 +0000 UTC m=+1332.493223255" watchObservedRunningTime="2026-03-13 10:25:58.47440765 +0000 UTC m=+1332.496937783" Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.477234 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67d6b4b8f7-nrxn8" event={"ID":"95fe9a38-2b32-411e-9121-ad4cc32f159e","Type":"ContainerStarted","Data":"f8238ac2122bdce07c274a4f41c5a0d859a4162d57594e52444f5d2a425d1e7b"} Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.477379 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67d6b4b8f7-nrxn8" podUID="95fe9a38-2b32-411e-9121-ad4cc32f159e" containerName="horizon-log" containerID="cri-o://d9db78843b825b24c0eab6345b91a7657d2b3f0bb64d65b5dcc125b1edeeb022" gracePeriod=30 Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.477857 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67d6b4b8f7-nrxn8" podUID="95fe9a38-2b32-411e-9121-ad4cc32f159e" containerName="horizon" containerID="cri-o://f8238ac2122bdce07c274a4f41c5a0d859a4162d57594e52444f5d2a425d1e7b" gracePeriod=30 Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.501602 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x8tq8" event={"ID":"d8d0f662-d180-4137-8107-e465c5fb0621","Type":"ContainerStarted","Data":"3b5385b113397b9418c59a941d2a27f232c7b0df4b245db65886e55380c57297"} Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.553058 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-67d6b4b8f7-nrxn8" podStartSLOduration=8.912867412 podStartE2EDuration="53.553034588s" podCreationTimestamp="2026-03-13 10:25:05 +0000 UTC" firstStartedPulling="2026-03-13 10:25:07.693195043 +0000 UTC m=+1281.715725176" lastFinishedPulling="2026-03-13 10:25:52.333362219 +0000 UTC m=+1326.355892352" observedRunningTime="2026-03-13 10:25:58.531367535 +0000 UTC m=+1332.553897668" watchObservedRunningTime="2026-03-13 10:25:58.553034588 +0000 UTC m=+1332.575564721" Mar 13 10:25:58 crc kubenswrapper[4632]: I0313 10:25:58.647963 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-x8tq8" podStartSLOduration=24.64792298 podStartE2EDuration="24.64792298s" podCreationTimestamp="2026-03-13 10:25:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:25:58.567603582 +0000 UTC m=+1332.590133715" watchObservedRunningTime="2026-03-13 
10:25:58.64792298 +0000 UTC m=+1332.670453113" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.093020 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.204365 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-config\") pod \"4cf1d659-89cc-471b-8089-bc85f7ab3578\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.204527 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-ovsdbserver-nb\") pod \"4cf1d659-89cc-471b-8089-bc85f7ab3578\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.204733 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsqxq\" (UniqueName: \"kubernetes.io/projected/4cf1d659-89cc-471b-8089-bc85f7ab3578-kube-api-access-gsqxq\") pod \"4cf1d659-89cc-471b-8089-bc85f7ab3578\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.204793 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-dns-swift-storage-0\") pod \"4cf1d659-89cc-471b-8089-bc85f7ab3578\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.204843 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-ovsdbserver-sb\") pod \"4cf1d659-89cc-471b-8089-bc85f7ab3578\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.204924 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-dns-svc\") pod \"4cf1d659-89cc-471b-8089-bc85f7ab3578\" (UID: \"4cf1d659-89cc-471b-8089-bc85f7ab3578\") " Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.240553 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cf1d659-89cc-471b-8089-bc85f7ab3578-kube-api-access-gsqxq" (OuterVolumeSpecName: "kube-api-access-gsqxq") pod "4cf1d659-89cc-471b-8089-bc85f7ab3578" (UID: "4cf1d659-89cc-471b-8089-bc85f7ab3578"). InnerVolumeSpecName "kube-api-access-gsqxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.307329 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsqxq\" (UniqueName: \"kubernetes.io/projected/4cf1d659-89cc-471b-8089-bc85f7ab3578-kube-api-access-gsqxq\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.320249 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-config" (OuterVolumeSpecName: "config") pod "4cf1d659-89cc-471b-8089-bc85f7ab3578" (UID: "4cf1d659-89cc-471b-8089-bc85f7ab3578"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.320592 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4cf1d659-89cc-471b-8089-bc85f7ab3578" (UID: "4cf1d659-89cc-471b-8089-bc85f7ab3578"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.322321 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4cf1d659-89cc-471b-8089-bc85f7ab3578" (UID: "4cf1d659-89cc-471b-8089-bc85f7ab3578"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.336351 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4cf1d659-89cc-471b-8089-bc85f7ab3578" (UID: "4cf1d659-89cc-471b-8089-bc85f7ab3578"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.372570 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4cf1d659-89cc-471b-8089-bc85f7ab3578" (UID: "4cf1d659-89cc-471b-8089-bc85f7ab3578"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.409672 4632 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.409711 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.409726 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.409739 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.409750 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cf1d659-89cc-471b-8089-bc85f7ab3578-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.557734 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"37dc6e5d-eb14-4cef-9451-7c567c6c9068","Type":"ContainerStarted","Data":"4cf0cebf490653caa207d6e711c23d84bd0c1109c2c15d6d1a3ec573b2a4d48f"} Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.583819 4632 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/horizon-7bdb5f7878-ng2k2" event={"ID":"3e4afb91-ce26-4325-89c9-2542da2ec48a","Type":"ContainerStarted","Data":"dc4a058f6feb7822333693352f32f5677ff03988b7b5b71005c85c4bf733b402"} Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.603336 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c86b4b888-l9574" event={"ID":"6d73a499-d334-4a7a-9783-640b98760672","Type":"ContainerStarted","Data":"e005b4f09b297f1fe00efd39c9534b7382173cd69b88dca5466ba89c0f3c0de7"} Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.604565 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.623164 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" event={"ID":"ff547198-2736-4059-8e66-e63ea9ce7345","Type":"ContainerStarted","Data":"1076485d4d02b6cacd1f94b4c459b88d5309d73c47777ad04b4bed1ee81eb7ff"} Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.624001 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.636931 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7bdb5f7878-ng2k2" podStartSLOduration=45.636912938 podStartE2EDuration="45.636912938s" podCreationTimestamp="2026-03-13 10:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:25:59.633762624 +0000 UTC m=+1333.656292767" watchObservedRunningTime="2026-03-13 10:25:59.636912938 +0000 UTC m=+1333.659443071" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.657790 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" podStartSLOduration=5.657773102 podStartE2EDuration="5.657773102s" podCreationTimestamp="2026-03-13 10:25:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:25:59.656146283 +0000 UTC m=+1333.678676426" watchObservedRunningTime="2026-03-13 10:25:59.657773102 +0000 UTC m=+1333.680303235" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.662270 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bdffbb5c-mpfvf" event={"ID":"6c867fc1-05ed-46c3-99dc-71ef8a09dad3","Type":"ContainerStarted","Data":"027b2c4436a3d137f7ef6a7921904bf128e17aa7812143af60d4d11a546759da"} Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.662322 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bdffbb5c-mpfvf" event={"ID":"6c867fc1-05ed-46c3-99dc-71ef8a09dad3","Type":"ContainerStarted","Data":"bb71081b64258f79a4055c8e129128f47654fe94235aa2a730194da521f70fe1"} Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.682729 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1756bbdc-3e6c-4815-96a7-0620f7400cb7","Type":"ContainerStarted","Data":"4958903559f9d5de9098d0d27c704deb245f10fc57158f3d446a7bff788fb121"} Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.698975 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db978f585-jtbcw" 
event={"ID":"4cf1d659-89cc-471b-8089-bc85f7ab3578","Type":"ContainerDied","Data":"4e0560a7572c040fe38f67491b2340496562a7d42aebf98395ce318dd739fcfb"} Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.699040 4632 scope.go:117] "RemoveContainer" containerID="259722717f7860244e919e91ec8af7531cfacc106ccbd95a7ce0bb8509700a95" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.699202 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db978f585-jtbcw" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.703290 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c86b4b888-l9574" podStartSLOduration=6.703268546 podStartE2EDuration="6.703268546s" podCreationTimestamp="2026-03-13 10:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:25:59.682533256 +0000 UTC m=+1333.705063409" watchObservedRunningTime="2026-03-13 10:25:59.703268546 +0000 UTC m=+1333.725798679" Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.733070 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-689764498d-rg7vt" event={"ID":"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c","Type":"ContainerStarted","Data":"8ce0185281fb59d0c6bda2b2c484ad3711b4bd3b729b4b8677e75ca6b8e1f739"} Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.810980 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db978f585-jtbcw"] Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.860692 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5db978f585-jtbcw"] Mar 13 10:25:59 crc kubenswrapper[4632]: I0313 10:25:59.869646 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-689764498d-rg7vt" podStartSLOduration=44.869622229 podStartE2EDuration="44.869622229s" podCreationTimestamp="2026-03-13 10:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:25:59.805676557 +0000 UTC m=+1333.828206690" watchObservedRunningTime="2026-03-13 10:25:59.869622229 +0000 UTC m=+1333.892152372" Mar 13 10:26:00 crc kubenswrapper[4632]: I0313 10:26:00.058293 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cf1d659-89cc-471b-8089-bc85f7ab3578" path="/var/lib/kubelet/pods/4cf1d659-89cc-471b-8089-bc85f7ab3578/volumes" Mar 13 10:26:00 crc kubenswrapper[4632]: I0313 10:26:00.225045 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556626-z45rd"] Mar 13 10:26:00 crc kubenswrapper[4632]: E0313 10:26:00.225629 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf1d659-89cc-471b-8089-bc85f7ab3578" containerName="init" Mar 13 10:26:00 crc kubenswrapper[4632]: I0313 10:26:00.225649 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf1d659-89cc-471b-8089-bc85f7ab3578" containerName="init" Mar 13 10:26:00 crc kubenswrapper[4632]: I0313 10:26:00.225964 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf1d659-89cc-471b-8089-bc85f7ab3578" containerName="init" Mar 13 10:26:00 crc kubenswrapper[4632]: I0313 10:26:00.226734 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556626-z45rd" Mar 13 10:26:00 crc kubenswrapper[4632]: I0313 10:26:00.229323 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:26:00 crc kubenswrapper[4632]: I0313 10:26:00.231570 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:26:00 crc kubenswrapper[4632]: I0313 10:26:00.232570 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:26:00 crc kubenswrapper[4632]: I0313 10:26:00.249057 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556626-z45rd"] Mar 13 10:26:00 crc kubenswrapper[4632]: I0313 10:26:00.363500 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9ghq\" (UniqueName: \"kubernetes.io/projected/8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27-kube-api-access-r9ghq\") pod \"auto-csr-approver-29556626-z45rd\" (UID: \"8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27\") " pod="openshift-infra/auto-csr-approver-29556626-z45rd" Mar 13 10:26:00 crc kubenswrapper[4632]: I0313 10:26:00.465225 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9ghq\" (UniqueName: \"kubernetes.io/projected/8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27-kube-api-access-r9ghq\") pod \"auto-csr-approver-29556626-z45rd\" (UID: \"8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27\") " pod="openshift-infra/auto-csr-approver-29556626-z45rd" Mar 13 10:26:00 crc kubenswrapper[4632]: I0313 10:26:00.512043 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9ghq\" (UniqueName: \"kubernetes.io/projected/8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27-kube-api-access-r9ghq\") pod \"auto-csr-approver-29556626-z45rd\" (UID: \"8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27\") " pod="openshift-infra/auto-csr-approver-29556626-z45rd" Mar 13 10:26:00 crc kubenswrapper[4632]: I0313 10:26:00.562262 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556626-z45rd"
Mar 13 10:26:00 crc kubenswrapper[4632]: I0313 10:26:00.746754 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1756bbdc-3e6c-4815-96a7-0620f7400cb7","Type":"ContainerStarted","Data":"7d8fa15092ec71e6c12fe0e4bfd626668295f5b687014027b1d5515acb53e02d"}
Mar 13 10:26:04 crc kubenswrapper[4632]: I0313 10:26:04.264418 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556626-z45rd"]
Mar 13 10:26:04 crc kubenswrapper[4632]: I0313 10:26:04.450108 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv"
Mar 13 10:26:04 crc kubenswrapper[4632]: I0313 10:26:04.584780 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d8f9dd5cc-6nktg"]
Mar 13 10:26:04 crc kubenswrapper[4632]: I0313 10:26:04.595470 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" podUID="78e29b83-b50e-46db-a8d6-bba0ecfb5c08" containerName="dnsmasq-dns" containerID="cri-o://0f3b47ae46ac068badd9fe9f0befa9613632a7901ef641ad38d1419cf04cc4af" gracePeriod=10
Mar 13 10:26:04 crc kubenswrapper[4632]: I0313 10:26:04.879527 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bdffbb5c-mpfvf" event={"ID":"6c867fc1-05ed-46c3-99dc-71ef8a09dad3","Type":"ContainerStarted","Data":"bbc256375bc79a61ff656574ec8a596aed3314e7ad4cd2f7fcf6a7462aee3274"}
Mar 13 10:26:04 crc kubenswrapper[4632]: I0313 10:26:04.880263 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64bdffbb5c-mpfvf"
Mar 13 10:26:04 crc kubenswrapper[4632]: I0313 10:26:04.889079 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556626-z45rd" event={"ID":"8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27","Type":"ContainerStarted","Data":"6d61d4f7c0c3be011ec1f0f84978bb250c298a5c766226d624254ef183165b94"}
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.394483 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7bdb5f7878-ng2k2"
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.395429 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bdb5f7878-ng2k2"
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.523806 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg"
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.546522 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64bdffbb5c-mpfvf" podStartSLOduration=9.546502741 podStartE2EDuration="9.546502741s" podCreationTimestamp="2026-03-13 10:25:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:26:04.920661708 +0000 UTC m=+1338.943191841" watchObservedRunningTime="2026-03-13 10:26:05.546502741 +0000 UTC m=+1339.569032874"
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.694669 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29zt6\" (UniqueName: \"kubernetes.io/projected/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-kube-api-access-29zt6\") pod \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") "
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.694751 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-config\") pod \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") "
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.694794 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-dns-swift-storage-0\") pod \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") "
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.694830 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-ovsdbserver-nb\") pod \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") "
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.695021 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-dns-svc\") pod \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") "
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.695082 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-ovsdbserver-sb\") pod \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\" (UID: \"78e29b83-b50e-46db-a8d6-bba0ecfb5c08\") "
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.704299 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-kube-api-access-29zt6" (OuterVolumeSpecName: "kube-api-access-29zt6") pod "78e29b83-b50e-46db-a8d6-bba0ecfb5c08" (UID: "78e29b83-b50e-46db-a8d6-bba0ecfb5c08"). InnerVolumeSpecName "kube-api-access-29zt6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.797530 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29zt6\" (UniqueName: \"kubernetes.io/projected/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-kube-api-access-29zt6\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.812003 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "78e29b83-b50e-46db-a8d6-bba0ecfb5c08" (UID: "78e29b83-b50e-46db-a8d6-bba0ecfb5c08"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.812347 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "78e29b83-b50e-46db-a8d6-bba0ecfb5c08" (UID: "78e29b83-b50e-46db-a8d6-bba0ecfb5c08"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.856696 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-689764498d-rg7vt"
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.858334 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-689764498d-rg7vt"
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.862698 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "78e29b83-b50e-46db-a8d6-bba0ecfb5c08" (UID: "78e29b83-b50e-46db-a8d6-bba0ecfb5c08"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.881926 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "78e29b83-b50e-46db-a8d6-bba0ecfb5c08" (UID: "78e29b83-b50e-46db-a8d6-bba0ecfb5c08"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.885639 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-config" (OuterVolumeSpecName: "config") pod "78e29b83-b50e-46db-a8d6-bba0ecfb5c08" (UID: "78e29b83-b50e-46db-a8d6-bba0ecfb5c08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.900685 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-config\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.900719 4632 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.900735 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.900749 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.900759 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78e29b83-b50e-46db-a8d6-bba0ecfb5c08-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.917549 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"37dc6e5d-eb14-4cef-9451-7c567c6c9068","Type":"ContainerStarted","Data":"82cadbe6c9dcc57c6c514bcdc80e74daabbed007f9160e45e58a214195d92a1e"}
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.917718 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="37dc6e5d-eb14-4cef-9451-7c567c6c9068" containerName="glance-log" containerID="cri-o://4cf0cebf490653caa207d6e711c23d84bd0c1109c2c15d6d1a3ec573b2a4d48f" gracePeriod=30
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.918153 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="37dc6e5d-eb14-4cef-9451-7c567c6c9068" containerName="glance-httpd" containerID="cri-o://82cadbe6c9dcc57c6c514bcdc80e74daabbed007f9160e45e58a214195d92a1e" gracePeriod=30
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.937688 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270ebc10-986f-4473-8a5e-9094de34ae98","Type":"ContainerStarted","Data":"b63cc4f80efbb7b17b044808a5b6c8d5aa98b9e2ae8e38ab95a55c4e3ba911d1"}
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.948211 4632 generic.go:334] "Generic (PLEG): container finished" podID="78e29b83-b50e-46db-a8d6-bba0ecfb5c08" containerID="0f3b47ae46ac068badd9fe9f0befa9613632a7901ef641ad38d1419cf04cc4af" exitCode=0
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.948279 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" event={"ID":"78e29b83-b50e-46db-a8d6-bba0ecfb5c08","Type":"ContainerDied","Data":"0f3b47ae46ac068badd9fe9f0befa9613632a7901ef641ad38d1419cf04cc4af"}
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.948308 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg" event={"ID":"78e29b83-b50e-46db-a8d6-bba0ecfb5c08","Type":"ContainerDied","Data":"5187e6e9a0835d8922aa8452723fd7620bf5222c8a96f16a5be9778d8386494d"}
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.948325 4632 scope.go:117] "RemoveContainer" containerID="0f3b47ae46ac068badd9fe9f0befa9613632a7901ef641ad38d1419cf04cc4af"
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.948474 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d8f9dd5cc-6nktg"
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.980078 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1756bbdc-3e6c-4815-96a7-0620f7400cb7" containerName="glance-log" containerID="cri-o://7d8fa15092ec71e6c12fe0e4bfd626668295f5b687014027b1d5515acb53e02d" gracePeriod=30
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.980656 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1756bbdc-3e6c-4815-96a7-0620f7400cb7","Type":"ContainerStarted","Data":"859ff52dd31106c27267e2e88fdd9f3088d59cb66aea2491829bb3b779d0c030"}
Mar 13 10:26:05 crc kubenswrapper[4632]: I0313 10:26:05.983252 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1756bbdc-3e6c-4815-96a7-0620f7400cb7" containerName="glance-httpd" containerID="cri-o://859ff52dd31106c27267e2e88fdd9f3088d59cb66aea2491829bb3b779d0c030" gracePeriod=30
Mar 13 10:26:06 crc kubenswrapper[4632]: I0313 10:26:06.014798 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=29.01478288 podStartE2EDuration="29.01478288s" podCreationTimestamp="2026-03-13 10:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:26:05.960259601 +0000 UTC m=+1339.982789754" watchObservedRunningTime="2026-03-13 10:26:06.01478288 +0000 UTC m=+1340.037313013"
Mar 13 10:26:06 crc kubenswrapper[4632]: I0313 10:26:06.019459 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=28.01944629 podStartE2EDuration="28.01944629s" podCreationTimestamp="2026-03-13 10:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:26:06.010012007 +0000 UTC m=+1340.032542160" watchObservedRunningTime="2026-03-13 10:26:06.01944629 +0000 UTC m=+1340.041976423"
Mar 13 10:26:06 crc kubenswrapper[4632]: I0313 10:26:06.086697 4632 scope.go:117] "RemoveContainer" containerID="80dc69ee9ec968911aeb73d01429c087d3144584fba19b07c0a8b37e75187f19"
Mar 13 10:26:06 crc kubenswrapper[4632]: I0313 10:26:06.115493 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d8f9dd5cc-6nktg"]
Mar 13 10:26:06 crc kubenswrapper[4632]: I0313 10:26:06.127303 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d8f9dd5cc-6nktg"]
Mar 13 10:26:06 crc kubenswrapper[4632]: I0313 10:26:06.406587 4632 scope.go:117] "RemoveContainer" containerID="0f3b47ae46ac068badd9fe9f0befa9613632a7901ef641ad38d1419cf04cc4af"
Mar 13 10:26:06 crc kubenswrapper[4632]: E0313 10:26:06.409756 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f3b47ae46ac068badd9fe9f0befa9613632a7901ef641ad38d1419cf04cc4af\": container with ID starting with 0f3b47ae46ac068badd9fe9f0befa9613632a7901ef641ad38d1419cf04cc4af not found: ID does not exist" containerID="0f3b47ae46ac068badd9fe9f0befa9613632a7901ef641ad38d1419cf04cc4af"
Mar 13 10:26:06 crc kubenswrapper[4632]: I0313 10:26:06.409830 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f3b47ae46ac068badd9fe9f0befa9613632a7901ef641ad38d1419cf04cc4af"} err="failed to get container status \"0f3b47ae46ac068badd9fe9f0befa9613632a7901ef641ad38d1419cf04cc4af\": rpc error: code = NotFound desc = could not find container \"0f3b47ae46ac068badd9fe9f0befa9613632a7901ef641ad38d1419cf04cc4af\": container with ID starting with 0f3b47ae46ac068badd9fe9f0befa9613632a7901ef641ad38d1419cf04cc4af not found: ID does not exist"
Mar 13 10:26:06 crc kubenswrapper[4632]: I0313 10:26:06.409876 4632 scope.go:117] "RemoveContainer" containerID="80dc69ee9ec968911aeb73d01429c087d3144584fba19b07c0a8b37e75187f19"
Mar 13 10:26:06 crc kubenswrapper[4632]: E0313 10:26:06.410640 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80dc69ee9ec968911aeb73d01429c087d3144584fba19b07c0a8b37e75187f19\": container with ID starting with 80dc69ee9ec968911aeb73d01429c087d3144584fba19b07c0a8b37e75187f19 not found: ID does not exist" containerID="80dc69ee9ec968911aeb73d01429c087d3144584fba19b07c0a8b37e75187f19"
Mar 13 10:26:06 crc kubenswrapper[4632]: I0313 10:26:06.410671 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80dc69ee9ec968911aeb73d01429c087d3144584fba19b07c0a8b37e75187f19"} err="failed to get container status \"80dc69ee9ec968911aeb73d01429c087d3144584fba19b07c0a8b37e75187f19\": rpc error: code = NotFound desc = could not find container \"80dc69ee9ec968911aeb73d01429c087d3144584fba19b07c0a8b37e75187f19\": container with ID starting with 80dc69ee9ec968911aeb73d01429c087d3144584fba19b07c0a8b37e75187f19 not found: ID does not exist"
Mar 13 10:26:06 crc kubenswrapper[4632]: I0313 10:26:06.618142 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67d6b4b8f7-nrxn8"
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.086109 4632 generic.go:334] "Generic (PLEG): container finished" podID="1756bbdc-3e6c-4815-96a7-0620f7400cb7" containerID="859ff52dd31106c27267e2e88fdd9f3088d59cb66aea2491829bb3b779d0c030" exitCode=143
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.086562 4632 generic.go:334] "Generic (PLEG): container finished" podID="1756bbdc-3e6c-4815-96a7-0620f7400cb7" containerID="7d8fa15092ec71e6c12fe0e4bfd626668295f5b687014027b1d5515acb53e02d" exitCode=143
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.086636 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1756bbdc-3e6c-4815-96a7-0620f7400cb7","Type":"ContainerDied","Data":"859ff52dd31106c27267e2e88fdd9f3088d59cb66aea2491829bb3b779d0c030"}
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.086664 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1756bbdc-3e6c-4815-96a7-0620f7400cb7","Type":"ContainerDied","Data":"7d8fa15092ec71e6c12fe0e4bfd626668295f5b687014027b1d5515acb53e02d"}
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.099036 4632 generic.go:334] "Generic (PLEG): container finished" podID="37dc6e5d-eb14-4cef-9451-7c567c6c9068" containerID="82cadbe6c9dcc57c6c514bcdc80e74daabbed007f9160e45e58a214195d92a1e" exitCode=143
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.099072 4632 generic.go:334] "Generic (PLEG): container finished" podID="37dc6e5d-eb14-4cef-9451-7c567c6c9068" containerID="4cf0cebf490653caa207d6e711c23d84bd0c1109c2c15d6d1a3ec573b2a4d48f" exitCode=143
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.099118 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"37dc6e5d-eb14-4cef-9451-7c567c6c9068","Type":"ContainerDied","Data":"82cadbe6c9dcc57c6c514bcdc80e74daabbed007f9160e45e58a214195d92a1e"}
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.099152 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"37dc6e5d-eb14-4cef-9451-7c567c6c9068","Type":"ContainerDied","Data":"4cf0cebf490653caa207d6e711c23d84bd0c1109c2c15d6d1a3ec573b2a4d48f"}
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.101994 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556626-z45rd" event={"ID":"8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27","Type":"ContainerStarted","Data":"e8fc7f9526396e3f4333f93ccef86f72aee3214939c63a5e8145c990bbf9d938"}
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.137216 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556626-z45rd" podStartSLOduration=6.105890334 podStartE2EDuration="7.137198552s" podCreationTimestamp="2026-03-13 10:26:00 +0000 UTC" firstStartedPulling="2026-03-13 10:26:04.256514908 +0000 UTC m=+1338.279045041" lastFinishedPulling="2026-03-13 10:26:05.287823126 +0000 UTC m=+1339.310353259" observedRunningTime="2026-03-13 10:26:07.132212913 +0000 UTC m=+1341.154743046" watchObservedRunningTime="2026-03-13 10:26:07.137198552 +0000 UTC m=+1341.159728685"
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.323778 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.345966 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.452138 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") "
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.452225 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-combined-ca-bundle\") pod \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") "
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.452282 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37dc6e5d-eb14-4cef-9451-7c567c6c9068-httpd-run\") pod \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") "
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.452311 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-scripts\") pod \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") "
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.452340 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jn72\" (UniqueName: \"kubernetes.io/projected/1756bbdc-3e6c-4815-96a7-0620f7400cb7-kube-api-access-7jn72\") pod \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") "
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.452367 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rqnd\" (UniqueName: \"kubernetes.io/projected/37dc6e5d-eb14-4cef-9451-7c567c6c9068-kube-api-access-5rqnd\") pod \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") "
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.452391 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1756bbdc-3e6c-4815-96a7-0620f7400cb7-logs\") pod \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") "
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.452431 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-combined-ca-bundle\") pod \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") "
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.452481 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-scripts\") pod \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") "
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.452542 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37dc6e5d-eb14-4cef-9451-7c567c6c9068-logs\") pod \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") "
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.452581 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-config-data\") pod \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\" (UID: \"37dc6e5d-eb14-4cef-9451-7c567c6c9068\") "
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.452601 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") "
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.452646 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1756bbdc-3e6c-4815-96a7-0620f7400cb7-httpd-run\") pod \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") "
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.452663 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-config-data\") pod \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\" (UID: \"1756bbdc-3e6c-4815-96a7-0620f7400cb7\") "
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.453541 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1756bbdc-3e6c-4815-96a7-0620f7400cb7-logs" (OuterVolumeSpecName: "logs") pod "1756bbdc-3e6c-4815-96a7-0620f7400cb7" (UID: "1756bbdc-3e6c-4815-96a7-0620f7400cb7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.454105 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37dc6e5d-eb14-4cef-9451-7c567c6c9068-logs" (OuterVolumeSpecName: "logs") pod "37dc6e5d-eb14-4cef-9451-7c567c6c9068" (UID: "37dc6e5d-eb14-4cef-9451-7c567c6c9068"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.465227 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1756bbdc-3e6c-4815-96a7-0620f7400cb7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1756bbdc-3e6c-4815-96a7-0620f7400cb7" (UID: "1756bbdc-3e6c-4815-96a7-0620f7400cb7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.495539 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37dc6e5d-eb14-4cef-9451-7c567c6c9068-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "37dc6e5d-eb14-4cef-9451-7c567c6c9068" (UID: "37dc6e5d-eb14-4cef-9451-7c567c6c9068"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.510133 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "1756bbdc-3e6c-4815-96a7-0620f7400cb7" (UID: "1756bbdc-3e6c-4815-96a7-0620f7400cb7"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.510270 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "37dc6e5d-eb14-4cef-9451-7c567c6c9068" (UID: "37dc6e5d-eb14-4cef-9451-7c567c6c9068"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.535310 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1756bbdc-3e6c-4815-96a7-0620f7400cb7-kube-api-access-7jn72" (OuterVolumeSpecName: "kube-api-access-7jn72") pod "1756bbdc-3e6c-4815-96a7-0620f7400cb7" (UID: "1756bbdc-3e6c-4815-96a7-0620f7400cb7"). InnerVolumeSpecName "kube-api-access-7jn72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.535336 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-scripts" (OuterVolumeSpecName: "scripts") pod "37dc6e5d-eb14-4cef-9451-7c567c6c9068" (UID: "37dc6e5d-eb14-4cef-9451-7c567c6c9068"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.537105 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-scripts" (OuterVolumeSpecName: "scripts") pod "1756bbdc-3e6c-4815-96a7-0620f7400cb7" (UID: "1756bbdc-3e6c-4815-96a7-0620f7400cb7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.549535 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37dc6e5d-eb14-4cef-9451-7c567c6c9068-kube-api-access-5rqnd" (OuterVolumeSpecName: "kube-api-access-5rqnd") pod "37dc6e5d-eb14-4cef-9451-7c567c6c9068" (UID: "37dc6e5d-eb14-4cef-9451-7c567c6c9068"). InnerVolumeSpecName "kube-api-access-5rqnd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.554711 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37dc6e5d-eb14-4cef-9451-7c567c6c9068-logs\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.554780 4632 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.554807 4632 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1756bbdc-3e6c-4815-96a7-0620f7400cb7-httpd-run\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.554820 4632 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.554830 4632 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37dc6e5d-eb14-4cef-9451-7c567c6c9068-httpd-run\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.554840 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.554849 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jn72\" (UniqueName: \"kubernetes.io/projected/1756bbdc-3e6c-4815-96a7-0620f7400cb7-kube-api-access-7jn72\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.554860 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rqnd\" (UniqueName: \"kubernetes.io/projected/37dc6e5d-eb14-4cef-9451-7c567c6c9068-kube-api-access-5rqnd\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.554868 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1756bbdc-3e6c-4815-96a7-0620f7400cb7-logs\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.554890 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.588531 4632 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.601293 4632 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.802577 4632 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.802611 4632 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.878133 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-config-data" (OuterVolumeSpecName: "config-data") pod "37dc6e5d-eb14-4cef-9451-7c567c6c9068" (UID: "37dc6e5d-eb14-4cef-9451-7c567c6c9068"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.894358 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37dc6e5d-eb14-4cef-9451-7c567c6c9068" (UID: "37dc6e5d-eb14-4cef-9451-7c567c6c9068"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.915198 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.915264 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dc6e5d-eb14-4cef-9451-7c567c6c9068-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.936791 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1756bbdc-3e6c-4815-96a7-0620f7400cb7" (UID: "1756bbdc-3e6c-4815-96a7-0620f7400cb7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:26:07 crc kubenswrapper[4632]: I0313 10:26:07.943099 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-config-data" (OuterVolumeSpecName: "config-data") pod "1756bbdc-3e6c-4815-96a7-0620f7400cb7" (UID: "1756bbdc-3e6c-4815-96a7-0620f7400cb7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.017441 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.017482 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1756bbdc-3e6c-4815-96a7-0620f7400cb7-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.077170 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78e29b83-b50e-46db-a8d6-bba0ecfb5c08" path="/var/lib/kubelet/pods/78e29b83-b50e-46db-a8d6-bba0ecfb5c08/volumes"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.186577 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"37dc6e5d-eb14-4cef-9451-7c567c6c9068","Type":"ContainerDied","Data":"064fad76db398b97a6a04386f16fbe17c9bebd9b23d2f3264f42bd5bbfc7916f"}
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.186655 4632 scope.go:117] "RemoveContainer" containerID="82cadbe6c9dcc57c6c514bcdc80e74daabbed007f9160e45e58a214195d92a1e"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.186925 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.256738 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.260589 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1756bbdc-3e6c-4815-96a7-0620f7400cb7","Type":"ContainerDied","Data":"4958903559f9d5de9098d0d27c704deb245f10fc57158f3d446a7bff788fb121"}
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.300720 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.337191 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.381014 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.465419 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.497912 4632 scope.go:117] "RemoveContainer" containerID="4cf0cebf490653caa207d6e711c23d84bd0c1109c2c15d6d1a3ec573b2a4d48f"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.511852 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 13 10:26:08 crc kubenswrapper[4632]: E0313 10:26:08.529790 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e29b83-b50e-46db-a8d6-bba0ecfb5c08" containerName="dnsmasq-dns"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.529843 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e29b83-b50e-46db-a8d6-bba0ecfb5c08" containerName="dnsmasq-dns"
Mar 13 10:26:08 crc kubenswrapper[4632]: E0313 10:26:08.529862 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e29b83-b50e-46db-a8d6-bba0ecfb5c08" containerName="init"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.529871 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e29b83-b50e-46db-a8d6-bba0ecfb5c08" containerName="init"
Mar 13 10:26:08 crc kubenswrapper[4632]: E0313 10:26:08.529890 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dc6e5d-eb14-4cef-9451-7c567c6c9068" containerName="glance-httpd"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.529897 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dc6e5d-eb14-4cef-9451-7c567c6c9068" containerName="glance-httpd"
Mar 13 10:26:08 crc kubenswrapper[4632]: E0313 10:26:08.529913 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dc6e5d-eb14-4cef-9451-7c567c6c9068" containerName="glance-log"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.529920 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dc6e5d-eb14-4cef-9451-7c567c6c9068" containerName="glance-log"
Mar 13 10:26:08 crc kubenswrapper[4632]: E0313 10:26:08.529955 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1756bbdc-3e6c-4815-96a7-0620f7400cb7" containerName="glance-log"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.529962 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="1756bbdc-3e6c-4815-96a7-0620f7400cb7" containerName="glance-log"
Mar 13 10:26:08 crc kubenswrapper[4632]: E0313 10:26:08.529979 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1756bbdc-3e6c-4815-96a7-0620f7400cb7" containerName="glance-httpd"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.529987 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="1756bbdc-3e6c-4815-96a7-0620f7400cb7" containerName="glance-httpd"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.530235 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="1756bbdc-3e6c-4815-96a7-0620f7400cb7" containerName="glance-log"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.530267 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="1756bbdc-3e6c-4815-96a7-0620f7400cb7" containerName="glance-httpd"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.530285 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dc6e5d-eb14-4cef-9451-7c567c6c9068" containerName="glance-log"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.530298 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dc6e5d-eb14-4cef-9451-7c567c6c9068" containerName="glance-httpd"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.530315 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e29b83-b50e-46db-a8d6-bba0ecfb5c08" containerName="dnsmasq-dns"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.531740 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.549536 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.582305 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.582567 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.582664 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.582971 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qpd5p"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.604502 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.621901 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.629355 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.629665 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.658222 4632 scope.go:117] "RemoveContainer" containerID="859ff52dd31106c27267e2e88fdd9f3088d59cb66aea2491829bb3b779d0c030"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.668558 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.689673 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.689749 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbnq2\" (UniqueName: \"kubernetes.io/projected/62c1f3f8-e898-4481-88e0-49f0c20228a4-kube-api-access-gbnq2\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.689814 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-config-data\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.689843 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62c1f3f8-e898-4481-88e0-49f0c20228a4-logs\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.689897 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.689990 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.690009 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-scripts\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.690070 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/62c1f3f8-e898-4481-88e0-49f0c20228a4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.792648 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050df504-63b9-4453-be2b-f3b0315fb801-logs\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.792763 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/62c1f3f8-e898-4481-88e0-49f0c20228a4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.792815 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.792839 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/050df504-63b9-4453-be2b-f3b0315fb801-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.792870 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.792908 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5bl8\" (UniqueName: \"kubernetes.io/projected/050df504-63b9-4453-be2b-f3b0315fb801-kube-api-access-z5bl8\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.792961 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.793049 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbnq2\" (UniqueName: \"kubernetes.io/projected/62c1f3f8-e898-4481-88e0-49f0c20228a4-kube-api-access-gbnq2\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.793077 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-config-data\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.793104 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62c1f3f8-e898-4481-88e0-49f0c20228a4-logs\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.793159 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-scripts\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.793195 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.793275 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.793329 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-config-data\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.793360 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.793390 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-scripts\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.795072 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/62c1f3f8-e898-4481-88e0-49f0c20228a4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.795808 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.803417 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62c1f3f8-e898-4481-88e0-49f0c20228a4-logs\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.818398 4632 scope.go:117] "RemoveContainer" containerID="7d8fa15092ec71e6c12fe0e4bfd626668295f5b687014027b1d5515acb53e02d"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.843919 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.845622 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.858302 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbnq2\" (UniqueName: \"kubernetes.io/projected/62c1f3f8-e898-4481-88e0-49f0c20228a4-kube-api-access-gbnq2\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.886513 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-scripts\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.888228 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-config-data\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.892856 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.894589 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050df504-63b9-4453-be2b-f3b0315fb801-logs\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.894658 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.894676 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/050df504-63b9-4453-be2b-f3b0315fb801-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.894696 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.894718 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5bl8\" (UniqueName: \"kubernetes.io/projected/050df504-63b9-4453-be2b-f3b0315fb801-kube-api-access-z5bl8\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.894793 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-scripts\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.894849 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.894879 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-config-data\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.900791 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050df504-63b9-4453-be2b-f3b0315fb801-logs\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.901255 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.908957 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/050df504-63b9-4453-be2b-f3b0315fb801-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.925196 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.926179 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-config-data\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.966906 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.967721 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-scripts\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.986500 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5bl8\" (UniqueName: \"kubernetes.io/projected/050df504-63b9-4453-be2b-f3b0315fb801-kube-api-access-z5bl8\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:08 crc kubenswrapper[4632]: I0313 10:26:08.990296 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:09 crc kubenswrapper[4632]: I0313 10:26:09.183037 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 13 10:26:09 crc kubenswrapper[4632]: I0313 10:26:09.284035 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:10 crc kubenswrapper[4632]: I0313 10:26:10.103506 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1756bbdc-3e6c-4815-96a7-0620f7400cb7" path="/var/lib/kubelet/pods/1756bbdc-3e6c-4815-96a7-0620f7400cb7/volumes"
Mar 13 10:26:10 crc kubenswrapper[4632]: I0313 10:26:10.106002 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37dc6e5d-eb14-4cef-9451-7c567c6c9068" path="/var/lib/kubelet/pods/37dc6e5d-eb14-4cef-9451-7c567c6c9068/volumes"
Mar 13 10:26:10 crc kubenswrapper[4632]: I0313 10:26:10.427830 4632 generic.go:334] "Generic (PLEG): container finished" podID="8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27" containerID="e8fc7f9526396e3f4333f93ccef86f72aee3214939c63a5e8145c990bbf9d938" exitCode=0
Mar 13 10:26:10 crc kubenswrapper[4632]: I0313 10:26:10.427904 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556626-z45rd" event={"ID":"8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27","Type":"ContainerDied","Data":"e8fc7f9526396e3f4333f93ccef86f72aee3214939c63a5e8145c990bbf9d938"}
Mar 13 10:26:10 crc kubenswrapper[4632]: I0313 10:26:10.647440 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 13 10:26:10 crc kubenswrapper[4632]: W0313 10:26:10.661084 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62c1f3f8_e898_4481_88e0_49f0c20228a4.slice/crio-da67a58c5a020c95fb415df6f51542675c8d6697cd1fafcacdcc7d6081f0a9ff WatchSource:0}: Error finding container da67a58c5a020c95fb415df6f51542675c8d6697cd1fafcacdcc7d6081f0a9ff: Status 404 returned error can't find the container with id da67a58c5a020c95fb415df6f51542675c8d6697cd1fafcacdcc7d6081f0a9ff
Mar 13 10:26:10 crc kubenswrapper[4632]: I0313 10:26:10.860403 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 13 10:26:11 crc kubenswrapper[4632]: I0313 10:26:11.480752 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kq8lc" event={"ID":"8f916c05-f172-42b6-9b13-0c8d2058bfb1","Type":"ContainerStarted","Data":"6d5ac5d7a6aab5517e4300c2e14808710d4f8cfa4977c9841f6552b262144012"}
Mar 13 10:26:11 crc kubenswrapper[4632]: I0313 10:26:11.493927 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"62c1f3f8-e898-4481-88e0-49f0c20228a4","Type":"ContainerStarted","Data":"da67a58c5a020c95fb415df6f51542675c8d6697cd1fafcacdcc7d6081f0a9ff"}
Mar 13 10:26:11 crc kubenswrapper[4632]: I0313 10:26:11.497790 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"050df504-63b9-4453-be2b-f3b0315fb801","Type":"ContainerStarted","Data":"639dbfcf9c85b2d6df276ce37ddb572204028d4a54aa36f7c4d3026c9ff6abfc"}
Mar 13 10:26:11 crc kubenswrapper[4632]: I0313 10:26:11.521994 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-kq8lc" podStartSLOduration=7.33701618 podStartE2EDuration="1m7.521970178s" podCreationTimestamp="2026-03-13 10:25:04 +0000 UTC" firstStartedPulling="2026-03-13 10:25:07.27532772 +0000 UTC m=+1281.297857853" lastFinishedPulling="2026-03-13 10:26:07.460281718 +0000 UTC m=+1341.482811851" observedRunningTime="2026-03-13 10:26:11.515435574 +0000 UTC m=+1345.537965717" watchObservedRunningTime="2026-03-13 10:26:11.521970178 +0000 UTC m=+1345.544500331"
Mar 13 10:26:12 crc kubenswrapper[4632]: I0313 10:26:12.003170 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556626-z45rd"
Mar 13 10:26:12 crc kubenswrapper[4632]: I0313 10:26:12.164590 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9ghq\" (UniqueName: \"kubernetes.io/projected/8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27-kube-api-access-r9ghq\") pod \"8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27\" (UID: \"8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27\") "
Mar 13 10:26:12 crc kubenswrapper[4632]: I0313 10:26:12.198223 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27-kube-api-access-r9ghq" (OuterVolumeSpecName: "kube-api-access-r9ghq") pod "8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27" (UID: "8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27"). InnerVolumeSpecName "kube-api-access-r9ghq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:26:12 crc kubenswrapper[4632]: I0313 10:26:12.283223 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9ghq\" (UniqueName: \"kubernetes.io/projected/8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27-kube-api-access-r9ghq\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:12 crc kubenswrapper[4632]: I0313 10:26:12.533623 4632 generic.go:334] "Generic (PLEG): container finished" podID="e92afa62-9c75-4e0e-92f4-76e57328d7a0" containerID="68a82ec143a93c9f66b6d5e73e70ead182bba11acadf06a0bc0700ee8971357d" exitCode=0
Mar 13 10:26:12 crc kubenswrapper[4632]: I0313 10:26:12.533743 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-htnd9" event={"ID":"e92afa62-9c75-4e0e-92f4-76e57328d7a0","Type":"ContainerDied","Data":"68a82ec143a93c9f66b6d5e73e70ead182bba11acadf06a0bc0700ee8971357d"}
Mar 13 10:26:12 crc kubenswrapper[4632]: I0313 10:26:12.557645 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"62c1f3f8-e898-4481-88e0-49f0c20228a4","Type":"ContainerStarted","Data":"75e2995816c15269a0e0bb8513c4f7b9cace1b33dd417df2fc8f694c18b89fa0"}
Mar 13 10:26:12 crc kubenswrapper[4632]: I0313 10:26:12.586227 4632 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556626-z45rd" Mar 13 10:26:12 crc kubenswrapper[4632]: I0313 10:26:12.586327 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556626-z45rd" event={"ID":"8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27","Type":"ContainerDied","Data":"6d61d4f7c0c3be011ec1f0f84978bb250c298a5c766226d624254ef183165b94"} Mar 13 10:26:12 crc kubenswrapper[4632]: I0313 10:26:12.586377 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d61d4f7c0c3be011ec1f0f84978bb250c298a5c766226d624254ef183165b94" Mar 13 10:26:12 crc kubenswrapper[4632]: I0313 10:26:12.589251 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"050df504-63b9-4453-be2b-f3b0315fb801","Type":"ContainerStarted","Data":"4286cd55d064d024725ded90d153143e568de28aeedc6a6060f69501102dd4cb"} Mar 13 10:26:12 crc kubenswrapper[4632]: I0313 10:26:12.625324 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556620-42vs6"] Mar 13 10:26:12 crc kubenswrapper[4632]: I0313 10:26:12.650531 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556620-42vs6"] Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.064745 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28dbef1d-ca7f-4387-80af-8dffbfe92895" path="/var/lib/kubelet/pods/28dbef1d-ca7f-4387-80af-8dffbfe92895/volumes" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.401645 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-htnd9" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.461267 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e92afa62-9c75-4e0e-92f4-76e57328d7a0-logs\") pod \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.461337 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqp5h\" (UniqueName: \"kubernetes.io/projected/e92afa62-9c75-4e0e-92f4-76e57328d7a0-kube-api-access-mqp5h\") pod \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.461388 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-config-data\") pod \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.461558 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-combined-ca-bundle\") pod \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.461699 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-scripts\") pod \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\" (UID: \"e92afa62-9c75-4e0e-92f4-76e57328d7a0\") " Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.470373 4632 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e92afa62-9c75-4e0e-92f4-76e57328d7a0-logs" (OuterVolumeSpecName: "logs") pod "e92afa62-9c75-4e0e-92f4-76e57328d7a0" (UID: "e92afa62-9c75-4e0e-92f4-76e57328d7a0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.518360 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e92afa62-9c75-4e0e-92f4-76e57328d7a0-kube-api-access-mqp5h" (OuterVolumeSpecName: "kube-api-access-mqp5h") pod "e92afa62-9c75-4e0e-92f4-76e57328d7a0" (UID: "e92afa62-9c75-4e0e-92f4-76e57328d7a0"). InnerVolumeSpecName "kube-api-access-mqp5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.518509 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-scripts" (OuterVolumeSpecName: "scripts") pod "e92afa62-9c75-4e0e-92f4-76e57328d7a0" (UID: "e92afa62-9c75-4e0e-92f4-76e57328d7a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.561649 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-config-data" (OuterVolumeSpecName: "config-data") pod "e92afa62-9c75-4e0e-92f4-76e57328d7a0" (UID: "e92afa62-9c75-4e0e-92f4-76e57328d7a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.564211 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.564486 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e92afa62-9c75-4e0e-92f4-76e57328d7a0-logs\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.564585 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqp5h\" (UniqueName: \"kubernetes.io/projected/e92afa62-9c75-4e0e-92f4-76e57328d7a0-kube-api-access-mqp5h\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.564832 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.568563 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e92afa62-9c75-4e0e-92f4-76e57328d7a0" (UID: "e92afa62-9c75-4e0e-92f4-76e57328d7a0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.666096 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e92afa62-9c75-4e0e-92f4-76e57328d7a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.687502 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-htnd9" event={"ID":"e92afa62-9c75-4e0e-92f4-76e57328d7a0","Type":"ContainerDied","Data":"fa8253910988ff0dbee81a3230f0ff84637c4204c805ed0e40f0cc26f23d5381"} Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.688050 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa8253910988ff0dbee81a3230f0ff84637c4204c805ed0e40f0cc26f23d5381" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.688205 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-htnd9" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.698313 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"62c1f3f8-e898-4481-88e0-49f0c20228a4","Type":"ContainerStarted","Data":"60e19c69317a817c5bf104bc8691bdf46121d52039ad19099e25f869718b8e19"} Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.719152 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"050df504-63b9-4453-be2b-f3b0315fb801","Type":"ContainerStarted","Data":"a06e9823c7700968605c221a9839cf4f237fe6a7eee8836d69bade62686f4372"} Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.732021 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7dd5c7bdcd-4969b"] Mar 13 10:26:14 crc kubenswrapper[4632]: E0313 10:26:14.732606 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e92afa62-9c75-4e0e-92f4-76e57328d7a0" containerName="placement-db-sync" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.732626 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="e92afa62-9c75-4e0e-92f4-76e57328d7a0" containerName="placement-db-sync" Mar 13 10:26:14 crc kubenswrapper[4632]: E0313 10:26:14.732641 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27" containerName="oc" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.732648 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27" containerName="oc" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.732852 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27" containerName="oc" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.732873 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="e92afa62-9c75-4e0e-92f4-76e57328d7a0" containerName="placement-db-sync" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.734682 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.770476 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7dd5c7bdcd-4969b"] Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.776913 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.776889377 podStartE2EDuration="6.776889377s" podCreationTimestamp="2026-03-13 10:26:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:26:14.730526841 +0000 UTC m=+1348.753056984" watchObservedRunningTime="2026-03-13 10:26:14.776889377 +0000 UTC m=+1348.799419510" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.777905 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.778396 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6tvl4" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.778647 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.778983 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.779180 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.814692 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.81467343 podStartE2EDuration="6.81467343s" podCreationTimestamp="2026-03-13 10:26:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:26:14.813784109 +0000 UTC m=+1348.836314242" watchObservedRunningTime="2026-03-13 10:26:14.81467343 +0000 UTC m=+1348.837203573" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.871038 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-internal-tls-certs\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.871416 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-config-data\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.871599 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-combined-ca-bundle\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.871753 4632 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk8t2\" (UniqueName: \"kubernetes.io/projected/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-kube-api-access-xk8t2\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.871902 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-public-tls-certs\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.873542 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-logs\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.873750 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-scripts\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.975428 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-internal-tls-certs\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.975493 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-config-data\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.975554 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-combined-ca-bundle\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.975597 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk8t2\" (UniqueName: \"kubernetes.io/projected/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-kube-api-access-xk8t2\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.975633 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-public-tls-certs\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.975665 4632 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-logs\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.975719 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-scripts\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.977395 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-logs\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.982150 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-config-data\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.982717 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-scripts\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.983237 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-public-tls-certs\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.983868 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-internal-tls-certs\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.986649 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-combined-ca-bundle\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:14 crc kubenswrapper[4632]: I0313 10:26:14.997469 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk8t2\" (UniqueName: \"kubernetes.io/projected/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-kube-api-access-xk8t2\") pod \"placement-7dd5c7bdcd-4969b\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:15 crc kubenswrapper[4632]: I0313 10:26:15.111807 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:15 crc kubenswrapper[4632]: I0313 10:26:15.400370 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Mar 13 10:26:15 crc kubenswrapper[4632]: I0313 10:26:15.858588 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-689764498d-rg7vt" podUID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 13 10:26:17 crc kubenswrapper[4632]: I0313 10:26:17.783304 4632 generic.go:334] "Generic (PLEG): container finished" podID="d8d0f662-d180-4137-8107-e465c5fb0621" containerID="3b5385b113397b9418c59a941d2a27f232c7b0df4b245db65886e55380c57297" exitCode=0 Mar 13 10:26:17 crc kubenswrapper[4632]: I0313 10:26:17.783638 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x8tq8" event={"ID":"d8d0f662-d180-4137-8107-e465c5fb0621","Type":"ContainerDied","Data":"3b5385b113397b9418c59a941d2a27f232c7b0df4b245db65886e55380c57297"} Mar 13 10:26:18 crc kubenswrapper[4632]: I0313 10:26:18.799225 4632 generic.go:334] "Generic (PLEG): container finished" podID="418cb883-abd1-46b4-957f-0a40f3e62297" containerID="3672f721f5cc963fe48f19a0fe26275ae0f1cbd82fd44ed2d6b14dcbb240be1d" exitCode=0 Mar 13 10:26:18 crc kubenswrapper[4632]: I0313 10:26:18.799318 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zdgpw" event={"ID":"418cb883-abd1-46b4-957f-0a40f3e62297","Type":"ContainerDied","Data":"3672f721f5cc963fe48f19a0fe26275ae0f1cbd82fd44ed2d6b14dcbb240be1d"} Mar 13 10:26:19 crc kubenswrapper[4632]: I0313 10:26:19.184424 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 13 10:26:19 crc kubenswrapper[4632]: I0313 10:26:19.184854 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 13 10:26:19 crc kubenswrapper[4632]: I0313 10:26:19.246489 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 13 10:26:19 crc kubenswrapper[4632]: I0313 10:26:19.254692 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 13 10:26:19 crc kubenswrapper[4632]: I0313 10:26:19.284904 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 13 10:26:19 crc kubenswrapper[4632]: I0313 10:26:19.284994 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 13 10:26:19 crc kubenswrapper[4632]: I0313 10:26:19.356911 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 13 10:26:19 crc kubenswrapper[4632]: I0313 10:26:19.388832 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 13 10:26:19 crc kubenswrapper[4632]: I0313 10:26:19.816813 4632 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 13 10:26:19 crc kubenswrapper[4632]: I0313 10:26:19.816851 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 13 10:26:19 crc kubenswrapper[4632]: I0313 10:26:19.816866 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 13 10:26:19 crc kubenswrapper[4632]: I0313 10:26:19.817059 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 13 10:26:20 crc kubenswrapper[4632]: I0313 10:26:20.826423 4632 generic.go:334] "Generic (PLEG): container finished" podID="d722ddd7-e65d-44f7-a02d-18ddf126ccf5" containerID="3ef3ce34ce4d2a0d8d000d31874aca20b10c953ddde87f68a0b04979e69b8bae" exitCode=0 Mar 13 10:26:20 crc kubenswrapper[4632]: I0313 10:26:20.827880 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7fvlk" event={"ID":"d722ddd7-e65d-44f7-a02d-18ddf126ccf5","Type":"ContainerDied","Data":"3ef3ce34ce4d2a0d8d000d31874aca20b10c953ddde87f68a0b04979e69b8bae"} Mar 13 10:26:21 crc kubenswrapper[4632]: I0313 10:26:21.837968 4632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:26:21 crc kubenswrapper[4632]: I0313 10:26:21.837998 4632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:26:21 crc kubenswrapper[4632]: I0313 10:26:21.838136 4632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:26:21 crc kubenswrapper[4632]: I0313 10:26:21.838147 4632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.004554 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.117637 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-combined-ca-bundle\") pod \"d8d0f662-d180-4137-8107-e465c5fb0621\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.117748 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-scripts\") pod \"d8d0f662-d180-4137-8107-e465c5fb0621\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.117786 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-fernet-keys\") pod \"d8d0f662-d180-4137-8107-e465c5fb0621\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.117824 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-credential-keys\") pod \"d8d0f662-d180-4137-8107-e465c5fb0621\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.118017 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-config-data\") pod \"d8d0f662-d180-4137-8107-e465c5fb0621\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.118111 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln474\" (UniqueName: \"kubernetes.io/projected/d8d0f662-d180-4137-8107-e465c5fb0621-kube-api-access-ln474\") pod \"d8d0f662-d180-4137-8107-e465c5fb0621\" (UID: \"d8d0f662-d180-4137-8107-e465c5fb0621\") " Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.129878 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d0f662-d180-4137-8107-e465c5fb0621-kube-api-access-ln474" (OuterVolumeSpecName: "kube-api-access-ln474") pod "d8d0f662-d180-4137-8107-e465c5fb0621" (UID: "d8d0f662-d180-4137-8107-e465c5fb0621"). InnerVolumeSpecName "kube-api-access-ln474". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.139030 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-scripts" (OuterVolumeSpecName: "scripts") pod "d8d0f662-d180-4137-8107-e465c5fb0621" (UID: "d8d0f662-d180-4137-8107-e465c5fb0621"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.174542 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d8d0f662-d180-4137-8107-e465c5fb0621" (UID: "d8d0f662-d180-4137-8107-e465c5fb0621"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.180327 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d8d0f662-d180-4137-8107-e465c5fb0621" (UID: "d8d0f662-d180-4137-8107-e465c5fb0621"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.207248 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8d0f662-d180-4137-8107-e465c5fb0621" (UID: "d8d0f662-d180-4137-8107-e465c5fb0621"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.239085 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln474\" (UniqueName: \"kubernetes.io/projected/d8d0f662-d180-4137-8107-e465c5fb0621-kube-api-access-ln474\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.239118 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.239127 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.239135 4632 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.239146 4632 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.245527 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-config-data" (OuterVolumeSpecName: "config-data") pod "d8d0f662-d180-4137-8107-e465c5fb0621" (UID: "d8d0f662-d180-4137-8107-e465c5fb0621"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.293243 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zdgpw" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.340985 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d0f662-d180-4137-8107-e465c5fb0621-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.393174 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-7fvlk" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.442548 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/418cb883-abd1-46b4-957f-0a40f3e62297-db-sync-config-data\") pod \"418cb883-abd1-46b4-957f-0a40f3e62297\" (UID: \"418cb883-abd1-46b4-957f-0a40f3e62297\") " Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.442990 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgn44\" (UniqueName: \"kubernetes.io/projected/418cb883-abd1-46b4-957f-0a40f3e62297-kube-api-access-zgn44\") pod \"418cb883-abd1-46b4-957f-0a40f3e62297\" (UID: \"418cb883-abd1-46b4-957f-0a40f3e62297\") " Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.443061 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418cb883-abd1-46b4-957f-0a40f3e62297-combined-ca-bundle\") pod \"418cb883-abd1-46b4-957f-0a40f3e62297\" (UID: \"418cb883-abd1-46b4-957f-0a40f3e62297\") " Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.449357 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/418cb883-abd1-46b4-957f-0a40f3e62297-kube-api-access-zgn44" (OuterVolumeSpecName: "kube-api-access-zgn44") pod "418cb883-abd1-46b4-957f-0a40f3e62297" (UID: "418cb883-abd1-46b4-957f-0a40f3e62297"). InnerVolumeSpecName "kube-api-access-zgn44". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.476934 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418cb883-abd1-46b4-957f-0a40f3e62297-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "418cb883-abd1-46b4-957f-0a40f3e62297" (UID: "418cb883-abd1-46b4-957f-0a40f3e62297"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.527241 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418cb883-abd1-46b4-957f-0a40f3e62297-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "418cb883-abd1-46b4-957f-0a40f3e62297" (UID: "418cb883-abd1-46b4-957f-0a40f3e62297"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.544539 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-combined-ca-bundle\") pod \"d722ddd7-e65d-44f7-a02d-18ddf126ccf5\" (UID: \"d722ddd7-e65d-44f7-a02d-18ddf126ccf5\") " Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.544722 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-config-data\") pod \"d722ddd7-e65d-44f7-a02d-18ddf126ccf5\" (UID: \"d722ddd7-e65d-44f7-a02d-18ddf126ccf5\") " Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.544851 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5rzh\" (UniqueName: \"kubernetes.io/projected/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-kube-api-access-n5rzh\") pod \"d722ddd7-e65d-44f7-a02d-18ddf126ccf5\" (UID: \"d722ddd7-e65d-44f7-a02d-18ddf126ccf5\") " Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.545351 4632 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/418cb883-abd1-46b4-957f-0a40f3e62297-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.545375 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgn44\" (UniqueName: \"kubernetes.io/projected/418cb883-abd1-46b4-957f-0a40f3e62297-kube-api-access-zgn44\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.545386 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418cb883-abd1-46b4-957f-0a40f3e62297-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.558236 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-kube-api-access-n5rzh" (OuterVolumeSpecName: "kube-api-access-n5rzh") pod "d722ddd7-e65d-44f7-a02d-18ddf126ccf5" (UID: "d722ddd7-e65d-44f7-a02d-18ddf126ccf5"). InnerVolumeSpecName "kube-api-access-n5rzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.582477 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d722ddd7-e65d-44f7-a02d-18ddf126ccf5" (UID: "d722ddd7-e65d-44f7-a02d-18ddf126ccf5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.654491 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5rzh\" (UniqueName: \"kubernetes.io/projected/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-kube-api-access-n5rzh\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.654809 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.704908 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-config-data" (OuterVolumeSpecName: "config-data") pod "d722ddd7-e65d-44f7-a02d-18ddf126ccf5" (UID: "d722ddd7-e65d-44f7-a02d-18ddf126ccf5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.756917 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d722ddd7-e65d-44f7-a02d-18ddf126ccf5-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.859400 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7dd5c7bdcd-4969b"] Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.866964 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x8tq8" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.866961 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x8tq8" event={"ID":"d8d0f662-d180-4137-8107-e465c5fb0621","Type":"ContainerDied","Data":"58b9dfc8050bba09291b639a8d3d5cc84a9643af0afb63feda9e26973d06a678"} Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.867667 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58b9dfc8050bba09291b639a8d3d5cc84a9643af0afb63feda9e26973d06a678" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.877530 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zdgpw" event={"ID":"418cb883-abd1-46b4-957f-0a40f3e62297","Type":"ContainerDied","Data":"47e1c2b826ae3f1aaa52b7a4210b405df85537a8c7de35fb1657923a6d754982"} Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.877580 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47e1c2b826ae3f1aaa52b7a4210b405df85537a8c7de35fb1657923a6d754982" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.877867 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zdgpw" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.892313 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7fvlk" event={"ID":"d722ddd7-e65d-44f7-a02d-18ddf126ccf5","Type":"ContainerDied","Data":"76d57552a9eced6e283cb6dee93cf8db23032b8fbb20e4a910d615de236f52d7"} Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.892592 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d57552a9eced6e283cb6dee93cf8db23032b8fbb20e4a910d615de236f52d7" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.892707 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-7fvlk" Mar 13 10:26:23 crc kubenswrapper[4632]: I0313 10:26:23.912341 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270ebc10-986f-4473-8a5e-9094de34ae98","Type":"ContainerStarted","Data":"d138d976167695fe9d299247eefcff55845f7ad27e84fc81cc086274294f2e51"} Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.182236 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f664b756d-8fxf4"] Mar 13 10:26:24 crc kubenswrapper[4632]: E0313 10:26:24.190749 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d0f662-d180-4137-8107-e465c5fb0621" containerName="keystone-bootstrap" Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.190792 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d0f662-d180-4137-8107-e465c5fb0621" containerName="keystone-bootstrap" Mar 13 10:26:24 crc kubenswrapper[4632]: E0313 10:26:24.190860 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418cb883-abd1-46b4-957f-0a40f3e62297" containerName="barbican-db-sync" Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.190876 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="418cb883-abd1-46b4-957f-0a40f3e62297" containerName="barbican-db-sync" Mar 13 10:26:24 crc kubenswrapper[4632]: E0313 10:26:24.190895 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d722ddd7-e65d-44f7-a02d-18ddf126ccf5" containerName="heat-db-sync" Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.190904 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d722ddd7-e65d-44f7-a02d-18ddf126ccf5" containerName="heat-db-sync" Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.230273 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="418cb883-abd1-46b4-957f-0a40f3e62297" containerName="barbican-db-sync" Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.230522 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="d722ddd7-e65d-44f7-a02d-18ddf126ccf5" containerName="heat-db-sync" Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.230594 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d0f662-d180-4137-8107-e465c5fb0621" containerName="keystone-bootstrap" Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.231402 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.231907 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f664b756d-8fxf4"]
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.235405 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.235870 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.236220 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.236456 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-llpcf"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.236654 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.244201 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.293690 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-scripts\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.293766 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-combined-ca-bundle\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.293789 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-public-tls-certs\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.293822 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-fernet-keys\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.293842 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrvpt\" (UniqueName: \"kubernetes.io/projected/df64dbf7-8526-4fab-950a-4afefe47ec77-kube-api-access-rrvpt\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.293860 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-internal-tls-certs\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.293901 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-credential-keys\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.293929 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-config-data\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.298276 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-5c86b4b888-l9574" podUID="6d73a499-d334-4a7a-9783-640b98760672" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.298785 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5c86b4b888-l9574" podUID="6d73a499-d334-4a7a-9783-640b98760672" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.307869 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-5c86b4b888-l9574" podUID="6d73a499-d334-4a7a-9783-640b98760672" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.396640 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-config-data\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.396756 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-scripts\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.396790 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-combined-ca-bundle\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.396815 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-public-tls-certs\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.396857 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-fernet-keys\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.396891 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrvpt\" (UniqueName: \"kubernetes.io/projected/df64dbf7-8526-4fab-950a-4afefe47ec77-kube-api-access-rrvpt\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.396916 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-internal-tls-certs\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.397000 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-credential-keys\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.404084 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-combined-ca-bundle\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.412501 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-fernet-keys\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.412633 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-scripts\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.420020 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-credential-keys\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.420353 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-public-tls-certs\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.421003 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-internal-tls-certs\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.424696 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df64dbf7-8526-4fab-950a-4afefe47ec77-config-data\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.427214 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrvpt\" (UniqueName: \"kubernetes.io/projected/df64dbf7-8526-4fab-950a-4afefe47ec77-kube-api-access-rrvpt\") pod \"keystone-f664b756d-8fxf4\" (UID: \"df64dbf7-8526-4fab-950a-4afefe47ec77\") " pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.567000 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.659722 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5fc9b6f5b5-6ps9m"]
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.665190 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.678086 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.678261 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-m45mn"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.678402 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.716642 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fc9b6f5b5-6ps9m"]
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.759403 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51b847ef-ada2-456f-819d-0084fbb17185-config-data-custom\") pod \"barbican-worker-5fc9b6f5b5-6ps9m\" (UID: \"51b847ef-ada2-456f-819d-0084fbb17185\") " pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.759462 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51b847ef-ada2-456f-819d-0084fbb17185-logs\") pod \"barbican-worker-5fc9b6f5b5-6ps9m\" (UID: \"51b847ef-ada2-456f-819d-0084fbb17185\") " pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.759560 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24dz4\" (UniqueName: \"kubernetes.io/projected/51b847ef-ada2-456f-819d-0084fbb17185-kube-api-access-24dz4\") pod \"barbican-worker-5fc9b6f5b5-6ps9m\" (UID: \"51b847ef-ada2-456f-819d-0084fbb17185\") " pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.759592 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51b847ef-ada2-456f-819d-0084fbb17185-config-data\") pod \"barbican-worker-5fc9b6f5b5-6ps9m\" (UID: \"51b847ef-ada2-456f-819d-0084fbb17185\") " pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.759633 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b847ef-ada2-456f-819d-0084fbb17185-combined-ca-bundle\") pod \"barbican-worker-5fc9b6f5b5-6ps9m\" (UID: \"51b847ef-ada2-456f-819d-0084fbb17185\") " pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.759778 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"]
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.785216 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.809401 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.856121 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"]
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.861146 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58332dcc-b1a6-4550-9c8b-8bbb82c04ff0-logs\") pod \"barbican-keystone-listener-6c97cdfb86-z2dqq\" (UID: \"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0\") " pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.861215 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58332dcc-b1a6-4550-9c8b-8bbb82c04ff0-combined-ca-bundle\") pod \"barbican-keystone-listener-6c97cdfb86-z2dqq\" (UID: \"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0\") " pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.861255 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsmbv\" (UniqueName: \"kubernetes.io/projected/58332dcc-b1a6-4550-9c8b-8bbb82c04ff0-kube-api-access-lsmbv\") pod \"barbican-keystone-listener-6c97cdfb86-z2dqq\" (UID: \"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0\") " pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.861298 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24dz4\" (UniqueName: \"kubernetes.io/projected/51b847ef-ada2-456f-819d-0084fbb17185-kube-api-access-24dz4\") pod \"barbican-worker-5fc9b6f5b5-6ps9m\" (UID: \"51b847ef-ada2-456f-819d-0084fbb17185\") " pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.861326 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51b847ef-ada2-456f-819d-0084fbb17185-config-data\") pod \"barbican-worker-5fc9b6f5b5-6ps9m\" (UID: \"51b847ef-ada2-456f-819d-0084fbb17185\") " pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.861350 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58332dcc-b1a6-4550-9c8b-8bbb82c04ff0-config-data\") pod \"barbican-keystone-listener-6c97cdfb86-z2dqq\" (UID: \"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0\") " pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.861399 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b847ef-ada2-456f-819d-0084fbb17185-combined-ca-bundle\") pod \"barbican-worker-5fc9b6f5b5-6ps9m\" (UID: \"51b847ef-ada2-456f-819d-0084fbb17185\") " pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.861462 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58332dcc-b1a6-4550-9c8b-8bbb82c04ff0-config-data-custom\") pod \"barbican-keystone-listener-6c97cdfb86-z2dqq\" (UID: \"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0\") " pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.861532 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51b847ef-ada2-456f-819d-0084fbb17185-config-data-custom\") pod \"barbican-worker-5fc9b6f5b5-6ps9m\" (UID: \"51b847ef-ada2-456f-819d-0084fbb17185\") " pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.861579 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51b847ef-ada2-456f-819d-0084fbb17185-logs\") pod \"barbican-worker-5fc9b6f5b5-6ps9m\" (UID: \"51b847ef-ada2-456f-819d-0084fbb17185\") " pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.862127 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51b847ef-ada2-456f-819d-0084fbb17185-logs\") pod \"barbican-worker-5fc9b6f5b5-6ps9m\" (UID: \"51b847ef-ada2-456f-819d-0084fbb17185\") " pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.889173 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cc676b85c-q67wf"]
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.890924 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.949378 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cc676b85c-q67wf"]
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.953721 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b847ef-ada2-456f-819d-0084fbb17185-combined-ca-bundle\") pod \"barbican-worker-5fc9b6f5b5-6ps9m\" (UID: \"51b847ef-ada2-456f-819d-0084fbb17185\") " pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.956439 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51b847ef-ada2-456f-819d-0084fbb17185-config-data\") pod \"barbican-worker-5fc9b6f5b5-6ps9m\" (UID: \"51b847ef-ada2-456f-819d-0084fbb17185\") " pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.963615 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-dns-swift-storage-0\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.963806 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58332dcc-b1a6-4550-9c8b-8bbb82c04ff0-logs\") pod \"barbican-keystone-listener-6c97cdfb86-z2dqq\" (UID: \"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0\") " pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.963894 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58332dcc-b1a6-4550-9c8b-8bbb82c04ff0-combined-ca-bundle\") pod \"barbican-keystone-listener-6c97cdfb86-z2dqq\" (UID: \"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0\") " pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.963991 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq8dx\" (UniqueName: \"kubernetes.io/projected/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-kube-api-access-sq8dx\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.964065 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-dns-svc\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.964159 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-ovsdbserver-sb\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.964237 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsmbv\" (UniqueName: \"kubernetes.io/projected/58332dcc-b1a6-4550-9c8b-8bbb82c04ff0-kube-api-access-lsmbv\") pod \"barbican-keystone-listener-6c97cdfb86-z2dqq\" (UID: \"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0\") " pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.964335 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58332dcc-b1a6-4550-9c8b-8bbb82c04ff0-config-data\") pod \"barbican-keystone-listener-6c97cdfb86-z2dqq\" (UID: \"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0\") " pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.964424 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-config\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.964497 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-ovsdbserver-nb\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.964586 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58332dcc-b1a6-4550-9c8b-8bbb82c04ff0-config-data-custom\") pod \"barbican-keystone-listener-6c97cdfb86-z2dqq\" (UID: \"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0\") " pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.965616 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51b847ef-ada2-456f-819d-0084fbb17185-config-data-custom\") pod \"barbican-worker-5fc9b6f5b5-6ps9m\" (UID: \"51b847ef-ada2-456f-819d-0084fbb17185\") " pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.967272 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24dz4\" (UniqueName: \"kubernetes.io/projected/51b847ef-ada2-456f-819d-0084fbb17185-kube-api-access-24dz4\") pod \"barbican-worker-5fc9b6f5b5-6ps9m\" (UID: \"51b847ef-ada2-456f-819d-0084fbb17185\") " pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.968255 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58332dcc-b1a6-4550-9c8b-8bbb82c04ff0-logs\") pod \"barbican-keystone-listener-6c97cdfb86-z2dqq\" (UID: \"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0\") " pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.977418 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7dd5c7bdcd-4969b" event={"ID":"5abe7bf3-d44d-4ee5-b568-2d497868f1e5","Type":"ContainerStarted","Data":"0b15584f3607b654abe16b00ac290d1bc5ee6f763bd08234d8697e7f5b5b20bb"}
Mar 13 10:26:24 crc kubenswrapper[4632]: I0313 10:26:24.981307 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7dd5c7bdcd-4969b" event={"ID":"5abe7bf3-d44d-4ee5-b568-2d497868f1e5","Type":"ContainerStarted","Data":"604b160eb4cd534ac8def868fbcdab1d748e8bc2952c85fe7198dc4a2b05d7f7"}
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.000684 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58332dcc-b1a6-4550-9c8b-8bbb82c04ff0-config-data-custom\") pod \"barbican-keystone-listener-6c97cdfb86-z2dqq\" (UID: \"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0\") " pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.013044 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58332dcc-b1a6-4550-9c8b-8bbb82c04ff0-config-data\") pod \"barbican-keystone-listener-6c97cdfb86-z2dqq\" (UID: \"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0\") " pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.020551 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsmbv\" (UniqueName: \"kubernetes.io/projected/58332dcc-b1a6-4550-9c8b-8bbb82c04ff0-kube-api-access-lsmbv\") pod \"barbican-keystone-listener-6c97cdfb86-z2dqq\" (UID: \"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0\") " pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.020601 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58332dcc-b1a6-4550-9c8b-8bbb82c04ff0-combined-ca-bundle\") pod \"barbican-keystone-listener-6c97cdfb86-z2dqq\" (UID: \"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0\") " pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.059575 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.068198 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq8dx\" (UniqueName: \"kubernetes.io/projected/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-kube-api-access-sq8dx\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.076466 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-dns-svc\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.076517 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-ovsdbserver-sb\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.076629 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-config\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.076655 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-ovsdbserver-nb\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.076851 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-dns-swift-storage-0\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.078817 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-dns-svc\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.079907 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-ovsdbserver-sb\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.081244 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-config\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.081912 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-ovsdbserver-nb\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.083864 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-dns-swift-storage-0\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.156839 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq8dx\" (UniqueName: \"kubernetes.io/projected/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-kube-api-access-sq8dx\") pod \"dnsmasq-dns-7cc676b85c-q67wf\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.175184 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.211363 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-548c8b4b94-2dglr"]
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.231137 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-548c8b4b94-2dglr"]
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.231272 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.239274 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.272015 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cc676b85c-q67wf"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.394871 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.411139 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-combined-ca-bundle\") pod \"barbican-api-548c8b4b94-2dglr\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.411243 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-config-data-custom\") pod \"barbican-api-548c8b4b94-2dglr\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.411317 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fa310f1-40ef-4e74-9647-d3ea87858f11-logs\") pod \"barbican-api-548c8b4b94-2dglr\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.411399 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-config-data\") pod \"barbican-api-548c8b4b94-2dglr\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.411434 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pmkj\" (UniqueName: \"kubernetes.io/projected/6fa310f1-40ef-4e74-9647-d3ea87858f11-kube-api-access-8pmkj\") pod \"barbican-api-548c8b4b94-2dglr\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.513911 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-config-data\") pod \"barbican-api-548c8b4b94-2dglr\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.513989 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pmkj\" (UniqueName: \"kubernetes.io/projected/6fa310f1-40ef-4e74-9647-d3ea87858f11-kube-api-access-8pmkj\") pod \"barbican-api-548c8b4b94-2dglr\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.514042 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-combined-ca-bundle\") pod \"barbican-api-548c8b4b94-2dglr\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.514081 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-config-data-custom\") pod \"barbican-api-548c8b4b94-2dglr\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.514150 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fa310f1-40ef-4e74-9647-d3ea87858f11-logs\") pod \"barbican-api-548c8b4b94-2dglr\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.514665 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fa310f1-40ef-4e74-9647-d3ea87858f11-logs\") pod \"barbican-api-548c8b4b94-2dglr\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.544875 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-combined-ca-bundle\") pod \"barbican-api-548c8b4b94-2dglr\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.549072 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-config-data\") pod \"barbican-api-548c8b4b94-2dglr\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.565218 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pmkj\" (UniqueName: \"kubernetes.io/projected/6fa310f1-40ef-4e74-9647-d3ea87858f11-kube-api-access-8pmkj\") pod \"barbican-api-548c8b4b94-2dglr\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.586479 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-config-data-custom\") pod \"barbican-api-548c8b4b94-2dglr\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.587100 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f664b756d-8fxf4"]
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.600234 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:25 crc kubenswrapper[4632]: I0313 10:26:25.860815 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-689764498d-rg7vt" podUID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused"
Mar 13 10:26:26 crc kubenswrapper[4632]: I0313 10:26:26.006692 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fc9b6f5b5-6ps9m"]
Mar 13 10:26:26 crc kubenswrapper[4632]: I0313 10:26:26.068780 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7dd5c7bdcd-4969b"
Mar 13 10:26:26 crc kubenswrapper[4632]: I0313 10:26:26.069141 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7dd5c7bdcd-4969b"
Mar 13 10:26:26 crc kubenswrapper[4632]: I0313 10:26:26.069156 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7dd5c7bdcd-4969b" event={"ID":"5abe7bf3-d44d-4ee5-b568-2d497868f1e5","Type":"ContainerStarted","Data":"2cfe7ebd70fe3427d7ef352e87ea88bca1736af36e0c260541ced9066c436503"}
Mar 13 10:26:26 crc kubenswrapper[4632]: I0313 10:26:26.069173 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f664b756d-8fxf4" event={"ID":"df64dbf7-8526-4fab-950a-4afefe47ec77","Type":"ContainerStarted","Data":"80faef951510ea08c374210ef40d4f2ccc3e8cf3ed32946c01da45487a2e4258"}
Mar 13 10:26:26 crc kubenswrapper[4632]: I0313 10:26:26.142259 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7dd5c7bdcd-4969b" podStartSLOduration=12.142236459 podStartE2EDuration="12.142236459s" podCreationTimestamp="2026-03-13 10:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:26:26.092273858 +0000 UTC m=+1360.114804011" watchObservedRunningTime="2026-03-13 10:26:26.142236459 +0000 UTC m=+1360.164766592"
Mar 13 10:26:26 crc kubenswrapper[4632]: I0313 10:26:26.440894 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6c97cdfb86-z2dqq"]
Mar 13 10:26:26 crc kubenswrapper[4632]: I0313 10:26:26.455879 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cc676b85c-q67wf"]
Mar 13 10:26:26 crc kubenswrapper[4632]: I0313 10:26:26.680721 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-548c8b4b94-2dglr"]
Mar 13 10:26:27 crc kubenswrapper[4632]: I0313 10:26:27.152061 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cc676b85c-q67wf" event={"ID":"6cbca281-a753-4810-ab5f-a2d5a5e9c41d","Type":"ContainerStarted","Data":"c35af663b9fb4942a62d07ad5236c27fcd9454c66f1967196835246c5924112a"}
Mar 13 10:26:27 crc kubenswrapper[4632]: I0313 10:26:27.154216 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq" event={"ID":"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0","Type":"ContainerStarted","Data":"715c026f3a82423d483c078d594d783c0987f11d6bee52b752444c42e096d445"}
Mar 13 10:26:27 crc kubenswrapper[4632]: I0313 10:26:27.197440 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f664b756d-8fxf4" event={"ID":"df64dbf7-8526-4fab-950a-4afefe47ec77","Type":"ContainerStarted","Data":"e1bb600f766cd508b5b0989591cf2811ce2ae4a392a60f30c9317e38a3c5276e"}
Mar 13 10:26:27 crc kubenswrapper[4632]: I0313 10:26:27.197522 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-f664b756d-8fxf4"
Mar 13 10:26:27 crc kubenswrapper[4632]: I0313 10:26:27.225001 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-548c8b4b94-2dglr" event={"ID":"6fa310f1-40ef-4e74-9647-d3ea87858f11","Type":"ContainerStarted","Data":"4545ac42523c98f674d28d5d0acc10645d2b1e7d8486b7068d13c265711710a4"}
Mar 13 10:26:27 crc kubenswrapper[4632]: I0313 10:26:27.230074 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m" event={"ID":"51b847ef-ada2-456f-819d-0084fbb17185","Type":"ContainerStarted","Data":"8bf20e8f4a839fa6fe33923720402b2b880de8a7752f1d4c0b7f5335f0df2afa"}
Mar 13 10:26:27 crc kubenswrapper[4632]: I0313 10:26:27.236552 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-f664b756d-8fxf4" podStartSLOduration=3.236531996 podStartE2EDuration="3.236531996s" podCreationTimestamp="2026-03-13 10:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:26:27.228491026 +0000 UTC m=+1361.251021159" watchObservedRunningTime="2026-03-13 10:26:27.236531996 +0000 UTC m=+1361.259062139"
Mar 13 10:26:27 crc kubenswrapper[4632]: I0313 10:26:27.356665 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-64bdffbb5c-mpfvf" podUID="6c867fc1-05ed-46c3-99dc-71ef8a09dad3" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Mar 13 10:26:27 crc kubenswrapper[4632]: I0313 10:26:27.356701 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-64bdffbb5c-mpfvf" podUID="6c867fc1-05ed-46c3-99dc-71ef8a09dad3" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Mar 13 10:26:27 crc kubenswrapper[4632]: I0313 10:26:27.370264 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-64bdffbb5c-mpfvf" podUID="6c867fc1-05ed-46c3-99dc-71ef8a09dad3" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Mar 13 10:26:28 crc kubenswrapper[4632]: I0313 10:26:28.269147 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-548c8b4b94-2dglr" event={"ID":"6fa310f1-40ef-4e74-9647-d3ea87858f11","Type":"ContainerStarted","Data":"309fa94df210d44c275999bad3e9b781bb4f9646e038b1a9463656385d210cf3"}
Mar 13 10:26:28 crc kubenswrapper[4632]: I0313 10:26:28.269466 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-548c8b4b94-2dglr" event={"ID":"6fa310f1-40ef-4e74-9647-d3ea87858f11","Type":"ContainerStarted","Data":"67882325af120e97844e1aef36a358fdd186b89ba1f3def214e49a353ec793aa"}
Mar 13 10:26:28 crc kubenswrapper[4632]: I0313 10:26:28.269484 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:28 crc kubenswrapper[4632]: I0313 10:26:28.269497 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-548c8b4b94-2dglr"
Mar 13 10:26:28 crc kubenswrapper[4632]: I0313 10:26:28.289675 4632 generic.go:334] "Generic (PLEG): container finished" podID="6cbca281-a753-4810-ab5f-a2d5a5e9c41d" containerID="de515c666eef870de947ba353b522abb4ca9136dc9f866c10cc7d68d392957ce" exitCode=0
Mar 13 10:26:28 crc kubenswrapper[4632]: I0313 10:26:28.290477 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cc676b85c-q67wf" event={"ID":"6cbca281-a753-4810-ab5f-a2d5a5e9c41d","Type":"ContainerDied","Data":"de515c666eef870de947ba353b522abb4ca9136dc9f866c10cc7d68d392957ce"}
Mar 13 10:26:28 crc kubenswrapper[4632]: I0313 10:26:28.293928 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-548c8b4b94-2dglr" podStartSLOduration=3.29391573 podStartE2EDuration="3.29391573s" podCreationTimestamp="2026-03-13 10:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:26:28.291818391 +0000 UTC m=+1362.314348524" watchObservedRunningTime="2026-03-13 10:26:28.29391573 +0000 UTC m=+1362.316445863"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.312368 4632 generic.go:334] "Generic (PLEG): container finished" podID="95fe9a38-2b32-411e-9121-ad4cc32f159e" containerID="d9db78843b825b24c0eab6345b91a7657d2b3f0bb64d65b5dcc125b1edeeb022" exitCode=137
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.313234 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67d6b4b8f7-nrxn8" event={"ID":"95fe9a38-2b32-411e-9121-ad4cc32f159e","Type":"ContainerDied","Data":"d9db78843b825b24c0eab6345b91a7657d2b3f0bb64d65b5dcc125b1edeeb022"}
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.609749 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-756c4b86c6-rm274"]
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.611581 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.627542 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.647744 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-756c4b86c6-rm274"]
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.652870 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.703983 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc1c989-5fa1-46dc-818e-8d609c069e34-config-data\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.704277 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltgf8\" (UniqueName: \"kubernetes.io/projected/dbc1c989-5fa1-46dc-818e-8d609c069e34-kube-api-access-ltgf8\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.704326 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbc1c989-5fa1-46dc-818e-8d609c069e34-config-data-custom\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.704352 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc1c989-5fa1-46dc-818e-8d609c069e34-combined-ca-bundle\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.704393 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbc1c989-5fa1-46dc-818e-8d609c069e34-public-tls-certs\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.704432 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbc1c989-5fa1-46dc-818e-8d609c069e34-internal-tls-certs\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.704454 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbc1c989-5fa1-46dc-818e-8d609c069e34-logs\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.807052 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltgf8\" (UniqueName: \"kubernetes.io/projected/dbc1c989-5fa1-46dc-818e-8d609c069e34-kube-api-access-ltgf8\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.807139 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbc1c989-5fa1-46dc-818e-8d609c069e34-config-data-custom\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.807169 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc1c989-5fa1-46dc-818e-8d609c069e34-combined-ca-bundle\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.807228 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbc1c989-5fa1-46dc-818e-8d609c069e34-public-tls-certs\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.807270 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbc1c989-5fa1-46dc-818e-8d609c069e34-internal-tls-certs\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.807294 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbc1c989-5fa1-46dc-818e-8d609c069e34-logs\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.807406 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc1c989-5fa1-46dc-818e-8d609c069e34-config-data\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.809327 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbc1c989-5fa1-46dc-818e-8d609c069e34-logs\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.820074 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc1c989-5fa1-46dc-818e-8d609c069e34-config-data\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.823698 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc1c989-5fa1-46dc-818e-8d609c069e34-combined-ca-bundle\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.826420 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbc1c989-5fa1-46dc-818e-8d609c069e34-internal-tls-certs\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.831487 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbc1c989-5fa1-46dc-818e-8d609c069e34-config-data-custom\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.837718 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbc1c989-5fa1-46dc-818e-8d609c069e34-public-tls-certs\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.841341 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltgf8\" (UniqueName: \"kubernetes.io/projected/dbc1c989-5fa1-46dc-818e-8d609c069e34-kube-api-access-ltgf8\") pod \"barbican-api-756c4b86c6-rm274\" (UID: \"dbc1c989-5fa1-46dc-818e-8d609c069e34\") " pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:29 crc kubenswrapper[4632]: I0313 10:26:29.932209 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-756c4b86c6-rm274"
Mar 13 10:26:30 crc kubenswrapper[4632]: I0313 10:26:30.341416 4632 generic.go:334] "Generic (PLEG): container finished" podID="95fe9a38-2b32-411e-9121-ad4cc32f159e" containerID="f8238ac2122bdce07c274a4f41c5a0d859a4162d57594e52444f5d2a425d1e7b" exitCode=137
Mar 13 10:26:30 crc kubenswrapper[4632]: I0313 10:26:30.341499 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67d6b4b8f7-nrxn8" event={"ID":"95fe9a38-2b32-411e-9121-ad4cc32f159e","Type":"ContainerDied","Data":"f8238ac2122bdce07c274a4f41c5a0d859a4162d57594e52444f5d2a425d1e7b"}
Mar 13 10:26:30 crc kubenswrapper[4632]: I0313 10:26:30.556585 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:30 crc kubenswrapper[4632]: I0313 10:26:30.556714 4632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 10:26:30 crc kubenswrapper[4632]: I0313 10:26:30.581672 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 13 10:26:31 crc kubenswrapper[4632]: I0313 10:26:31.017432 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 13 10:26:31 crc kubenswrapper[4632]: I0313 10:26:31.017793 4632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 10:26:31 crc kubenswrapper[4632]: I0313 10:26:31.116573 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 13 10:26:31 crc kubenswrapper[4632]: I0313 10:26:31.392907 4632 generic.go:334] "Generic (PLEG): container finished" podID="8f916c05-f172-42b6-9b13-0c8d2058bfb1" containerID="6d5ac5d7a6aab5517e4300c2e14808710d4f8cfa4977c9841f6552b262144012" exitCode=0
Mar 13 10:26:31 crc kubenswrapper[4632]: I0313 10:26:31.394548 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kq8lc" event={"ID":"8f916c05-f172-42b6-9b13-0c8d2058bfb1","Type":"ContainerDied","Data":"6d5ac5d7a6aab5517e4300c2e14808710d4f8cfa4977c9841f6552b262144012"}
Mar 13 10:26:32 crc kubenswrapper[4632]: I0313 10:26:32.546408 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67d6b4b8f7-nrxn8"
Mar 13 10:26:32 crc kubenswrapper[4632]: I0313 10:26:32.715867 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxkl9\" (UniqueName: \"kubernetes.io/projected/95fe9a38-2b32-411e-9121-ad4cc32f159e-kube-api-access-zxkl9\") pod \"95fe9a38-2b32-411e-9121-ad4cc32f159e\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") "
Mar 13 10:26:32 crc kubenswrapper[4632]: I0313 10:26:32.716474 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95fe9a38-2b32-411e-9121-ad4cc32f159e-scripts\") pod \"95fe9a38-2b32-411e-9121-ad4cc32f159e\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") "
Mar 13 10:26:32 crc kubenswrapper[4632]: I0313 10:26:32.716565 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/95fe9a38-2b32-411e-9121-ad4cc32f159e-horizon-secret-key\") pod \"95fe9a38-2b32-411e-9121-ad4cc32f159e\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") "
Mar 13 10:26:32 crc kubenswrapper[4632]: I0313 10:26:32.716686 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95fe9a38-2b32-411e-9121-ad4cc32f159e-config-data\") pod \"95fe9a38-2b32-411e-9121-ad4cc32f159e\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") "
Mar 13 10:26:32 crc kubenswrapper[4632]: I0313 10:26:32.716736 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95fe9a38-2b32-411e-9121-ad4cc32f159e-logs\") pod \"95fe9a38-2b32-411e-9121-ad4cc32f159e\" (UID: \"95fe9a38-2b32-411e-9121-ad4cc32f159e\") "
Mar 13 10:26:32 crc kubenswrapper[4632]: I0313 10:26:32.724926 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95fe9a38-2b32-411e-9121-ad4cc32f159e-logs" (OuterVolumeSpecName: "logs") pod "95fe9a38-2b32-411e-9121-ad4cc32f159e" (UID: "95fe9a38-2b32-411e-9121-ad4cc32f159e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:26:32 crc kubenswrapper[4632]: I0313 10:26:32.736247 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fe9a38-2b32-411e-9121-ad4cc32f159e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "95fe9a38-2b32-411e-9121-ad4cc32f159e" (UID: "95fe9a38-2b32-411e-9121-ad4cc32f159e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:26:32 crc kubenswrapper[4632]: I0313 10:26:32.750321 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95fe9a38-2b32-411e-9121-ad4cc32f159e-kube-api-access-zxkl9" (OuterVolumeSpecName: "kube-api-access-zxkl9") pod "95fe9a38-2b32-411e-9121-ad4cc32f159e" (UID: "95fe9a38-2b32-411e-9121-ad4cc32f159e"). InnerVolumeSpecName "kube-api-access-zxkl9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:26:32 crc kubenswrapper[4632]: I0313 10:26:32.792602 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fe9a38-2b32-411e-9121-ad4cc32f159e-config-data" (OuterVolumeSpecName: "config-data") pod "95fe9a38-2b32-411e-9121-ad4cc32f159e" (UID: "95fe9a38-2b32-411e-9121-ad4cc32f159e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:26:32 crc kubenswrapper[4632]: I0313 10:26:32.814574 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fe9a38-2b32-411e-9121-ad4cc32f159e-scripts" (OuterVolumeSpecName: "scripts") pod "95fe9a38-2b32-411e-9121-ad4cc32f159e" (UID: "95fe9a38-2b32-411e-9121-ad4cc32f159e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:26:32 crc kubenswrapper[4632]: I0313 10:26:32.819621 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxkl9\" (UniqueName: \"kubernetes.io/projected/95fe9a38-2b32-411e-9121-ad4cc32f159e-kube-api-access-zxkl9\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:32 crc kubenswrapper[4632]: I0313 10:26:32.819791 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95fe9a38-2b32-411e-9121-ad4cc32f159e-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:32 crc kubenswrapper[4632]: I0313 10:26:32.819901 4632 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/95fe9a38-2b32-411e-9121-ad4cc32f159e-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:32 crc kubenswrapper[4632]: I0313 10:26:32.820132 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95fe9a38-2b32-411e-9121-ad4cc32f159e-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:32 crc kubenswrapper[4632]: I0313 10:26:32.820237 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95fe9a38-2b32-411e-9121-ad4cc32f159e-logs\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.042454 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kq8lc"
Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.125297 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-scripts\") pod \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") "
Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.125459 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-db-sync-config-data\") pod \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") "
Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.125586 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m5bn\" (UniqueName: \"kubernetes.io/projected/8f916c05-f172-42b6-9b13-0c8d2058bfb1-kube-api-access-5m5bn\") pod \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") "
Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.125717 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-combined-ca-bundle\") pod \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") "
Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.125755 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-config-data\") pod \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") "
Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.126867 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f916c05-f172-42b6-9b13-0c8d2058bfb1-etc-machine-id\") pod \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\" (UID: \"8f916c05-f172-42b6-9b13-0c8d2058bfb1\") "
Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.127043 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f916c05-f172-42b6-9b13-0c8d2058bfb1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8f916c05-f172-42b6-9b13-0c8d2058bfb1" (UID: "8f916c05-f172-42b6-9b13-0c8d2058bfb1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.129121 4632 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f916c05-f172-42b6-9b13-0c8d2058bfb1-etc-machine-id\") on node \"crc\" DevicePath \"\""
Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.168853 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f916c05-f172-42b6-9b13-0c8d2058bfb1-kube-api-access-5m5bn" (OuterVolumeSpecName: "kube-api-access-5m5bn") pod "8f916c05-f172-42b6-9b13-0c8d2058bfb1" (UID: "8f916c05-f172-42b6-9b13-0c8d2058bfb1"). InnerVolumeSpecName "kube-api-access-5m5bn".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.172061 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-scripts" (OuterVolumeSpecName: "scripts") pod "8f916c05-f172-42b6-9b13-0c8d2058bfb1" (UID: "8f916c05-f172-42b6-9b13-0c8d2058bfb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.172570 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8f916c05-f172-42b6-9b13-0c8d2058bfb1" (UID: "8f916c05-f172-42b6-9b13-0c8d2058bfb1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.206005 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f916c05-f172-42b6-9b13-0c8d2058bfb1" (UID: "8f916c05-f172-42b6-9b13-0c8d2058bfb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.230645 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.230688 4632 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.230706 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m5bn\" (UniqueName: \"kubernetes.io/projected/8f916c05-f172-42b6-9b13-0c8d2058bfb1-kube-api-access-5m5bn\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.230718 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.298273 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-756c4b86c6-rm274"] Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.315053 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-config-data" (OuterVolumeSpecName: "config-data") pod "8f916c05-f172-42b6-9b13-0c8d2058bfb1" (UID: "8f916c05-f172-42b6-9b13-0c8d2058bfb1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:33 crc kubenswrapper[4632]: W0313 10:26:33.318917 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbc1c989_5fa1_46dc_818e_8d609c069e34.slice/crio-eda6b117169809aad9d8b5baaee672caebe4bfc7c3a6fbea05ec0231894d4fd5 WatchSource:0}: Error finding container eda6b117169809aad9d8b5baaee672caebe4bfc7c3a6fbea05ec0231894d4fd5: Status 404 returned error can't find the container with id eda6b117169809aad9d8b5baaee672caebe4bfc7c3a6fbea05ec0231894d4fd5 Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.332294 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f916c05-f172-42b6-9b13-0c8d2058bfb1-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.424149 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kq8lc" event={"ID":"8f916c05-f172-42b6-9b13-0c8d2058bfb1","Type":"ContainerDied","Data":"8ac8055b0e5fc8cb1135e4ae559dd9794358a9f9dfb68fd20402b62c57115f00"} Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.424217 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ac8055b0e5fc8cb1135e4ae559dd9794358a9f9dfb68fd20402b62c57115f00" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.424407 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kq8lc" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.426799 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq" event={"ID":"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0","Type":"ContainerStarted","Data":"e64692995de9a1a74114696f6777f70f0296d603809ccbd570a856f3f597856f"} Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.426837 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq" event={"ID":"58332dcc-b1a6-4550-9c8b-8bbb82c04ff0","Type":"ContainerStarted","Data":"4f9bc45bc10e2684aa506712975284aebd47a9a1ff8843e9364f36ca6efa6a60"} Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.434277 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-756c4b86c6-rm274" event={"ID":"dbc1c989-5fa1-46dc-818e-8d609c069e34","Type":"ContainerStarted","Data":"eda6b117169809aad9d8b5baaee672caebe4bfc7c3a6fbea05ec0231894d4fd5"} Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.440926 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m" event={"ID":"51b847ef-ada2-456f-819d-0084fbb17185","Type":"ContainerStarted","Data":"bd7c8c722be2532e1ae19202c0e58d8318178f85688079a6dffd9812ff5db7bc"} Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.463176 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6c97cdfb86-z2dqq" podStartSLOduration=3.39914094 podStartE2EDuration="9.46315514s" podCreationTimestamp="2026-03-13 10:26:24 +0000 UTC" firstStartedPulling="2026-03-13 10:26:26.479731416 +0000 UTC m=+1360.502261549" lastFinishedPulling="2026-03-13 10:26:32.543745616 +0000 UTC m=+1366.566275749" observedRunningTime="2026-03-13 10:26:33.450225064 +0000 UTC m=+1367.472755207" watchObservedRunningTime="2026-03-13 10:26:33.46315514 +0000 UTC m=+1367.485685273" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.475268 
4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67d6b4b8f7-nrxn8" event={"ID":"95fe9a38-2b32-411e-9121-ad4cc32f159e","Type":"ContainerDied","Data":"23d0d6f6bc6174b2a86ec905a9477b2974881387bec66374cfa55dca37114aec"} Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.475330 4632 scope.go:117] "RemoveContainer" containerID="f8238ac2122bdce07c274a4f41c5a0d859a4162d57594e52444f5d2a425d1e7b" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.475435 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67d6b4b8f7-nrxn8" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.484203 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cc676b85c-q67wf" event={"ID":"6cbca281-a753-4810-ab5f-a2d5a5e9c41d","Type":"ContainerStarted","Data":"d4574f8c63575790d1486fa1f8221ce400d19901d3317c976e702416503cba64"} Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.506352 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cc676b85c-q67wf" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.584705 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cc676b85c-q67wf" podStartSLOduration=9.584681912 podStartE2EDuration="9.584681912s" podCreationTimestamp="2026-03-13 10:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:26:33.552551173 +0000 UTC m=+1367.575081296" watchObservedRunningTime="2026-03-13 10:26:33.584681912 +0000 UTC m=+1367.607212045" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.648164 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67d6b4b8f7-nrxn8"] Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.659737 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-67d6b4b8f7-nrxn8"] Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.822681 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 10:26:33 crc kubenswrapper[4632]: E0313 10:26:33.823103 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fe9a38-2b32-411e-9121-ad4cc32f159e" containerName="horizon" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.823121 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fe9a38-2b32-411e-9121-ad4cc32f159e" containerName="horizon" Mar 13 10:26:33 crc kubenswrapper[4632]: E0313 10:26:33.823132 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fe9a38-2b32-411e-9121-ad4cc32f159e" containerName="horizon-log" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.823138 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fe9a38-2b32-411e-9121-ad4cc32f159e" containerName="horizon-log" Mar 13 10:26:33 crc kubenswrapper[4632]: E0313 10:26:33.823150 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f916c05-f172-42b6-9b13-0c8d2058bfb1" containerName="cinder-db-sync" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.823157 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f916c05-f172-42b6-9b13-0c8d2058bfb1" containerName="cinder-db-sync" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.823323 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="95fe9a38-2b32-411e-9121-ad4cc32f159e" containerName="horizon" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 
10:26:33.823344 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="95fe9a38-2b32-411e-9121-ad4cc32f159e" containerName="horizon-log" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.823354 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f916c05-f172-42b6-9b13-0c8d2058bfb1" containerName="cinder-db-sync" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.824289 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.849839 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.849975 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.850254 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-j7c52" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.850442 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.853562 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.943843 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cc676b85c-q67wf"] Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.954513 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.954773 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-scripts\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.954857 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.954976 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.955116 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkdch\" (UniqueName: \"kubernetes.io/projected/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-kube-api-access-lkdch\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:33 crc kubenswrapper[4632]: I0313 10:26:33.955268 4632 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-config-data\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:33.999064 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f6d96bd7f-txx79"] Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.010035 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.056993 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-config-data\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.057072 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.057100 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-scripts\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.057132 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.057175 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.057220 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkdch\" (UniqueName: \"kubernetes.io/projected/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-kube-api-access-lkdch\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.057561 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.070203 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-config-data\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " 
pod="openstack/cinder-scheduler-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.070534 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.070931 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-scripts\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.071793 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95fe9a38-2b32-411e-9121-ad4cc32f159e" path="/var/lib/kubelet/pods/95fe9a38-2b32-411e-9121-ad4cc32f159e/volumes" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.078629 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.090058 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6d96bd7f-txx79"] Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.091225 4632 scope.go:117] "RemoveContainer" containerID="d9db78843b825b24c0eab6345b91a7657d2b3f0bb64d65b5dcc125b1edeeb022" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.120758 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkdch\" (UniqueName: \"kubernetes.io/projected/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-kube-api-access-lkdch\") pod \"cinder-scheduler-0\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.162323 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-dns-swift-storage-0\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.162409 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-ovsdbserver-sb\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.162441 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-dns-svc\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.162484 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7sjz\" (UniqueName: 
\"kubernetes.io/projected/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-kube-api-access-g7sjz\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.162518 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-config\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.162605 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-ovsdbserver-nb\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.172895 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.264498 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-ovsdbserver-nb\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.264561 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-dns-swift-storage-0\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.264599 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-ovsdbserver-sb\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.264620 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-dns-svc\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.264648 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7sjz\" (UniqueName: \"kubernetes.io/projected/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-kube-api-access-g7sjz\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.264671 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-config\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 
10:26:34.265652 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-config\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.266209 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-ovsdbserver-nb\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.266919 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-dns-swift-storage-0\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.267480 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-ovsdbserver-sb\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.267992 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-dns-svc\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.280029 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.281421 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.300158 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.306357 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7sjz\" (UniqueName: \"kubernetes.io/projected/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-kube-api-access-g7sjz\") pod \"dnsmasq-dns-f6d96bd7f-txx79\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.310213 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.341441 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.367608 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.367766 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-scripts\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.367846 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58677e2e-9fc6-4e50-b342-e912afa8d969-etc-machine-id\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.367875 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58677e2e-9fc6-4e50-b342-e912afa8d969-logs\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.368082 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-config-data\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.368161 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-config-data-custom\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.368213 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd7tw\" (UniqueName: \"kubernetes.io/projected/58677e2e-9fc6-4e50-b342-e912afa8d969-kube-api-access-vd7tw\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.472301 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-scripts\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.472372 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58677e2e-9fc6-4e50-b342-e912afa8d969-etc-machine-id\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.472404 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/58677e2e-9fc6-4e50-b342-e912afa8d969-logs\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.472460 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-config-data\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.472507 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-config-data-custom\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.472542 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd7tw\" (UniqueName: \"kubernetes.io/projected/58677e2e-9fc6-4e50-b342-e912afa8d969-kube-api-access-vd7tw\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.472578 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.473393 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58677e2e-9fc6-4e50-b342-e912afa8d969-logs\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.481338 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-config-data\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.481421 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58677e2e-9fc6-4e50-b342-e912afa8d969-etc-machine-id\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.489677 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-config-data-custom\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.490029 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-scripts\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.505688 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.521684 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd7tw\" (UniqueName: \"kubernetes.io/projected/58677e2e-9fc6-4e50-b342-e912afa8d969-kube-api-access-vd7tw\") pod \"cinder-api-0\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.529625 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-756c4b86c6-rm274" event={"ID":"dbc1c989-5fa1-46dc-818e-8d609c069e34","Type":"ContainerStarted","Data":"0d4dd1918d120f5fc3a0ac6484d4e6405bafa0d4720b5488a0d85e9bad80fcf2"} Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.555014 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m" event={"ID":"51b847ef-ada2-456f-819d-0084fbb17185","Type":"ContainerStarted","Data":"789e66a026cc5bb74f59bfa18247bcf771599597baee3a6d15f80b61a7a1fb39"} Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.631501 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.651498 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5fc9b6f5b5-6ps9m" podStartSLOduration=4.217235668 podStartE2EDuration="10.651472969s" podCreationTimestamp="2026-03-13 10:26:24 +0000 UTC" firstStartedPulling="2026-03-13 10:26:26.02929894 +0000 UTC m=+1360.051829073" lastFinishedPulling="2026-03-13 10:26:32.463536251 +0000 UTC m=+1366.486066374" observedRunningTime="2026-03-13 10:26:34.60962354 +0000 UTC m=+1368.632153693" watchObservedRunningTime="2026-03-13 10:26:34.651472969 +0000 UTC m=+1368.674003122" Mar 13 10:26:34 crc kubenswrapper[4632]: I0313 10:26:34.774751 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="1761ca69-46fd-4375-af60-22b3e77c19a2" containerName="galera" probeResult="failure" output="command timed out" Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.279288 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.411097 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.411537 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.412331 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"dc4a058f6feb7822333693352f32f5677ff03988b7b5b71005c85c4bf733b402"} pod="openstack/horizon-7bdb5f7878-ng2k2" containerMessage="Container horizon failed startup probe, will be restarted" Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.412377 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bdb5f7878-ng2k2" 
podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" containerID="cri-o://dc4a058f6feb7822333693352f32f5677ff03988b7b5b71005c85c4bf733b402" gracePeriod=30 Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.637494 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6d96bd7f-txx79"] Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.648221 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" event={"ID":"55712a50-9dcf-44ce-8bac-9aa3ecf65db4","Type":"ContainerStarted","Data":"749192ea37afcdb5bad8f984bb1339eb6de202d1531a18803ce98189920ca65c"} Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.685380 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e715bfb-1bd5-4c21-ac77-df48fa58a69c","Type":"ContainerStarted","Data":"39e3d26a54ef5266d55d8b5bd910a7757a4fa838dc128c8cd4f4e7a4524e6288"} Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.706817 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cc676b85c-q67wf" podUID="6cbca281-a753-4810-ab5f-a2d5a5e9c41d" containerName="dnsmasq-dns" containerID="cri-o://d4574f8c63575790d1486fa1f8221ce400d19901d3317c976e702416503cba64" gracePeriod=10 Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.708801 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-756c4b86c6-rm274" event={"ID":"dbc1c989-5fa1-46dc-818e-8d609c069e34","Type":"ContainerStarted","Data":"674891ab2fd374236aa142133f7ffa3c2586bcb2078e41af46e08b0c49dd29e8"} Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.708879 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-756c4b86c6-rm274" Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.708908 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-756c4b86c6-rm274" Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.743094 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.762182 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-756c4b86c6-rm274" podStartSLOduration=6.762159903 podStartE2EDuration="6.762159903s" podCreationTimestamp="2026-03-13 10:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:26:35.752599177 +0000 UTC m=+1369.775129330" watchObservedRunningTime="2026-03-13 10:26:35.762159903 +0000 UTC m=+1369.784690026" Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.858866 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-689764498d-rg7vt" podUID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.858998 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.860040 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"8ce0185281fb59d0c6bda2b2c484ad3711b4bd3b729b4b8677e75ca6b8e1f739"} 
pod="openstack/horizon-689764498d-rg7vt" containerMessage="Container horizon failed startup probe, will be restarted" Mar 13 10:26:35 crc kubenswrapper[4632]: I0313 10:26:35.860108 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-689764498d-rg7vt" podUID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerName="horizon" containerID="cri-o://8ce0185281fb59d0c6bda2b2c484ad3711b4bd3b729b4b8677e75ca6b8e1f739" gracePeriod=30 Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.665635 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cc676b85c-q67wf" Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.766105 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-ovsdbserver-nb\") pod \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.766369 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq8dx\" (UniqueName: \"kubernetes.io/projected/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-kube-api-access-sq8dx\") pod \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.766761 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-ovsdbserver-sb\") pod \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.766813 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-config\") pod \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.766837 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-dns-swift-storage-0\") pod \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.767139 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-dns-svc\") pod \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\" (UID: \"6cbca281-a753-4810-ab5f-a2d5a5e9c41d\") " Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.826883 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-kube-api-access-sq8dx" (OuterVolumeSpecName: "kube-api-access-sq8dx") pod "6cbca281-a753-4810-ab5f-a2d5a5e9c41d" (UID: "6cbca281-a753-4810-ab5f-a2d5a5e9c41d"). InnerVolumeSpecName "kube-api-access-sq8dx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.827112 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58677e2e-9fc6-4e50-b342-e912afa8d969","Type":"ContainerStarted","Data":"8037b401a0baaaa45f09498066b3b722d38c4aef73b4ab3874c935fbc21eac6e"} Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.872419 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq8dx\" (UniqueName: \"kubernetes.io/projected/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-kube-api-access-sq8dx\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.903199 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6cbca281-a753-4810-ab5f-a2d5a5e9c41d" (UID: "6cbca281-a753-4810-ab5f-a2d5a5e9c41d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.928154 4632 generic.go:334] "Generic (PLEG): container finished" podID="6cbca281-a753-4810-ab5f-a2d5a5e9c41d" containerID="d4574f8c63575790d1486fa1f8221ce400d19901d3317c976e702416503cba64" exitCode=0 Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.928360 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cc676b85c-q67wf" Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.929145 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cc676b85c-q67wf" event={"ID":"6cbca281-a753-4810-ab5f-a2d5a5e9c41d","Type":"ContainerDied","Data":"d4574f8c63575790d1486fa1f8221ce400d19901d3317c976e702416503cba64"} Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.929176 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cc676b85c-q67wf" event={"ID":"6cbca281-a753-4810-ab5f-a2d5a5e9c41d","Type":"ContainerDied","Data":"c35af663b9fb4942a62d07ad5236c27fcd9454c66f1967196835246c5924112a"} Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.929193 4632 scope.go:117] "RemoveContainer" containerID="d4574f8c63575790d1486fa1f8221ce400d19901d3317c976e702416503cba64" Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.931012 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6cbca281-a753-4810-ab5f-a2d5a5e9c41d" (UID: "6cbca281-a753-4810-ab5f-a2d5a5e9c41d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.936995 4632 generic.go:334] "Generic (PLEG): container finished" podID="55712a50-9dcf-44ce-8bac-9aa3ecf65db4" containerID="af3fa8988b343c225a97a2143774e273237597ed7c92bf90057d129267e74a5e" exitCode=0 Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.938278 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" event={"ID":"55712a50-9dcf-44ce-8bac-9aa3ecf65db4","Type":"ContainerDied","Data":"af3fa8988b343c225a97a2143774e273237597ed7c92bf90057d129267e74a5e"} Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.989822 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:36 crc kubenswrapper[4632]: I0313 10:26:36.989852 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:37 crc kubenswrapper[4632]: I0313 10:26:37.003586 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6cbca281-a753-4810-ab5f-a2d5a5e9c41d" (UID: "6cbca281-a753-4810-ab5f-a2d5a5e9c41d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:26:37 crc kubenswrapper[4632]: I0313 10:26:37.009628 4632 scope.go:117] "RemoveContainer" containerID="de515c666eef870de947ba353b522abb4ca9136dc9f866c10cc7d68d392957ce" Mar 13 10:26:37 crc kubenswrapper[4632]: I0313 10:26:37.027669 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-config" (OuterVolumeSpecName: "config") pod "6cbca281-a753-4810-ab5f-a2d5a5e9c41d" (UID: "6cbca281-a753-4810-ab5f-a2d5a5e9c41d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:26:37 crc kubenswrapper[4632]: I0313 10:26:37.034449 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6cbca281-a753-4810-ab5f-a2d5a5e9c41d" (UID: "6cbca281-a753-4810-ab5f-a2d5a5e9c41d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:26:37 crc kubenswrapper[4632]: I0313 10:26:37.048341 4632 scope.go:117] "RemoveContainer" containerID="d4574f8c63575790d1486fa1f8221ce400d19901d3317c976e702416503cba64" Mar 13 10:26:37 crc kubenswrapper[4632]: E0313 10:26:37.053091 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4574f8c63575790d1486fa1f8221ce400d19901d3317c976e702416503cba64\": container with ID starting with d4574f8c63575790d1486fa1f8221ce400d19901d3317c976e702416503cba64 not found: ID does not exist" containerID="d4574f8c63575790d1486fa1f8221ce400d19901d3317c976e702416503cba64" Mar 13 10:26:37 crc kubenswrapper[4632]: I0313 10:26:37.053155 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4574f8c63575790d1486fa1f8221ce400d19901d3317c976e702416503cba64"} err="failed to get container status \"d4574f8c63575790d1486fa1f8221ce400d19901d3317c976e702416503cba64\": rpc error: code = NotFound desc = could not find container \"d4574f8c63575790d1486fa1f8221ce400d19901d3317c976e702416503cba64\": container with ID starting with d4574f8c63575790d1486fa1f8221ce400d19901d3317c976e702416503cba64 not found: ID does not exist" Mar 13 10:26:37 crc kubenswrapper[4632]: I0313 10:26:37.053184 4632 scope.go:117] "RemoveContainer" containerID="de515c666eef870de947ba353b522abb4ca9136dc9f866c10cc7d68d392957ce" Mar 13 10:26:37 crc kubenswrapper[4632]: E0313 10:26:37.053667 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de515c666eef870de947ba353b522abb4ca9136dc9f866c10cc7d68d392957ce\": container with ID starting with de515c666eef870de947ba353b522abb4ca9136dc9f866c10cc7d68d392957ce not found: ID does not exist" containerID="de515c666eef870de947ba353b522abb4ca9136dc9f866c10cc7d68d392957ce" Mar 13 10:26:37 crc kubenswrapper[4632]: I0313 10:26:37.053728 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de515c666eef870de947ba353b522abb4ca9136dc9f866c10cc7d68d392957ce"} err="failed to get container status \"de515c666eef870de947ba353b522abb4ca9136dc9f866c10cc7d68d392957ce\": rpc error: code = NotFound desc = could not find container \"de515c666eef870de947ba353b522abb4ca9136dc9f866c10cc7d68d392957ce\": container with ID starting with de515c666eef870de947ba353b522abb4ca9136dc9f866c10cc7d68d392957ce not found: ID does not exist" Mar 13 10:26:37 crc kubenswrapper[4632]: I0313 10:26:37.091170 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:37 crc kubenswrapper[4632]: I0313 10:26:37.091203 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:37 crc kubenswrapper[4632]: I0313 10:26:37.091213 4632 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cbca281-a753-4810-ab5f-a2d5a5e9c41d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:37 crc kubenswrapper[4632]: I0313 10:26:37.270424 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cc676b85c-q67wf"] Mar 13 10:26:37 crc kubenswrapper[4632]: I0313 10:26:37.283972 4632 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cc676b85c-q67wf"] Mar 13 10:26:37 crc kubenswrapper[4632]: I0313 10:26:37.812404 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 13 10:26:37 crc kubenswrapper[4632]: I0313 10:26:37.979453 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58677e2e-9fc6-4e50-b342-e912afa8d969","Type":"ContainerStarted","Data":"a9d0bc7751d471197cb532c1a7e500502d2e1e74a150ed57680796972e393189"} Mar 13 10:26:38 crc kubenswrapper[4632]: I0313 10:26:38.007580 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" event={"ID":"55712a50-9dcf-44ce-8bac-9aa3ecf65db4","Type":"ContainerStarted","Data":"fd4487114042316df9fc87c4e68537674ac28dafb4672e7c807d655817ad05cf"} Mar 13 10:26:38 crc kubenswrapper[4632]: I0313 10:26:38.008828 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:38 crc kubenswrapper[4632]: I0313 10:26:38.100225 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" podStartSLOduration=5.100201899 podStartE2EDuration="5.100201899s" podCreationTimestamp="2026-03-13 10:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:26:38.080818761 +0000 UTC m=+1372.103348894" watchObservedRunningTime="2026-03-13 10:26:38.100201899 +0000 UTC m=+1372.122732052" Mar 13 10:26:38 crc kubenswrapper[4632]: I0313 10:26:38.121224 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cbca281-a753-4810-ab5f-a2d5a5e9c41d" path="/var/lib/kubelet/pods/6cbca281-a753-4810-ab5f-a2d5a5e9c41d/volumes" Mar 13 10:26:38 crc kubenswrapper[4632]: I0313 10:26:38.121866 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e715bfb-1bd5-4c21-ac77-df48fa58a69c","Type":"ContainerStarted","Data":"ef04a85e3f43e78fc0a1f042d4fba78426911b179ed385ad01f997acf3e9c595"} Mar 13 10:26:39 crc kubenswrapper[4632]: I0313 10:26:39.714196 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-548c8b4b94-2dglr" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.168:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 10:26:39 crc kubenswrapper[4632]: I0313 10:26:39.714226 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-548c8b4b94-2dglr" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.168:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 10:26:40 crc kubenswrapper[4632]: I0313 10:26:40.106660 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58677e2e-9fc6-4e50-b342-e912afa8d969","Type":"ContainerStarted","Data":"3a8d9431bb58dc2e36bce7009280ffed0639f98e73ca93dba3c41c03d94fb14f"} Mar 13 10:26:40 crc kubenswrapper[4632]: I0313 10:26:40.106871 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="58677e2e-9fc6-4e50-b342-e912afa8d969" containerName="cinder-api-log" 
containerID="cri-o://a9d0bc7751d471197cb532c1a7e500502d2e1e74a150ed57680796972e393189" gracePeriod=30 Mar 13 10:26:40 crc kubenswrapper[4632]: I0313 10:26:40.107294 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 13 10:26:40 crc kubenswrapper[4632]: I0313 10:26:40.107643 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="58677e2e-9fc6-4e50-b342-e912afa8d969" containerName="cinder-api" containerID="cri-o://3a8d9431bb58dc2e36bce7009280ffed0639f98e73ca93dba3c41c03d94fb14f" gracePeriod=30 Mar 13 10:26:40 crc kubenswrapper[4632]: I0313 10:26:40.117374 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e715bfb-1bd5-4c21-ac77-df48fa58a69c","Type":"ContainerStarted","Data":"2d245775b1eef98309f47f0b4c5c25147b2fa7c1b34bae4b51ee329f49499e55"} Mar 13 10:26:40 crc kubenswrapper[4632]: I0313 10:26:40.159690 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.159674761 podStartE2EDuration="6.159674761s" podCreationTimestamp="2026-03-13 10:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:26:40.147597415 +0000 UTC m=+1374.170127548" watchObservedRunningTime="2026-03-13 10:26:40.159674761 +0000 UTC m=+1374.182204894" Mar 13 10:26:40 crc kubenswrapper[4632]: I0313 10:26:40.190781 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.258534269 podStartE2EDuration="7.190759095s" podCreationTimestamp="2026-03-13 10:26:33 +0000 UTC" firstStartedPulling="2026-03-13 10:26:35.311884579 +0000 UTC m=+1369.334414712" lastFinishedPulling="2026-03-13 10:26:36.244109415 +0000 UTC m=+1370.266639538" observedRunningTime="2026-03-13 10:26:40.178923326 +0000 UTC m=+1374.201453469" watchObservedRunningTime="2026-03-13 10:26:40.190759095 +0000 UTC m=+1374.213289228" Mar 13 10:26:40 crc kubenswrapper[4632]: I0313 10:26:40.648439 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-548c8b4b94-2dglr" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.168:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 10:26:40 crc kubenswrapper[4632]: I0313 10:26:40.697363 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-548c8b4b94-2dglr" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.168:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 10:26:41 crc kubenswrapper[4632]: I0313 10:26:41.138267 4632 generic.go:334] "Generic (PLEG): container finished" podID="58677e2e-9fc6-4e50-b342-e912afa8d969" containerID="a9d0bc7751d471197cb532c1a7e500502d2e1e74a150ed57680796972e393189" exitCode=143 Mar 13 10:26:41 crc kubenswrapper[4632]: I0313 10:26:41.138400 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58677e2e-9fc6-4e50-b342-e912afa8d969","Type":"ContainerDied","Data":"a9d0bc7751d471197cb532c1a7e500502d2e1e74a150ed57680796972e393189"} Mar 13 10:26:43 crc kubenswrapper[4632]: I0313 10:26:43.610409 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/barbican-api-756c4b86c6-rm274" Mar 13 10:26:44 crc kubenswrapper[4632]: I0313 10:26:44.174674 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 13 10:26:44 crc kubenswrapper[4632]: I0313 10:26:44.178138 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="6e715bfb-1bd5-4c21-ac77-df48fa58a69c" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.170:8080/\": dial tcp 10.217.0.170:8080: connect: connection refused" Mar 13 10:26:44 crc kubenswrapper[4632]: I0313 10:26:44.346562 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:26:44 crc kubenswrapper[4632]: I0313 10:26:44.447482 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5776d95bfc-hl9dv"] Mar 13 10:26:44 crc kubenswrapper[4632]: I0313 10:26:44.447727 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" podUID="ff547198-2736-4059-8e66-e63ea9ce7345" containerName="dnsmasq-dns" containerID="cri-o://1076485d4d02b6cacd1f94b4c459b88d5309d73c47777ad04b4bed1ee81eb7ff" gracePeriod=10 Mar 13 10:26:44 crc kubenswrapper[4632]: I0313 10:26:44.757713 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-548c8b4b94-2dglr" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.168:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 10:26:45 crc kubenswrapper[4632]: I0313 10:26:45.197147 4632 generic.go:334] "Generic (PLEG): container finished" podID="ff547198-2736-4059-8e66-e63ea9ce7345" containerID="1076485d4d02b6cacd1f94b4c459b88d5309d73c47777ad04b4bed1ee81eb7ff" exitCode=0 Mar 13 10:26:45 crc kubenswrapper[4632]: I0313 10:26:45.197200 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" event={"ID":"ff547198-2736-4059-8e66-e63ea9ce7345","Type":"ContainerDied","Data":"1076485d4d02b6cacd1f94b4c459b88d5309d73c47777ad04b4bed1ee81eb7ff"} Mar 13 10:26:45 crc kubenswrapper[4632]: I0313 10:26:45.690156 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-548c8b4b94-2dglr" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.168:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 10:26:45 crc kubenswrapper[4632]: I0313 10:26:45.739238 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-548c8b4b94-2dglr" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.168:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 10:26:45 crc kubenswrapper[4632]: I0313 10:26:45.739287 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-548c8b4b94-2dglr" Mar 13 10:26:45 crc kubenswrapper[4632]: I0313 10:26:45.753301 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-548c8b4b94-2dglr" Mar 13 10:26:46 crc kubenswrapper[4632]: I0313 10:26:46.939132 4632 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/barbican-api-756c4b86c6-rm274" podUID="dbc1c989-5fa1-46dc-818e-8d609c069e34" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.169:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:26:48 crc kubenswrapper[4632]: I0313 10:26:48.615179 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-756c4b86c6-rm274" podUID="dbc1c989-5fa1-46dc-818e-8d609c069e34" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.169:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.099314 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.110888 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.420235 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6db55c595b-pwgcg"] Mar 13 10:26:49 crc kubenswrapper[4632]: E0313 10:26:49.420608 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cbca281-a753-4810-ab5f-a2d5a5e9c41d" containerName="dnsmasq-dns" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.420625 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cbca281-a753-4810-ab5f-a2d5a5e9c41d" containerName="dnsmasq-dns" Mar 13 10:26:49 crc kubenswrapper[4632]: E0313 10:26:49.420641 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cbca281-a753-4810-ab5f-a2d5a5e9c41d" containerName="init" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.420647 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cbca281-a753-4810-ab5f-a2d5a5e9c41d" containerName="init" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.420849 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cbca281-a753-4810-ab5f-a2d5a5e9c41d" containerName="dnsmasq-dns" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.421810 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.457558 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" podUID="ff547198-2736-4059-8e66-e63ea9ce7345" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: connect: connection refused" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.482079 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6db55c595b-pwgcg"] Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.549414 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-scripts\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.549466 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-combined-ca-bundle\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.549510 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jms5c\" (UniqueName: \"kubernetes.io/projected/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-kube-api-access-jms5c\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.549570 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-config-data\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.549608 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-internal-tls-certs\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.549630 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-logs\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.549683 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-public-tls-certs\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.653497 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jms5c\" (UniqueName: 
\"kubernetes.io/projected/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-kube-api-access-jms5c\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.653670 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-config-data\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.653750 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-internal-tls-certs\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.653808 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-logs\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.653916 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-public-tls-certs\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.653975 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-scripts\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.654008 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-combined-ca-bundle\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.656373 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-logs\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.676245 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="58677e2e-9fc6-4e50-b342-e912afa8d969" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.172:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.677833 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-public-tls-certs\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") 
" pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.679246 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-combined-ca-bundle\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.680896 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-config-data\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.689389 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-scripts\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.690042 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-internal-tls-certs\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.713285 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jms5c\" (UniqueName: \"kubernetes.io/projected/ab896d5b-a5b6-46a3-84d8-c3a8c968eac0-kube-api-access-jms5c\") pod \"placement-6db55c595b-pwgcg\" (UID: \"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0\") " pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:49 crc kubenswrapper[4632]: I0313 10:26:49.785557 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:50 crc kubenswrapper[4632]: I0313 10:26:50.041813 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 13 10:26:50 crc kubenswrapper[4632]: I0313 10:26:50.123736 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 10:26:50 crc kubenswrapper[4632]: I0313 10:26:50.273810 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6e715bfb-1bd5-4c21-ac77-df48fa58a69c" containerName="cinder-scheduler" containerID="cri-o://ef04a85e3f43e78fc0a1f042d4fba78426911b179ed385ad01f997acf3e9c595" gracePeriod=30 Mar 13 10:26:50 crc kubenswrapper[4632]: I0313 10:26:50.273978 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6e715bfb-1bd5-4c21-ac77-df48fa58a69c" containerName="probe" containerID="cri-o://2d245775b1eef98309f47f0b4c5c25147b2fa7c1b34bae4b51ee329f49499e55" gracePeriod=30 Mar 13 10:26:51 crc kubenswrapper[4632]: E0313 10:26:51.155262 4632 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e715bfb_1bd5_4c21_ac77_df48fa58a69c.slice/crio-conmon-2d245775b1eef98309f47f0b4c5c25147b2fa7c1b34bae4b51ee329f49499e55.scope\": RecentStats: unable to find data in memory cache]" Mar 13 10:26:51 crc kubenswrapper[4632]: I0313 10:26:51.287223 4632 generic.go:334] "Generic (PLEG): container finished" podID="6e715bfb-1bd5-4c21-ac77-df48fa58a69c" containerID="2d245775b1eef98309f47f0b4c5c25147b2fa7c1b34bae4b51ee329f49499e55" exitCode=0 Mar 13 10:26:51 crc kubenswrapper[4632]: I0313 10:26:51.287607 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e715bfb-1bd5-4c21-ac77-df48fa58a69c","Type":"ContainerDied","Data":"2d245775b1eef98309f47f0b4c5c25147b2fa7c1b34bae4b51ee329f49499e55"} Mar 13 10:26:51 crc kubenswrapper[4632]: I0313 10:26:51.316026 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-756c4b86c6-rm274" Mar 13 10:26:51 crc kubenswrapper[4632]: I0313 10:26:51.402358 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-548c8b4b94-2dglr"] Mar 13 10:26:51 crc kubenswrapper[4632]: I0313 10:26:51.411197 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-548c8b4b94-2dglr" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api-log" containerID="cri-o://67882325af120e97844e1aef36a358fdd186b89ba1f3def214e49a353ec793aa" gracePeriod=30 Mar 13 10:26:51 crc kubenswrapper[4632]: I0313 10:26:51.412279 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-548c8b4b94-2dglr" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api" containerID="cri-o://309fa94df210d44c275999bad3e9b781bb4f9646e038b1a9463656385d210cf3" gracePeriod=30 Mar 13 10:26:52 crc kubenswrapper[4632]: I0313 10:26:52.302551 4632 generic.go:334] "Generic (PLEG): container finished" podID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerID="67882325af120e97844e1aef36a358fdd186b89ba1f3def214e49a353ec793aa" exitCode=143 Mar 13 10:26:52 crc kubenswrapper[4632]: I0313 10:26:52.302657 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-548c8b4b94-2dglr" event={"ID":"6fa310f1-40ef-4e74-9647-d3ea87858f11","Type":"ContainerDied","Data":"67882325af120e97844e1aef36a358fdd186b89ba1f3def214e49a353ec793aa"} Mar 13 10:26:52 crc kubenswrapper[4632]: I0313 10:26:52.306666 4632 generic.go:334] "Generic (PLEG): container finished" podID="6e715bfb-1bd5-4c21-ac77-df48fa58a69c" containerID="ef04a85e3f43e78fc0a1f042d4fba78426911b179ed385ad01f997acf3e9c595" exitCode=0 Mar 13 10:26:52 crc kubenswrapper[4632]: I0313 10:26:52.306721 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e715bfb-1bd5-4c21-ac77-df48fa58a69c","Type":"ContainerDied","Data":"ef04a85e3f43e78fc0a1f042d4fba78426911b179ed385ad01f997acf3e9c595"} Mar 13 10:26:52 crc kubenswrapper[4632]: I0313 10:26:52.310961 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 13 10:26:53 crc kubenswrapper[4632]: E0313 10:26:53.488103 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Mar 13 10:26:53 crc kubenswrapper[4632]: E0313 10:26:53.488574 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jz5nk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(270ebc10-986f-4473-8a5e-9094de34ae98): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 13 10:26:53 crc kubenswrapper[4632]: E0313 10:26:53.489799 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="270ebc10-986f-4473-8a5e-9094de34ae98" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.023900 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.116210 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.160474 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-ovsdbserver-sb\") pod \"ff547198-2736-4059-8e66-e63ea9ce7345\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.160520 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-dns-swift-storage-0\") pod \"ff547198-2736-4059-8e66-e63ea9ce7345\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.160565 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkdch\" (UniqueName: \"kubernetes.io/projected/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-kube-api-access-lkdch\") pod \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.160603 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-config\") pod \"ff547198-2736-4059-8e66-e63ea9ce7345\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.160639 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4rzv\" (UniqueName: \"kubernetes.io/projected/ff547198-2736-4059-8e66-e63ea9ce7345-kube-api-access-v4rzv\") pod \"ff547198-2736-4059-8e66-e63ea9ce7345\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.160675 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-config-data-custom\") pod \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " 
Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.160777 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-scripts\") pod \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.160835 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-dns-svc\") pod \"ff547198-2736-4059-8e66-e63ea9ce7345\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.160862 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-etc-machine-id\") pod \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.160909 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-config-data\") pod \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.160926 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-combined-ca-bundle\") pod \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.160975 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-ovsdbserver-nb\") pod \"ff547198-2736-4059-8e66-e63ea9ce7345\" (UID: \"ff547198-2736-4059-8e66-e63ea9ce7345\") " Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.164314 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6e715bfb-1bd5-4c21-ac77-df48fa58a69c" (UID: "6e715bfb-1bd5-4c21-ac77-df48fa58a69c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.172493 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6e715bfb-1bd5-4c21-ac77-df48fa58a69c" (UID: "6e715bfb-1bd5-4c21-ac77-df48fa58a69c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.172593 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-kube-api-access-lkdch" (OuterVolumeSpecName: "kube-api-access-lkdch") pod "6e715bfb-1bd5-4c21-ac77-df48fa58a69c" (UID: "6e715bfb-1bd5-4c21-ac77-df48fa58a69c"). InnerVolumeSpecName "kube-api-access-lkdch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.183804 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff547198-2736-4059-8e66-e63ea9ce7345-kube-api-access-v4rzv" (OuterVolumeSpecName: "kube-api-access-v4rzv") pod "ff547198-2736-4059-8e66-e63ea9ce7345" (UID: "ff547198-2736-4059-8e66-e63ea9ce7345"). InnerVolumeSpecName "kube-api-access-v4rzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.184804 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-scripts" (OuterVolumeSpecName: "scripts") pod "6e715bfb-1bd5-4c21-ac77-df48fa58a69c" (UID: "6e715bfb-1bd5-4c21-ac77-df48fa58a69c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.262134 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e715bfb-1bd5-4c21-ac77-df48fa58a69c" (UID: "6e715bfb-1bd5-4c21-ac77-df48fa58a69c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.262578 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-combined-ca-bundle\") pod \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\" (UID: \"6e715bfb-1bd5-4c21-ac77-df48fa58a69c\") " Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.263107 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkdch\" (UniqueName: \"kubernetes.io/projected/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-kube-api-access-lkdch\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.263132 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4rzv\" (UniqueName: \"kubernetes.io/projected/ff547198-2736-4059-8e66-e63ea9ce7345-kube-api-access-v4rzv\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.263144 4632 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.263155 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.263166 4632 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:54 crc kubenswrapper[4632]: W0313 10:26:54.263245 4632 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6e715bfb-1bd5-4c21-ac77-df48fa58a69c/volumes/kubernetes.io~secret/combined-ca-bundle Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.263255 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e715bfb-1bd5-4c21-ac77-df48fa58a69c" (UID: "6e715bfb-1bd5-4c21-ac77-df48fa58a69c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.268215 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6db55c595b-pwgcg"] Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.275832 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff547198-2736-4059-8e66-e63ea9ce7345" (UID: "ff547198-2736-4059-8e66-e63ea9ce7345"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.277304 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ff547198-2736-4059-8e66-e63ea9ce7345" (UID: "ff547198-2736-4059-8e66-e63ea9ce7345"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.293837 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.294206 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff547198-2736-4059-8e66-e63ea9ce7345" (UID: "ff547198-2736-4059-8e66-e63ea9ce7345"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.331195 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ff547198-2736-4059-8e66-e63ea9ce7345" (UID: "ff547198-2736-4059-8e66-e63ea9ce7345"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.337024 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" event={"ID":"ff547198-2736-4059-8e66-e63ea9ce7345","Type":"ContainerDied","Data":"8db6fac31f3928e6490a77faa8cf72ab51791153ec4fce9dafd1cd9fb950c31f"} Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.337063 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5776d95bfc-hl9dv" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.337091 4632 scope.go:117] "RemoveContainer" containerID="1076485d4d02b6cacd1f94b4c459b88d5309d73c47777ad04b4bed1ee81eb7ff" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.340438 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e715bfb-1bd5-4c21-ac77-df48fa58a69c","Type":"ContainerDied","Data":"39e3d26a54ef5266d55d8b5bd910a7757a4fa838dc128c8cd4f4e7a4524e6288"} Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.340554 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.347535 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="270ebc10-986f-4473-8a5e-9094de34ae98" containerName="ceilometer-central-agent" containerID="cri-o://27c121915dbbdfc336d1bc55bed50eb5edaf76e1bc92f4f6b5e249f4ffe5098a" gracePeriod=30 Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.347834 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6db55c595b-pwgcg" event={"ID":"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0","Type":"ContainerStarted","Data":"cd52fc862b4f56e1e6058f382a29e8086f42b20e58a9788e92c543c0f3389cd6"} Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.347892 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="270ebc10-986f-4473-8a5e-9094de34ae98" containerName="sg-core" containerID="cri-o://d138d976167695fe9d299247eefcff55845f7ad27e84fc81cc086274294f2e51" gracePeriod=30 Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.347966 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="270ebc10-986f-4473-8a5e-9094de34ae98" containerName="ceilometer-notification-agent" containerID="cri-o://b63cc4f80efbb7b17b044808a5b6c8d5aa98b9e2ae8e38ab95a55c4e3ba911d1" gracePeriod=30 Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.365253 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-config" (OuterVolumeSpecName: "config") pod "ff547198-2736-4059-8e66-e63ea9ce7345" (UID: "ff547198-2736-4059-8e66-e63ea9ce7345"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.365922 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.365956 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.365972 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.365984 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.365995 4632 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.366006 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff547198-2736-4059-8e66-e63ea9ce7345-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.385398 4632 scope.go:117] "RemoveContainer" containerID="dec2a00b325c16f4a1d001f23d5e8b1ffdb30f4c935f90c479b4c2928a1f9cbd" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.425994 4632 scope.go:117] "RemoveContainer" containerID="2d245775b1eef98309f47f0b4c5c25147b2fa7c1b34bae4b51ee329f49499e55" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.446060 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-config-data" (OuterVolumeSpecName: "config-data") pod "6e715bfb-1bd5-4c21-ac77-df48fa58a69c" (UID: "6e715bfb-1bd5-4c21-ac77-df48fa58a69c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.467347 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e715bfb-1bd5-4c21-ac77-df48fa58a69c-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.521034 4632 scope.go:117] "RemoveContainer" containerID="ef04a85e3f43e78fc0a1f042d4fba78426911b179ed385ad01f997acf3e9c595" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.645714 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64bdffbb5c-mpfvf"] Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.646081 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64bdffbb5c-mpfvf" podUID="6c867fc1-05ed-46c3-99dc-71ef8a09dad3" containerName="neutron-api" containerID="cri-o://027b2c4436a3d137f7ef6a7921904bf128e17aa7812143af60d4d11a546759da" gracePeriod=30 Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.646252 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64bdffbb5c-mpfvf" podUID="6c867fc1-05ed-46c3-99dc-71ef8a09dad3" containerName="neutron-httpd" containerID="cri-o://bbc256375bc79a61ff656574ec8a596aed3314e7ad4cd2f7fcf6a7462aee3274" gracePeriod=30 Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.723205 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6588559b77-6f4bf"] Mar 13 10:26:54 crc kubenswrapper[4632]: E0313 10:26:54.731329 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff547198-2736-4059-8e66-e63ea9ce7345" containerName="dnsmasq-dns" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.731541 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff547198-2736-4059-8e66-e63ea9ce7345" containerName="dnsmasq-dns" Mar 13 10:26:54 crc kubenswrapper[4632]: E0313 10:26:54.731617 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e715bfb-1bd5-4c21-ac77-df48fa58a69c" containerName="cinder-scheduler" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.731675 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e715bfb-1bd5-4c21-ac77-df48fa58a69c" containerName="cinder-scheduler" Mar 13 10:26:54 crc kubenswrapper[4632]: E0313 10:26:54.731751 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e715bfb-1bd5-4c21-ac77-df48fa58a69c" containerName="probe" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.731809 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e715bfb-1bd5-4c21-ac77-df48fa58a69c" containerName="probe" Mar 13 10:26:54 crc kubenswrapper[4632]: E0313 10:26:54.731873 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff547198-2736-4059-8e66-e63ea9ce7345" containerName="init" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.731924 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff547198-2736-4059-8e66-e63ea9ce7345" containerName="init" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.732151 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff547198-2736-4059-8e66-e63ea9ce7345" containerName="dnsmasq-dns" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.732225 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e715bfb-1bd5-4c21-ac77-df48fa58a69c" containerName="probe" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.732287 4632 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6e715bfb-1bd5-4c21-ac77-df48fa58a69c" containerName="cinder-scheduler" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.733323 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.745245 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.782457 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-httpd-config\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.782507 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-combined-ca-bundle\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.782592 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-config\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.782684 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-public-tls-certs\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.782716 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-internal-tls-certs\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.782764 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g98kg\" (UniqueName: \"kubernetes.io/projected/79498b99-6b5c-4a95-8558-5d615fc7abba-kube-api-access-g98kg\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.782823 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-ovndb-tls-certs\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.800161 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6588559b77-6f4bf"] Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.837335 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5776d95bfc-hl9dv"] Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.855485 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5776d95bfc-hl9dv"] Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.875126 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.886251 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-ovndb-tls-certs\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.886328 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-httpd-config\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.886356 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-combined-ca-bundle\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.886403 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-config\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.886489 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-public-tls-certs\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.886528 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-internal-tls-certs\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.886576 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g98kg\" (UniqueName: \"kubernetes.io/projected/79498b99-6b5c-4a95-8558-5d615fc7abba-kube-api-access-g98kg\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.894405 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.916383 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-internal-tls-certs\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " 
pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.916907 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-public-tls-certs\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.917647 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-httpd-config\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.920813 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-ovndb-tls-certs\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.926091 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-config\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.945800 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-combined-ca-bundle\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.954009 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g98kg\" (UniqueName: \"kubernetes.io/projected/79498b99-6b5c-4a95-8558-5d615fc7abba-kube-api-access-g98kg\") pod \"neutron-6588559b77-6f4bf\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.974006 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.975459 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 13 10:26:54 crc kubenswrapper[4632]: I0313 10:26:54.980262 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.082650 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.091579 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c1c19b-95a5-4db1-8e54-36fe83704b25-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.091621 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c1c19b-95a5-4db1-8e54-36fe83704b25-scripts\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.091674 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn7fb\" (UniqueName: \"kubernetes.io/projected/d2c1c19b-95a5-4db1-8e54-36fe83704b25-kube-api-access-qn7fb\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.091697 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c1c19b-95a5-4db1-8e54-36fe83704b25-config-data\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.091758 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2c1c19b-95a5-4db1-8e54-36fe83704b25-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.091791 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2c1c19b-95a5-4db1-8e54-36fe83704b25-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.112086 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.194366 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c1c19b-95a5-4db1-8e54-36fe83704b25-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.194416 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c1c19b-95a5-4db1-8e54-36fe83704b25-scripts\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.194474 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn7fb\" (UniqueName: \"kubernetes.io/projected/d2c1c19b-95a5-4db1-8e54-36fe83704b25-kube-api-access-qn7fb\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.194881 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c1c19b-95a5-4db1-8e54-36fe83704b25-config-data\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.194956 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2c1c19b-95a5-4db1-8e54-36fe83704b25-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.194984 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2c1c19b-95a5-4db1-8e54-36fe83704b25-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.197489 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2c1c19b-95a5-4db1-8e54-36fe83704b25-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.215790 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c1c19b-95a5-4db1-8e54-36fe83704b25-scripts\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.216300 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c1c19b-95a5-4db1-8e54-36fe83704b25-config-data\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.216653 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/d2c1c19b-95a5-4db1-8e54-36fe83704b25-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.218428 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn7fb\" (UniqueName: \"kubernetes.io/projected/d2c1c19b-95a5-4db1-8e54-36fe83704b25-kube-api-access-qn7fb\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.222464 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c1c19b-95a5-4db1-8e54-36fe83704b25-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d2c1c19b-95a5-4db1-8e54-36fe83704b25\") " pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.325348 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.381322 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6db55c595b-pwgcg" event={"ID":"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0","Type":"ContainerStarted","Data":"53916000669a99bf8cb4a1d3bdcfa4c5fdf4d945c1bc57cbcc0a66a10b039644"} Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.419356 4632 generic.go:334] "Generic (PLEG): container finished" podID="270ebc10-986f-4473-8a5e-9094de34ae98" containerID="d138d976167695fe9d299247eefcff55845f7ad27e84fc81cc086274294f2e51" exitCode=2 Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.419389 4632 generic.go:334] "Generic (PLEG): container finished" podID="270ebc10-986f-4473-8a5e-9094de34ae98" containerID="27c121915dbbdfc336d1bc55bed50eb5edaf76e1bc92f4f6b5e249f4ffe5098a" exitCode=0 Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.419448 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270ebc10-986f-4473-8a5e-9094de34ae98","Type":"ContainerDied","Data":"d138d976167695fe9d299247eefcff55845f7ad27e84fc81cc086274294f2e51"} Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.419494 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270ebc10-986f-4473-8a5e-9094de34ae98","Type":"ContainerDied","Data":"27c121915dbbdfc336d1bc55bed50eb5edaf76e1bc92f4f6b5e249f4ffe5098a"} Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.423490 4632 generic.go:334] "Generic (PLEG): container finished" podID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerID="309fa94df210d44c275999bad3e9b781bb4f9646e038b1a9463656385d210cf3" exitCode=0 Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.423567 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-548c8b4b94-2dglr" event={"ID":"6fa310f1-40ef-4e74-9647-d3ea87858f11","Type":"ContainerDied","Data":"309fa94df210d44c275999bad3e9b781bb4f9646e038b1a9463656385d210cf3"} Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.723428 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-548c8b4b94-2dglr" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.811143 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-config-data-custom\") pod \"6fa310f1-40ef-4e74-9647-d3ea87858f11\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.811242 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fa310f1-40ef-4e74-9647-d3ea87858f11-logs\") pod \"6fa310f1-40ef-4e74-9647-d3ea87858f11\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.811320 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pmkj\" (UniqueName: \"kubernetes.io/projected/6fa310f1-40ef-4e74-9647-d3ea87858f11-kube-api-access-8pmkj\") pod \"6fa310f1-40ef-4e74-9647-d3ea87858f11\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.811350 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-config-data\") pod \"6fa310f1-40ef-4e74-9647-d3ea87858f11\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.811368 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-combined-ca-bundle\") pod \"6fa310f1-40ef-4e74-9647-d3ea87858f11\" (UID: \"6fa310f1-40ef-4e74-9647-d3ea87858f11\") " Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.811988 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fa310f1-40ef-4e74-9647-d3ea87858f11-logs" (OuterVolumeSpecName: "logs") pod "6fa310f1-40ef-4e74-9647-d3ea87858f11" (UID: "6fa310f1-40ef-4e74-9647-d3ea87858f11"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.837478 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6fa310f1-40ef-4e74-9647-d3ea87858f11" (UID: "6fa310f1-40ef-4e74-9647-d3ea87858f11"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.837890 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fa310f1-40ef-4e74-9647-d3ea87858f11-kube-api-access-8pmkj" (OuterVolumeSpecName: "kube-api-access-8pmkj") pod "6fa310f1-40ef-4e74-9647-d3ea87858f11" (UID: "6fa310f1-40ef-4e74-9647-d3ea87858f11"). InnerVolumeSpecName "kube-api-access-8pmkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.894239 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fa310f1-40ef-4e74-9647-d3ea87858f11" (UID: "6fa310f1-40ef-4e74-9647-d3ea87858f11"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.916982 4632 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.917014 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fa310f1-40ef-4e74-9647-d3ea87858f11-logs\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.917024 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pmkj\" (UniqueName: \"kubernetes.io/projected/6fa310f1-40ef-4e74-9647-d3ea87858f11-kube-api-access-8pmkj\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.917035 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:55 crc kubenswrapper[4632]: I0313 10:26:55.963330 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-config-data" (OuterVolumeSpecName: "config-data") pod "6fa310f1-40ef-4e74-9647-d3ea87858f11" (UID: "6fa310f1-40ef-4e74-9647-d3ea87858f11"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.021014 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa310f1-40ef-4e74-9647-d3ea87858f11-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.085596 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e715bfb-1bd5-4c21-ac77-df48fa58a69c" path="/var/lib/kubelet/pods/6e715bfb-1bd5-4c21-ac77-df48fa58a69c/volumes" Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.087455 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff547198-2736-4059-8e66-e63ea9ce7345" path="/var/lib/kubelet/pods/ff547198-2736-4059-8e66-e63ea9ce7345/volumes" Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.095631 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6588559b77-6f4bf"] Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.114844 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 10:26:56 crc kubenswrapper[4632]: W0313 10:26:56.121528 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79498b99_6b5c_4a95_8558_5d615fc7abba.slice/crio-cc3aa5e44b0dc25bbbe479e7210125c65a55be4449da25f59fdeef0322a73ed3 WatchSource:0}: Error finding container cc3aa5e44b0dc25bbbe479e7210125c65a55be4449da25f59fdeef0322a73ed3: Status 404 returned error can't find the container with id cc3aa5e44b0dc25bbbe479e7210125c65a55be4449da25f59fdeef0322a73ed3 Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.453794 4632 generic.go:334] "Generic (PLEG): container finished" podID="6c867fc1-05ed-46c3-99dc-71ef8a09dad3" containerID="bbc256375bc79a61ff656574ec8a596aed3314e7ad4cd2f7fcf6a7462aee3274" exitCode=0 Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.453869 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-64bdffbb5c-mpfvf" event={"ID":"6c867fc1-05ed-46c3-99dc-71ef8a09dad3","Type":"ContainerDied","Data":"bbc256375bc79a61ff656574ec8a596aed3314e7ad4cd2f7fcf6a7462aee3274"} Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.455527 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d2c1c19b-95a5-4db1-8e54-36fe83704b25","Type":"ContainerStarted","Data":"9fdece3800c287c4e18e7f493209526ed799ae59bb9b012c7f57b96117c81a49"} Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.456744 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6588559b77-6f4bf" event={"ID":"79498b99-6b5c-4a95-8558-5d615fc7abba","Type":"ContainerStarted","Data":"cc3aa5e44b0dc25bbbe479e7210125c65a55be4449da25f59fdeef0322a73ed3"} Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.459015 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6db55c595b-pwgcg" event={"ID":"ab896d5b-a5b6-46a3-84d8-c3a8c968eac0","Type":"ContainerStarted","Data":"3f503ce1a453ee49856abaa4a3d77ea00e6382fc8095631e76a224e4d7cf8ac2"} Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.459528 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.459569 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6db55c595b-pwgcg" Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.478187 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-548c8b4b94-2dglr" event={"ID":"6fa310f1-40ef-4e74-9647-d3ea87858f11","Type":"ContainerDied","Data":"4545ac42523c98f674d28d5d0acc10645d2b1e7d8486b7068d13c265711710a4"} Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.478246 4632 scope.go:117] "RemoveContainer" containerID="309fa94df210d44c275999bad3e9b781bb4f9646e038b1a9463656385d210cf3" Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.478258 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-548c8b4b94-2dglr" Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.492173 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6db55c595b-pwgcg" podStartSLOduration=7.492142267 podStartE2EDuration="7.492142267s" podCreationTimestamp="2026-03-13 10:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:26:56.489266179 +0000 UTC m=+1390.511796312" watchObservedRunningTime="2026-03-13 10:26:56.492142267 +0000 UTC m=+1390.514672400" Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.573716 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-548c8b4b94-2dglr"] Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.588170 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-548c8b4b94-2dglr"] Mar 13 10:26:56 crc kubenswrapper[4632]: I0313 10:26:56.621190 4632 scope.go:117] "RemoveContainer" containerID="67882325af120e97844e1aef36a358fdd186b89ba1f3def214e49a353ec793aa" Mar 13 10:26:57 crc kubenswrapper[4632]: I0313 10:26:57.304590 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-64bdffbb5c-mpfvf" podUID="6c867fc1-05ed-46c3-99dc-71ef8a09dad3" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.159:9696/\": dial tcp 10.217.0.159:9696: connect: connection refused" Mar 13 10:26:57 crc kubenswrapper[4632]: I0313 10:26:57.530011 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d2c1c19b-95a5-4db1-8e54-36fe83704b25","Type":"ContainerStarted","Data":"67516bb124d863acdb93cbafff12001c1c53c2a821587b0e3e99f6135ee28e92"} Mar 13 10:26:57 crc kubenswrapper[4632]: I0313 10:26:57.544034 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6588559b77-6f4bf" event={"ID":"79498b99-6b5c-4a95-8558-5d615fc7abba","Type":"ContainerStarted","Data":"2e4dbe726a115e20d5697b52cbd987856c78465356a65ffaf180382482e42ad0"} Mar 13 10:26:57 crc kubenswrapper[4632]: I0313 10:26:57.544098 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6588559b77-6f4bf" event={"ID":"79498b99-6b5c-4a95-8558-5d615fc7abba","Type":"ContainerStarted","Data":"a37056b823559676b78bbdad36e07fb68a02ab13bf670546d16508926857a154"} Mar 13 10:26:57 crc kubenswrapper[4632]: I0313 10:26:57.544717 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:26:57 crc kubenswrapper[4632]: I0313 10:26:57.569279 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6588559b77-6f4bf" podStartSLOduration=3.569255977 podStartE2EDuration="3.569255977s" podCreationTimestamp="2026-03-13 10:26:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:26:57.562276862 +0000 UTC m=+1391.584807015" watchObservedRunningTime="2026-03-13 10:26:57.569255977 +0000 UTC m=+1391.591786120" Mar 13 10:26:58 crc kubenswrapper[4632]: I0313 10:26:58.055710 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" path="/var/lib/kubelet/pods/6fa310f1-40ef-4e74-9647-d3ea87858f11/volumes" Mar 13 10:26:59 crc kubenswrapper[4632]: I0313 10:26:59.579065 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"d2c1c19b-95a5-4db1-8e54-36fe83704b25","Type":"ContainerStarted","Data":"5eee9458e5025ac2fc90f250d59097faf32248e3b261634e845944e47ef32ad2"} Mar 13 10:26:59 crc kubenswrapper[4632]: I0313 10:26:59.630182 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.630158762 podStartE2EDuration="5.630158762s" podCreationTimestamp="2026-03-13 10:26:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:26:59.625310737 +0000 UTC m=+1393.647840880" watchObservedRunningTime="2026-03-13 10:26:59.630158762 +0000 UTC m=+1393.652688885" Mar 13 10:26:59 crc kubenswrapper[4632]: I0313 10:26:59.735569 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-f664b756d-8fxf4" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.325881 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.525781 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 13 10:27:00 crc kubenswrapper[4632]: E0313 10:27:00.526646 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.526673 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api" Mar 13 10:27:00 crc kubenswrapper[4632]: E0313 10:27:00.526695 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api-log" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.526704 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api-log" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.526991 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.527037 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api-log" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.527830 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.530162 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.530679 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.536862 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-gvm2d" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.553049 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.601090 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-548c8b4b94-2dglr" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.168:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.603345 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-548c8b4b94-2dglr" podUID="6fa310f1-40ef-4e74-9647-d3ea87858f11" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.168:9311/healthcheck\": dial tcp 10.217.0.168:9311: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.625576 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8adc4254-ad10-4335-a365-876324d1af24-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8adc4254-ad10-4335-a365-876324d1af24\") " pod="openstack/openstackclient" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.625658 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzl69\" (UniqueName: \"kubernetes.io/projected/8adc4254-ad10-4335-a365-876324d1af24-kube-api-access-pzl69\") pod \"openstackclient\" (UID: \"8adc4254-ad10-4335-a365-876324d1af24\") " pod="openstack/openstackclient" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.625690 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8adc4254-ad10-4335-a365-876324d1af24-openstack-config-secret\") pod \"openstackclient\" (UID: \"8adc4254-ad10-4335-a365-876324d1af24\") " pod="openstack/openstackclient" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.625764 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8adc4254-ad10-4335-a365-876324d1af24-openstack-config\") pod \"openstackclient\" (UID: \"8adc4254-ad10-4335-a365-876324d1af24\") " pod="openstack/openstackclient" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.727226 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8adc4254-ad10-4335-a365-876324d1af24-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8adc4254-ad10-4335-a365-876324d1af24\") " pod="openstack/openstackclient" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.727505 4632 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzl69\" (UniqueName: \"kubernetes.io/projected/8adc4254-ad10-4335-a365-876324d1af24-kube-api-access-pzl69\") pod \"openstackclient\" (UID: \"8adc4254-ad10-4335-a365-876324d1af24\") " pod="openstack/openstackclient" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.727524 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8adc4254-ad10-4335-a365-876324d1af24-openstack-config-secret\") pod \"openstackclient\" (UID: \"8adc4254-ad10-4335-a365-876324d1af24\") " pod="openstack/openstackclient" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.727588 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8adc4254-ad10-4335-a365-876324d1af24-openstack-config\") pod \"openstackclient\" (UID: \"8adc4254-ad10-4335-a365-876324d1af24\") " pod="openstack/openstackclient" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.730123 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8adc4254-ad10-4335-a365-876324d1af24-openstack-config\") pod \"openstackclient\" (UID: \"8adc4254-ad10-4335-a365-876324d1af24\") " pod="openstack/openstackclient" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.743582 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8adc4254-ad10-4335-a365-876324d1af24-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8adc4254-ad10-4335-a365-876324d1af24\") " pod="openstack/openstackclient" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.744771 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8adc4254-ad10-4335-a365-876324d1af24-openstack-config-secret\") pod \"openstackclient\" (UID: \"8adc4254-ad10-4335-a365-876324d1af24\") " pod="openstack/openstackclient" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.751875 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzl69\" (UniqueName: \"kubernetes.io/projected/8adc4254-ad10-4335-a365-876324d1af24-kube-api-access-pzl69\") pod \"openstackclient\" (UID: \"8adc4254-ad10-4335-a365-876324d1af24\") " pod="openstack/openstackclient" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.857509 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.946582 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 13 10:27:00 crc kubenswrapper[4632]: I0313 10:27:00.964294 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.015233 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.017438 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.037290 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.137330 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmjms\" (UniqueName: \"kubernetes.io/projected/aef9680f-df77-4e2e-ac53-9d7530c2270c-kube-api-access-cmjms\") pod \"openstackclient\" (UID: \"aef9680f-df77-4e2e-ac53-9d7530c2270c\") " pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.137423 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aef9680f-df77-4e2e-ac53-9d7530c2270c-openstack-config-secret\") pod \"openstackclient\" (UID: \"aef9680f-df77-4e2e-ac53-9d7530c2270c\") " pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.137491 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/aef9680f-df77-4e2e-ac53-9d7530c2270c-openstack-config\") pod \"openstackclient\" (UID: \"aef9680f-df77-4e2e-ac53-9d7530c2270c\") " pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.137546 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef9680f-df77-4e2e-ac53-9d7530c2270c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"aef9680f-df77-4e2e-ac53-9d7530c2270c\") " pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.240033 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aef9680f-df77-4e2e-ac53-9d7530c2270c-openstack-config-secret\") pod \"openstackclient\" (UID: \"aef9680f-df77-4e2e-ac53-9d7530c2270c\") " pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.240431 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/aef9680f-df77-4e2e-ac53-9d7530c2270c-openstack-config\") pod \"openstackclient\" (UID: \"aef9680f-df77-4e2e-ac53-9d7530c2270c\") " pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.240494 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef9680f-df77-4e2e-ac53-9d7530c2270c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"aef9680f-df77-4e2e-ac53-9d7530c2270c\") " pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.240598 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmjms\" (UniqueName: \"kubernetes.io/projected/aef9680f-df77-4e2e-ac53-9d7530c2270c-kube-api-access-cmjms\") pod \"openstackclient\" (UID: \"aef9680f-df77-4e2e-ac53-9d7530c2270c\") " pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.242273 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/aef9680f-df77-4e2e-ac53-9d7530c2270c-openstack-config\") pod \"openstackclient\" (UID: 
\"aef9680f-df77-4e2e-ac53-9d7530c2270c\") " pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.247206 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef9680f-df77-4e2e-ac53-9d7530c2270c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"aef9680f-df77-4e2e-ac53-9d7530c2270c\") " pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: E0313 10:27:01.263624 4632 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 10:27:01 crc kubenswrapper[4632]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_8adc4254-ad10-4335-a365-876324d1af24_0(fd110ce1a0d6e4d5ad13c3bd776d304884dcdd79911e2024e5cf26000b535d37): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fd110ce1a0d6e4d5ad13c3bd776d304884dcdd79911e2024e5cf26000b535d37" Netns:"/var/run/netns/4de97584-a76a-4eaa-8a9c-1af1c056a2a4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=fd110ce1a0d6e4d5ad13c3bd776d304884dcdd79911e2024e5cf26000b535d37;K8S_POD_UID=8adc4254-ad10-4335-a365-876324d1af24" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/8adc4254-ad10-4335-a365-876324d1af24]: expected pod UID "8adc4254-ad10-4335-a365-876324d1af24" but got "aef9680f-df77-4e2e-ac53-9d7530c2270c" from Kube API Mar 13 10:27:01 crc kubenswrapper[4632]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:27:01 crc kubenswrapper[4632]: > Mar 13 10:27:01 crc kubenswrapper[4632]: E0313 10:27:01.263736 4632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 10:27:01 crc kubenswrapper[4632]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_8adc4254-ad10-4335-a365-876324d1af24_0(fd110ce1a0d6e4d5ad13c3bd776d304884dcdd79911e2024e5cf26000b535d37): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fd110ce1a0d6e4d5ad13c3bd776d304884dcdd79911e2024e5cf26000b535d37" Netns:"/var/run/netns/4de97584-a76a-4eaa-8a9c-1af1c056a2a4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=fd110ce1a0d6e4d5ad13c3bd776d304884dcdd79911e2024e5cf26000b535d37;K8S_POD_UID=8adc4254-ad10-4335-a365-876324d1af24" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/8adc4254-ad10-4335-a365-876324d1af24]: expected pod UID "8adc4254-ad10-4335-a365-876324d1af24" but got "aef9680f-df77-4e2e-ac53-9d7530c2270c" from Kube API Mar 13 10:27:01 crc kubenswrapper[4632]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:27:01 crc kubenswrapper[4632]: > pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.264318 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aef9680f-df77-4e2e-ac53-9d7530c2270c-openstack-config-secret\") pod \"openstackclient\" (UID: \"aef9680f-df77-4e2e-ac53-9d7530c2270c\") " pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.281711 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmjms\" (UniqueName: \"kubernetes.io/projected/aef9680f-df77-4e2e-ac53-9d7530c2270c-kube-api-access-cmjms\") pod \"openstackclient\" (UID: \"aef9680f-df77-4e2e-ac53-9d7530c2270c\") " pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.343200 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.659169 4632 generic.go:334] "Generic (PLEG): container finished" podID="270ebc10-986f-4473-8a5e-9094de34ae98" containerID="b63cc4f80efbb7b17b044808a5b6c8d5aa98b9e2ae8e38ab95a55c4e3ba911d1" exitCode=0 Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.659615 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270ebc10-986f-4473-8a5e-9094de34ae98","Type":"ContainerDied","Data":"b63cc4f80efbb7b17b044808a5b6c8d5aa98b9e2ae8e38ab95a55c4e3ba911d1"} Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.663637 4632 generic.go:334] "Generic (PLEG): container finished" podID="6c867fc1-05ed-46c3-99dc-71ef8a09dad3" containerID="027b2c4436a3d137f7ef6a7921904bf128e17aa7812143af60d4d11a546759da" exitCode=0 Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.663732 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.664357 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bdffbb5c-mpfvf" event={"ID":"6c867fc1-05ed-46c3-99dc-71ef8a09dad3","Type":"ContainerDied","Data":"027b2c4436a3d137f7ef6a7921904bf128e17aa7812143af60d4d11a546759da"} Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.670738 4632 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8adc4254-ad10-4335-a365-876324d1af24" podUID="aef9680f-df77-4e2e-ac53-9d7530c2270c" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.682125 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.753466 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8adc4254-ad10-4335-a365-876324d1af24-openstack-config-secret\") pod \"8adc4254-ad10-4335-a365-876324d1af24\" (UID: \"8adc4254-ad10-4335-a365-876324d1af24\") " Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.754728 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8adc4254-ad10-4335-a365-876324d1af24-combined-ca-bundle\") pod \"8adc4254-ad10-4335-a365-876324d1af24\" (UID: \"8adc4254-ad10-4335-a365-876324d1af24\") " Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.754862 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8adc4254-ad10-4335-a365-876324d1af24-openstack-config\") pod \"8adc4254-ad10-4335-a365-876324d1af24\" (UID: \"8adc4254-ad10-4335-a365-876324d1af24\") " Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.755088 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzl69\" (UniqueName: \"kubernetes.io/projected/8adc4254-ad10-4335-a365-876324d1af24-kube-api-access-pzl69\") pod \"8adc4254-ad10-4335-a365-876324d1af24\" (UID: \"8adc4254-ad10-4335-a365-876324d1af24\") " Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.758378 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8adc4254-ad10-4335-a365-876324d1af24-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8adc4254-ad10-4335-a365-876324d1af24" (UID: "8adc4254-ad10-4335-a365-876324d1af24"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.762223 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8adc4254-ad10-4335-a365-876324d1af24-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8adc4254-ad10-4335-a365-876324d1af24" (UID: "8adc4254-ad10-4335-a365-876324d1af24"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.765225 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8adc4254-ad10-4335-a365-876324d1af24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8adc4254-ad10-4335-a365-876324d1af24" (UID: "8adc4254-ad10-4335-a365-876324d1af24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.768136 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8adc4254-ad10-4335-a365-876324d1af24-kube-api-access-pzl69" (OuterVolumeSpecName: "kube-api-access-pzl69") pod "8adc4254-ad10-4335-a365-876324d1af24" (UID: "8adc4254-ad10-4335-a365-876324d1af24"). InnerVolumeSpecName "kube-api-access-pzl69". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.859115 4632 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8adc4254-ad10-4335-a365-876324d1af24-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.859417 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8adc4254-ad10-4335-a365-876324d1af24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.859427 4632 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8adc4254-ad10-4335-a365-876324d1af24-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.859436 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzl69\" (UniqueName: \"kubernetes.io/projected/8adc4254-ad10-4335-a365-876324d1af24-kube-api-access-pzl69\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:01 crc kubenswrapper[4632]: I0313 10:27:01.920378 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 13 10:27:01 crc kubenswrapper[4632]: W0313 10:27:01.934092 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaef9680f_df77_4e2e_ac53_9d7530c2270c.slice/crio-d29ea12656ced9b9702cd194ca8790d076046a2079e8c6f2febbe1128ab32a1a WatchSource:0}: Error finding container d29ea12656ced9b9702cd194ca8790d076046a2079e8c6f2febbe1128ab32a1a: Status 404 returned error can't find the container with id d29ea12656ced9b9702cd194ca8790d076046a2079e8c6f2febbe1128ab32a1a Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.082083 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8adc4254-ad10-4335-a365-876324d1af24" path="/var/lib/kubelet/pods/8adc4254-ad10-4335-a365-876324d1af24/volumes" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.299051 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.310381 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.397607 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-config-data\") pod \"270ebc10-986f-4473-8a5e-9094de34ae98\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.397656 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-combined-ca-bundle\") pod \"270ebc10-986f-4473-8a5e-9094de34ae98\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.397674 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-httpd-config\") pod \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.397694 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-scripts\") pod \"270ebc10-986f-4473-8a5e-9094de34ae98\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.397713 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270ebc10-986f-4473-8a5e-9094de34ae98-log-httpd\") pod \"270ebc10-986f-4473-8a5e-9094de34ae98\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.397760 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz5nk\" (UniqueName: \"kubernetes.io/projected/270ebc10-986f-4473-8a5e-9094de34ae98-kube-api-access-jz5nk\") pod \"270ebc10-986f-4473-8a5e-9094de34ae98\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.397818 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270ebc10-986f-4473-8a5e-9094de34ae98-run-httpd\") pod \"270ebc10-986f-4473-8a5e-9094de34ae98\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.397857 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-internal-tls-certs\") pod \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.397885 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-combined-ca-bundle\") pod \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.397905 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbbzk\" (UniqueName: \"kubernetes.io/projected/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-kube-api-access-rbbzk\") pod \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\" (UID: 
\"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.397928 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-ovndb-tls-certs\") pod \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.397984 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-public-tls-certs\") pod \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.398032 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-config\") pod \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\" (UID: \"6c867fc1-05ed-46c3-99dc-71ef8a09dad3\") " Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.398054 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-sg-core-conf-yaml\") pod \"270ebc10-986f-4473-8a5e-9094de34ae98\" (UID: \"270ebc10-986f-4473-8a5e-9094de34ae98\") " Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.399164 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/270ebc10-986f-4473-8a5e-9094de34ae98-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "270ebc10-986f-4473-8a5e-9094de34ae98" (UID: "270ebc10-986f-4473-8a5e-9094de34ae98"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.403441 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/270ebc10-986f-4473-8a5e-9094de34ae98-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "270ebc10-986f-4473-8a5e-9094de34ae98" (UID: "270ebc10-986f-4473-8a5e-9094de34ae98"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.407091 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-scripts" (OuterVolumeSpecName: "scripts") pod "270ebc10-986f-4473-8a5e-9094de34ae98" (UID: "270ebc10-986f-4473-8a5e-9094de34ae98"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.409397 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-kube-api-access-rbbzk" (OuterVolumeSpecName: "kube-api-access-rbbzk") pod "6c867fc1-05ed-46c3-99dc-71ef8a09dad3" (UID: "6c867fc1-05ed-46c3-99dc-71ef8a09dad3"). InnerVolumeSpecName "kube-api-access-rbbzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.412417 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/270ebc10-986f-4473-8a5e-9094de34ae98-kube-api-access-jz5nk" (OuterVolumeSpecName: "kube-api-access-jz5nk") pod "270ebc10-986f-4473-8a5e-9094de34ae98" (UID: "270ebc10-986f-4473-8a5e-9094de34ae98"). 
InnerVolumeSpecName "kube-api-access-jz5nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.413345 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6c867fc1-05ed-46c3-99dc-71ef8a09dad3" (UID: "6c867fc1-05ed-46c3-99dc-71ef8a09dad3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.488578 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "270ebc10-986f-4473-8a5e-9094de34ae98" (UID: "270ebc10-986f-4473-8a5e-9094de34ae98"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.497634 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-config-data" (OuterVolumeSpecName: "config-data") pod "270ebc10-986f-4473-8a5e-9094de34ae98" (UID: "270ebc10-986f-4473-8a5e-9094de34ae98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.499859 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbbzk\" (UniqueName: \"kubernetes.io/projected/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-kube-api-access-rbbzk\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.499896 4632 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.499906 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.499914 4632 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.499922 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.499931 4632 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270ebc10-986f-4473-8a5e-9094de34ae98-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.499942 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz5nk\" (UniqueName: \"kubernetes.io/projected/270ebc10-986f-4473-8a5e-9094de34ae98-kube-api-access-jz5nk\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.499961 4632 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270ebc10-986f-4473-8a5e-9094de34ae98-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:02 crc 
kubenswrapper[4632]: I0313 10:27:02.524182 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c867fc1-05ed-46c3-99dc-71ef8a09dad3" (UID: "6c867fc1-05ed-46c3-99dc-71ef8a09dad3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.543881 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-config" (OuterVolumeSpecName: "config") pod "6c867fc1-05ed-46c3-99dc-71ef8a09dad3" (UID: "6c867fc1-05ed-46c3-99dc-71ef8a09dad3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.553097 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "270ebc10-986f-4473-8a5e-9094de34ae98" (UID: "270ebc10-986f-4473-8a5e-9094de34ae98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.565863 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6c867fc1-05ed-46c3-99dc-71ef8a09dad3" (UID: "6c867fc1-05ed-46c3-99dc-71ef8a09dad3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.572005 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6c867fc1-05ed-46c3-99dc-71ef8a09dad3" (UID: "6c867fc1-05ed-46c3-99dc-71ef8a09dad3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.601673 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270ebc10-986f-4473-8a5e-9094de34ae98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.601716 4632 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.601732 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.601743 4632 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.601757 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.612746 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6c867fc1-05ed-46c3-99dc-71ef8a09dad3" (UID: "6c867fc1-05ed-46c3-99dc-71ef8a09dad3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.682481 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270ebc10-986f-4473-8a5e-9094de34ae98","Type":"ContainerDied","Data":"c4fcb786f7a33daa32bea87a76b7b56e9f86402051990ca301fe80823cca805f"} Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.682542 4632 scope.go:117] "RemoveContainer" containerID="d138d976167695fe9d299247eefcff55845f7ad27e84fc81cc086274294f2e51" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.682568 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.684539 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"aef9680f-df77-4e2e-ac53-9d7530c2270c","Type":"ContainerStarted","Data":"d29ea12656ced9b9702cd194ca8790d076046a2079e8c6f2febbe1128ab32a1a"} Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.688717 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.689496 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64bdffbb5c-mpfvf" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.692351 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bdffbb5c-mpfvf" event={"ID":"6c867fc1-05ed-46c3-99dc-71ef8a09dad3","Type":"ContainerDied","Data":"bb71081b64258f79a4055c8e129128f47654fe94235aa2a730194da521f70fe1"} Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.702293 4632 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8adc4254-ad10-4335-a365-876324d1af24" podUID="aef9680f-df77-4e2e-ac53-9d7530c2270c" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.703239 4632 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c867fc1-05ed-46c3-99dc-71ef8a09dad3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.722857 4632 scope.go:117] "RemoveContainer" containerID="b63cc4f80efbb7b17b044808a5b6c8d5aa98b9e2ae8e38ab95a55c4e3ba911d1" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.816239 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.830710 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.840126 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:27:02 crc kubenswrapper[4632]: E0313 10:27:02.840623 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c867fc1-05ed-46c3-99dc-71ef8a09dad3" containerName="neutron-httpd" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.840646 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c867fc1-05ed-46c3-99dc-71ef8a09dad3" containerName="neutron-httpd" Mar 13 10:27:02 crc kubenswrapper[4632]: E0313 10:27:02.840665 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c867fc1-05ed-46c3-99dc-71ef8a09dad3" containerName="neutron-api" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.840674 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c867fc1-05ed-46c3-99dc-71ef8a09dad3" containerName="neutron-api" Mar 13 10:27:02 crc kubenswrapper[4632]: E0313 10:27:02.840718 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270ebc10-986f-4473-8a5e-9094de34ae98" containerName="ceilometer-central-agent" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.840726 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="270ebc10-986f-4473-8a5e-9094de34ae98" containerName="ceilometer-central-agent" Mar 13 10:27:02 crc kubenswrapper[4632]: E0313 10:27:02.840738 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270ebc10-986f-4473-8a5e-9094de34ae98" containerName="sg-core" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.840746 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="270ebc10-986f-4473-8a5e-9094de34ae98" containerName="sg-core" Mar 13 10:27:02 crc kubenswrapper[4632]: E0313 10:27:02.840768 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270ebc10-986f-4473-8a5e-9094de34ae98" containerName="ceilometer-notification-agent" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.840776 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="270ebc10-986f-4473-8a5e-9094de34ae98" containerName="ceilometer-notification-agent" Mar 13 
10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.841018 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="270ebc10-986f-4473-8a5e-9094de34ae98" containerName="ceilometer-notification-agent" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.841042 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="270ebc10-986f-4473-8a5e-9094de34ae98" containerName="ceilometer-central-agent" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.841057 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c867fc1-05ed-46c3-99dc-71ef8a09dad3" containerName="neutron-api" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.841067 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="270ebc10-986f-4473-8a5e-9094de34ae98" containerName="sg-core" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.841083 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c867fc1-05ed-46c3-99dc-71ef8a09dad3" containerName="neutron-httpd" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.850316 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64bdffbb5c-mpfvf"] Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.850494 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.853257 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.853528 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.857794 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-64bdffbb5c-mpfvf"] Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.862079 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.881683 4632 scope.go:117] "RemoveContainer" containerID="27c121915dbbdfc336d1bc55bed50eb5edaf76e1bc92f4f6b5e249f4ffe5098a" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.907284 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-run-httpd\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.907352 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-log-httpd\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.907407 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.907430 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-combined-ca-bundle\") 
pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.907472 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcjwv\" (UniqueName: \"kubernetes.io/projected/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-kube-api-access-xcjwv\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.908329 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-config-data\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.908368 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-scripts\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.935210 4632 scope.go:117] "RemoveContainer" containerID="bbc256375bc79a61ff656574ec8a596aed3314e7ad4cd2f7fcf6a7462aee3274" Mar 13 10:27:02 crc kubenswrapper[4632]: I0313 10:27:02.977896 4632 scope.go:117] "RemoveContainer" containerID="027b2c4436a3d137f7ef6a7921904bf128e17aa7812143af60d4d11a546759da" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.009811 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-config-data\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.009903 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-scripts\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.010002 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-run-httpd\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.010028 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-log-httpd\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.010072 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.010088 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.010119 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcjwv\" (UniqueName: \"kubernetes.io/projected/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-kube-api-access-xcjwv\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.011429 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-log-httpd\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.015431 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-run-httpd\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.021407 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.021482 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-scripts\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.023620 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-config-data\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.049913 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.054423 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcjwv\" (UniqueName: \"kubernetes.io/projected/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-kube-api-access-xcjwv\") pod \"ceilometer-0\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " pod="openstack/ceilometer-0" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.178217 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.334731 4632 scope.go:117] "RemoveContainer" containerID="746bd1f1584c6b468985171d618d35f15871608c045fd5e9f4070c7ace66e505" Mar 13 10:27:03 crc kubenswrapper[4632]: I0313 10:27:03.825200 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:27:04 crc kubenswrapper[4632]: I0313 10:27:04.059153 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="270ebc10-986f-4473-8a5e-9094de34ae98" path="/var/lib/kubelet/pods/270ebc10-986f-4473-8a5e-9094de34ae98/volumes" Mar 13 10:27:04 crc kubenswrapper[4632]: I0313 10:27:04.060143 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c867fc1-05ed-46c3-99dc-71ef8a09dad3" path="/var/lib/kubelet/pods/6c867fc1-05ed-46c3-99dc-71ef8a09dad3/volumes" Mar 13 10:27:04 crc kubenswrapper[4632]: I0313 10:27:04.719487 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c","Type":"ContainerStarted","Data":"bb00bf460a4849cc1a7c1bad8a739981e87c032a18a9222632d57abbccea8858"} Mar 13 10:27:04 crc kubenswrapper[4632]: I0313 10:27:04.720113 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c","Type":"ContainerStarted","Data":"df15286148314ab907f4a05031eefbb838636621ee25cc2f368e3d56ae19621b"} Mar 13 10:27:05 crc kubenswrapper[4632]: I0313 10:27:05.736622 4632 generic.go:334] "Generic (PLEG): container finished" podID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerID="dc4a058f6feb7822333693352f32f5677ff03988b7b5b71005c85c4bf733b402" exitCode=137 Mar 13 10:27:05 crc kubenswrapper[4632]: I0313 10:27:05.737003 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb5f7878-ng2k2" event={"ID":"3e4afb91-ce26-4325-89c9-2542da2ec48a","Type":"ContainerDied","Data":"dc4a058f6feb7822333693352f32f5677ff03988b7b5b71005c85c4bf733b402"} Mar 13 10:27:05 crc kubenswrapper[4632]: I0313 10:27:05.758043 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c","Type":"ContainerStarted","Data":"aa0a9edf7c00bb4d08cf1a3f2565b5016be14ce1312e093e1c44112d2d594f42"} Mar 13 10:27:05 crc kubenswrapper[4632]: I0313 10:27:05.758104 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c","Type":"ContainerStarted","Data":"c7001d72ce189e15496046472a90b656a4129de71ad96c6f49a1d6b92862a990"} Mar 13 10:27:05 crc kubenswrapper[4632]: I0313 10:27:05.813834 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 13 10:27:06 crc kubenswrapper[4632]: I0313 10:27:06.773423 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb5f7878-ng2k2" event={"ID":"3e4afb91-ce26-4325-89c9-2542da2ec48a","Type":"ContainerStarted","Data":"c9dfdd84c36e6ac95b45a488b62e176636bdecfbe3a88d3f5d2058d92ebbacdd"} Mar 13 10:27:06 crc kubenswrapper[4632]: I0313 10:27:06.804393 4632 generic.go:334] "Generic (PLEG): container finished" podID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerID="8ce0185281fb59d0c6bda2b2c484ad3711b4bd3b729b4b8677e75ca6b8e1f739" exitCode=137 Mar 13 10:27:06 crc kubenswrapper[4632]: I0313 10:27:06.808692 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-689764498d-rg7vt" 
event={"ID":"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c","Type":"ContainerDied","Data":"8ce0185281fb59d0c6bda2b2c484ad3711b4bd3b729b4b8677e75ca6b8e1f739"} Mar 13 10:27:06 crc kubenswrapper[4632]: I0313 10:27:06.809572 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-689764498d-rg7vt" event={"ID":"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c","Type":"ContainerStarted","Data":"433c9aa5a02161c4bc7228b52cc460020479cbbb899bc6549755a59b8ad796f4"} Mar 13 10:27:09 crc kubenswrapper[4632]: I0313 10:27:09.847825 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c","Type":"ContainerStarted","Data":"d67a8223c96cadeaa871fcaaaad472258eb768daca2821f6757940c48f3eafd6"} Mar 13 10:27:09 crc kubenswrapper[4632]: I0313 10:27:09.875322 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.701995889 podStartE2EDuration="7.875300214s" podCreationTimestamp="2026-03-13 10:27:02 +0000 UTC" firstStartedPulling="2026-03-13 10:27:03.831548964 +0000 UTC m=+1397.854079097" lastFinishedPulling="2026-03-13 10:27:09.004853289 +0000 UTC m=+1403.027383422" observedRunningTime="2026-03-13 10:27:09.868418862 +0000 UTC m=+1403.890948995" watchObservedRunningTime="2026-03-13 10:27:09.875300214 +0000 UTC m=+1403.897830347" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.244016 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7dbf8b9ddc-6p5vh"] Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.245639 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.266743 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.267601 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.269400 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.309144 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7dbf8b9ddc-6p5vh"] Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.384552 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2gqw\" (UniqueName: \"kubernetes.io/projected/03ca050c-63a7-4b37-91fe-fe5c322cca78-kube-api-access-p2gqw\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.384882 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ca050c-63a7-4b37-91fe-fe5c322cca78-combined-ca-bundle\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.385124 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ca050c-63a7-4b37-91fe-fe5c322cca78-internal-tls-certs\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: 
\"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.385352 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03ca050c-63a7-4b37-91fe-fe5c322cca78-etc-swift\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.385488 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ca050c-63a7-4b37-91fe-fe5c322cca78-public-tls-certs\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.385639 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ca050c-63a7-4b37-91fe-fe5c322cca78-config-data\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.385796 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03ca050c-63a7-4b37-91fe-fe5c322cca78-run-httpd\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.385966 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03ca050c-63a7-4b37-91fe-fe5c322cca78-log-httpd\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.461762 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.461822 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.487779 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ca050c-63a7-4b37-91fe-fe5c322cca78-public-tls-certs\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.487839 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ca050c-63a7-4b37-91fe-fe5c322cca78-config-data\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: 
\"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.487875 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03ca050c-63a7-4b37-91fe-fe5c322cca78-run-httpd\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.487929 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03ca050c-63a7-4b37-91fe-fe5c322cca78-log-httpd\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.488191 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2gqw\" (UniqueName: \"kubernetes.io/projected/03ca050c-63a7-4b37-91fe-fe5c322cca78-kube-api-access-p2gqw\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.488227 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ca050c-63a7-4b37-91fe-fe5c322cca78-combined-ca-bundle\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.488253 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ca050c-63a7-4b37-91fe-fe5c322cca78-internal-tls-certs\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.488303 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03ca050c-63a7-4b37-91fe-fe5c322cca78-etc-swift\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.489438 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03ca050c-63a7-4b37-91fe-fe5c322cca78-run-httpd\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.489499 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03ca050c-63a7-4b37-91fe-fe5c322cca78-log-httpd\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.496192 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ca050c-63a7-4b37-91fe-fe5c322cca78-internal-tls-certs\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 
10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.497349 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03ca050c-63a7-4b37-91fe-fe5c322cca78-etc-swift\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.497546 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ca050c-63a7-4b37-91fe-fe5c322cca78-public-tls-certs\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.518188 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2gqw\" (UniqueName: \"kubernetes.io/projected/03ca050c-63a7-4b37-91fe-fe5c322cca78-kube-api-access-p2gqw\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.519470 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ca050c-63a7-4b37-91fe-fe5c322cca78-config-data\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.523779 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ca050c-63a7-4b37-91fe-fe5c322cca78-combined-ca-bundle\") pod \"swift-proxy-7dbf8b9ddc-6p5vh\" (UID: \"03ca050c-63a7-4b37-91fe-fe5c322cca78\") " pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.612042 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.805426 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.864249 4632 generic.go:334] "Generic (PLEG): container finished" podID="58677e2e-9fc6-4e50-b342-e912afa8d969" containerID="3a8d9431bb58dc2e36bce7009280ffed0639f98e73ca93dba3c41c03d94fb14f" exitCode=137 Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.864305 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58677e2e-9fc6-4e50-b342-e912afa8d969","Type":"ContainerDied","Data":"3a8d9431bb58dc2e36bce7009280ffed0639f98e73ca93dba3c41c03d94fb14f"} Mar 13 10:27:10 crc kubenswrapper[4632]: I0313 10:27:10.864534 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 13 10:27:11 crc kubenswrapper[4632]: I0313 10:27:11.875449 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerName="ceilometer-central-agent" containerID="cri-o://bb00bf460a4849cc1a7c1bad8a739981e87c032a18a9222632d57abbccea8858" gracePeriod=30 Mar 13 10:27:11 crc kubenswrapper[4632]: I0313 10:27:11.875508 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerName="sg-core" containerID="cri-o://aa0a9edf7c00bb4d08cf1a3f2565b5016be14ce1312e093e1c44112d2d594f42" gracePeriod=30 Mar 13 10:27:11 crc kubenswrapper[4632]: I0313 10:27:11.875543 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerName="ceilometer-notification-agent" containerID="cri-o://c7001d72ce189e15496046472a90b656a4129de71ad96c6f49a1d6b92862a990" gracePeriod=30 Mar 13 10:27:11 crc kubenswrapper[4632]: I0313 10:27:11.875530 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerName="proxy-httpd" containerID="cri-o://d67a8223c96cadeaa871fcaaaad472258eb768daca2821f6757940c48f3eafd6" gracePeriod=30 Mar 13 10:27:12 crc kubenswrapper[4632]: I0313 10:27:12.899186 4632 generic.go:334] "Generic (PLEG): container finished" podID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerID="d67a8223c96cadeaa871fcaaaad472258eb768daca2821f6757940c48f3eafd6" exitCode=0 Mar 13 10:27:12 crc kubenswrapper[4632]: I0313 10:27:12.899543 4632 generic.go:334] "Generic (PLEG): container finished" podID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerID="aa0a9edf7c00bb4d08cf1a3f2565b5016be14ce1312e093e1c44112d2d594f42" exitCode=2 Mar 13 10:27:12 crc kubenswrapper[4632]: I0313 10:27:12.899562 4632 generic.go:334] "Generic (PLEG): container finished" podID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerID="c7001d72ce189e15496046472a90b656a4129de71ad96c6f49a1d6b92862a990" exitCode=0 Mar 13 10:27:12 crc kubenswrapper[4632]: I0313 10:27:12.899572 4632 generic.go:334] "Generic (PLEG): container finished" podID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerID="bb00bf460a4849cc1a7c1bad8a739981e87c032a18a9222632d57abbccea8858" exitCode=0 Mar 13 10:27:12 crc kubenswrapper[4632]: I0313 10:27:12.899363 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c","Type":"ContainerDied","Data":"d67a8223c96cadeaa871fcaaaad472258eb768daca2821f6757940c48f3eafd6"} Mar 13 10:27:12 crc kubenswrapper[4632]: I0313 10:27:12.899616 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c","Type":"ContainerDied","Data":"aa0a9edf7c00bb4d08cf1a3f2565b5016be14ce1312e093e1c44112d2d594f42"} Mar 13 10:27:12 crc kubenswrapper[4632]: I0313 10:27:12.899636 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c","Type":"ContainerDied","Data":"c7001d72ce189e15496046472a90b656a4129de71ad96c6f49a1d6b92862a990"} Mar 13 10:27:12 crc kubenswrapper[4632]: I0313 10:27:12.899651 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c","Type":"ContainerDied","Data":"bb00bf460a4849cc1a7c1bad8a739981e87c032a18a9222632d57abbccea8858"} Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.208431 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7f9df5b5b5-q6dp2"] Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.213146 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7f9df5b5b5-q6dp2" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.221826 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.229462 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-vbbdq" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.229923 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.243984 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7f9df5b5b5-q6dp2"] Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.353555 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-config-data\") pod \"heat-engine-7f9df5b5b5-q6dp2\" (UID: \"757b852e-068c-4885-99b8-af2e6f23e445\") " pod="openstack/heat-engine-7f9df5b5b5-q6dp2" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.353606 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-config-data-custom\") pod \"heat-engine-7f9df5b5b5-q6dp2\" (UID: \"757b852e-068c-4885-99b8-af2e6f23e445\") " pod="openstack/heat-engine-7f9df5b5b5-q6dp2" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.353655 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-combined-ca-bundle\") pod \"heat-engine-7f9df5b5b5-q6dp2\" (UID: \"757b852e-068c-4885-99b8-af2e6f23e445\") " pod="openstack/heat-engine-7f9df5b5b5-q6dp2" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.353842 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5895\" (UniqueName: 
\"kubernetes.io/projected/757b852e-068c-4885-99b8-af2e6f23e445-kube-api-access-d5895\") pod \"heat-engine-7f9df5b5b5-q6dp2\" (UID: \"757b852e-068c-4885-99b8-af2e6f23e445\") " pod="openstack/heat-engine-7f9df5b5b5-q6dp2" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.439791 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7888df55c7-mw5p4"] Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.441812 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.455263 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5895\" (UniqueName: \"kubernetes.io/projected/757b852e-068c-4885-99b8-af2e6f23e445-kube-api-access-d5895\") pod \"heat-engine-7f9df5b5b5-q6dp2\" (UID: \"757b852e-068c-4885-99b8-af2e6f23e445\") " pod="openstack/heat-engine-7f9df5b5b5-q6dp2" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.455350 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-config-data\") pod \"heat-engine-7f9df5b5b5-q6dp2\" (UID: \"757b852e-068c-4885-99b8-af2e6f23e445\") " pod="openstack/heat-engine-7f9df5b5b5-q6dp2" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.455377 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-config-data-custom\") pod \"heat-engine-7f9df5b5b5-q6dp2\" (UID: \"757b852e-068c-4885-99b8-af2e6f23e445\") " pod="openstack/heat-engine-7f9df5b5b5-q6dp2" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.455430 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-combined-ca-bundle\") pod \"heat-engine-7f9df5b5b5-q6dp2\" (UID: \"757b852e-068c-4885-99b8-af2e6f23e445\") " pod="openstack/heat-engine-7f9df5b5b5-q6dp2" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.465335 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-config-data\") pod \"heat-engine-7f9df5b5b5-q6dp2\" (UID: \"757b852e-068c-4885-99b8-af2e6f23e445\") " pod="openstack/heat-engine-7f9df5b5b5-q6dp2" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.466570 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-combined-ca-bundle\") pod \"heat-engine-7f9df5b5b5-q6dp2\" (UID: \"757b852e-068c-4885-99b8-af2e6f23e445\") " pod="openstack/heat-engine-7f9df5b5b5-q6dp2" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.548834 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7888df55c7-mw5p4"] Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.549806 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-config-data-custom\") pod \"heat-engine-7f9df5b5b5-q6dp2\" (UID: \"757b852e-068c-4885-99b8-af2e6f23e445\") " pod="openstack/heat-engine-7f9df5b5b5-q6dp2" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.567014 4632 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-ovsdbserver-nb\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.567168 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-dns-svc\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.567264 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-dns-swift-storage-0\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.567304 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-ovsdbserver-sb\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.567380 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-config\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.567446 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6mzz\" (UniqueName: \"kubernetes.io/projected/904f04cd-8110-4637-8bb4-67c4b83e189b-kube-api-access-k6mzz\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.651316 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5895\" (UniqueName: \"kubernetes.io/projected/757b852e-068c-4885-99b8-af2e6f23e445-kube-api-access-d5895\") pod \"heat-engine-7f9df5b5b5-q6dp2\" (UID: \"757b852e-068c-4885-99b8-af2e6f23e445\") " pod="openstack/heat-engine-7f9df5b5b5-q6dp2" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.674689 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6mzz\" (UniqueName: \"kubernetes.io/projected/904f04cd-8110-4637-8bb4-67c4b83e189b-kube-api-access-k6mzz\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.674751 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-ovsdbserver-nb\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc 
kubenswrapper[4632]: I0313 10:27:13.674868 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-dns-svc\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.674962 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-dns-swift-storage-0\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.674989 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-ovsdbserver-sb\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.675041 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-config\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.683905 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-config\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.685092 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-dns-swift-storage-0\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.687727 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-ovsdbserver-sb\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.690830 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-dns-svc\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.703874 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-d856c56c-cmd2q"] Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.711612 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-ovsdbserver-nb\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " 
pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.730687 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-d856c56c-cmd2q" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.743806 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.744734 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6mzz\" (UniqueName: \"kubernetes.io/projected/904f04cd-8110-4637-8bb4-67c4b83e189b-kube-api-access-k6mzz\") pod \"dnsmasq-dns-7888df55c7-mw5p4\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.825078 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d856c56c-cmd2q"] Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.835135 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7f9df5b5b5-q6dp2" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.858922 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-b547848c4-bn5vs"] Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.867702 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b547848c4-bn5vs" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.875314 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.881516 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prc4f\" (UniqueName: \"kubernetes.io/projected/5d10747e-ba77-4986-9d4b-636fcbf823ab-kube-api-access-prc4f\") pod \"heat-cfnapi-d856c56c-cmd2q\" (UID: \"5d10747e-ba77-4986-9d4b-636fcbf823ab\") " pod="openstack/heat-cfnapi-d856c56c-cmd2q" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.881573 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-config-data-custom\") pod \"heat-cfnapi-d856c56c-cmd2q\" (UID: \"5d10747e-ba77-4986-9d4b-636fcbf823ab\") " pod="openstack/heat-cfnapi-d856c56c-cmd2q" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.881655 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-config-data\") pod \"heat-cfnapi-d856c56c-cmd2q\" (UID: \"5d10747e-ba77-4986-9d4b-636fcbf823ab\") " pod="openstack/heat-cfnapi-d856c56c-cmd2q" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.881748 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-combined-ca-bundle\") pod \"heat-cfnapi-d856c56c-cmd2q\" (UID: \"5d10747e-ba77-4986-9d4b-636fcbf823ab\") " pod="openstack/heat-cfnapi-d856c56c-cmd2q" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.897226 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-b547848c4-bn5vs"] Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.967458 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.983603 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prc4f\" (UniqueName: \"kubernetes.io/projected/5d10747e-ba77-4986-9d4b-636fcbf823ab-kube-api-access-prc4f\") pod \"heat-cfnapi-d856c56c-cmd2q\" (UID: \"5d10747e-ba77-4986-9d4b-636fcbf823ab\") " pod="openstack/heat-cfnapi-d856c56c-cmd2q" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.986694 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-config-data-custom\") pod \"heat-cfnapi-d856c56c-cmd2q\" (UID: \"5d10747e-ba77-4986-9d4b-636fcbf823ab\") " pod="openstack/heat-cfnapi-d856c56c-cmd2q" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.986840 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn98r\" (UniqueName: \"kubernetes.io/projected/07914020-653d-4509-9f60-22726224c7c6-kube-api-access-nn98r\") pod \"heat-api-b547848c4-bn5vs\" (UID: \"07914020-653d-4509-9f60-22726224c7c6\") " pod="openstack/heat-api-b547848c4-bn5vs" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.986883 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-config-data\") pod \"heat-cfnapi-d856c56c-cmd2q\" (UID: \"5d10747e-ba77-4986-9d4b-636fcbf823ab\") " pod="openstack/heat-cfnapi-d856c56c-cmd2q" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.986968 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-config-data-custom\") pod \"heat-api-b547848c4-bn5vs\" (UID: \"07914020-653d-4509-9f60-22726224c7c6\") " pod="openstack/heat-api-b547848c4-bn5vs" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.987037 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-config-data\") pod \"heat-api-b547848c4-bn5vs\" (UID: \"07914020-653d-4509-9f60-22726224c7c6\") " pod="openstack/heat-api-b547848c4-bn5vs" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.987137 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-combined-ca-bundle\") pod \"heat-cfnapi-d856c56c-cmd2q\" (UID: \"5d10747e-ba77-4986-9d4b-636fcbf823ab\") " pod="openstack/heat-cfnapi-d856c56c-cmd2q" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.987210 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-combined-ca-bundle\") pod \"heat-api-b547848c4-bn5vs\" (UID: \"07914020-653d-4509-9f60-22726224c7c6\") " pod="openstack/heat-api-b547848c4-bn5vs" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.997881 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-config-data-custom\") pod \"heat-cfnapi-d856c56c-cmd2q\" (UID: 
\"5d10747e-ba77-4986-9d4b-636fcbf823ab\") " pod="openstack/heat-cfnapi-d856c56c-cmd2q" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.998401 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-config-data\") pod \"heat-cfnapi-d856c56c-cmd2q\" (UID: \"5d10747e-ba77-4986-9d4b-636fcbf823ab\") " pod="openstack/heat-cfnapi-d856c56c-cmd2q" Mar 13 10:27:13 crc kubenswrapper[4632]: I0313 10:27:13.998856 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-combined-ca-bundle\") pod \"heat-cfnapi-d856c56c-cmd2q\" (UID: \"5d10747e-ba77-4986-9d4b-636fcbf823ab\") " pod="openstack/heat-cfnapi-d856c56c-cmd2q" Mar 13 10:27:14 crc kubenswrapper[4632]: I0313 10:27:14.004576 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prc4f\" (UniqueName: \"kubernetes.io/projected/5d10747e-ba77-4986-9d4b-636fcbf823ab-kube-api-access-prc4f\") pod \"heat-cfnapi-d856c56c-cmd2q\" (UID: \"5d10747e-ba77-4986-9d4b-636fcbf823ab\") " pod="openstack/heat-cfnapi-d856c56c-cmd2q" Mar 13 10:27:14 crc kubenswrapper[4632]: I0313 10:27:14.091294 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn98r\" (UniqueName: \"kubernetes.io/projected/07914020-653d-4509-9f60-22726224c7c6-kube-api-access-nn98r\") pod \"heat-api-b547848c4-bn5vs\" (UID: \"07914020-653d-4509-9f60-22726224c7c6\") " pod="openstack/heat-api-b547848c4-bn5vs" Mar 13 10:27:14 crc kubenswrapper[4632]: I0313 10:27:14.091367 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-config-data-custom\") pod \"heat-api-b547848c4-bn5vs\" (UID: \"07914020-653d-4509-9f60-22726224c7c6\") " pod="openstack/heat-api-b547848c4-bn5vs" Mar 13 10:27:14 crc kubenswrapper[4632]: I0313 10:27:14.091407 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-config-data\") pod \"heat-api-b547848c4-bn5vs\" (UID: \"07914020-653d-4509-9f60-22726224c7c6\") " pod="openstack/heat-api-b547848c4-bn5vs" Mar 13 10:27:14 crc kubenswrapper[4632]: I0313 10:27:14.091477 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-combined-ca-bundle\") pod \"heat-api-b547848c4-bn5vs\" (UID: \"07914020-653d-4509-9f60-22726224c7c6\") " pod="openstack/heat-api-b547848c4-bn5vs" Mar 13 10:27:14 crc kubenswrapper[4632]: I0313 10:27:14.103533 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-combined-ca-bundle\") pod \"heat-api-b547848c4-bn5vs\" (UID: \"07914020-653d-4509-9f60-22726224c7c6\") " pod="openstack/heat-api-b547848c4-bn5vs" Mar 13 10:27:14 crc kubenswrapper[4632]: I0313 10:27:14.103739 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-config-data-custom\") pod \"heat-api-b547848c4-bn5vs\" (UID: \"07914020-653d-4509-9f60-22726224c7c6\") " pod="openstack/heat-api-b547848c4-bn5vs" Mar 13 10:27:14 crc 
kubenswrapper[4632]: I0313 10:27:14.104481 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-config-data\") pod \"heat-api-b547848c4-bn5vs\" (UID: \"07914020-653d-4509-9f60-22726224c7c6\") " pod="openstack/heat-api-b547848c4-bn5vs" Mar 13 10:27:14 crc kubenswrapper[4632]: I0313 10:27:14.115484 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn98r\" (UniqueName: \"kubernetes.io/projected/07914020-653d-4509-9f60-22726224c7c6-kube-api-access-nn98r\") pod \"heat-api-b547848c4-bn5vs\" (UID: \"07914020-653d-4509-9f60-22726224c7c6\") " pod="openstack/heat-api-b547848c4-bn5vs" Mar 13 10:27:14 crc kubenswrapper[4632]: I0313 10:27:14.143112 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-d856c56c-cmd2q" Mar 13 10:27:14 crc kubenswrapper[4632]: I0313 10:27:14.226033 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b547848c4-bn5vs" Mar 13 10:27:14 crc kubenswrapper[4632]: I0313 10:27:14.633228 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="58677e2e-9fc6-4e50-b342-e912afa8d969" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.172:8776/healthcheck\": dial tcp 10.217.0.172:8776: connect: connection refused" Mar 13 10:27:15 crc kubenswrapper[4632]: I0313 10:27:15.394762 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:27:15 crc kubenswrapper[4632]: I0313 10:27:15.395671 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:27:15 crc kubenswrapper[4632]: I0313 10:27:15.395964 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Mar 13 10:27:15 crc kubenswrapper[4632]: I0313 10:27:15.857265 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:27:15 crc kubenswrapper[4632]: I0313 10:27:15.857574 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.716393 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-wgv42"] Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.717823 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-wgv42" Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.746963 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wgv42"] Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.760913 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsxhh\" (UniqueName: \"kubernetes.io/projected/f0c32ed5-c3b0-45ea-99de-87c45cb1ba77-kube-api-access-nsxhh\") pod \"nova-api-db-create-wgv42\" (UID: \"f0c32ed5-c3b0-45ea-99de-87c45cb1ba77\") " pod="openstack/nova-api-db-create-wgv42" Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.761098 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0c32ed5-c3b0-45ea-99de-87c45cb1ba77-operator-scripts\") pod \"nova-api-db-create-wgv42\" (UID: \"f0c32ed5-c3b0-45ea-99de-87c45cb1ba77\") " pod="openstack/nova-api-db-create-wgv42" Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.817524 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-kswhw"] Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.821349 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kswhw" Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.832201 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kswhw"] Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.863544 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdg5s\" (UniqueName: \"kubernetes.io/projected/8e0fb1fc-c94a-44f0-a269-e7211c6fcfba-kube-api-access-jdg5s\") pod \"nova-cell0-db-create-kswhw\" (UID: \"8e0fb1fc-c94a-44f0-a269-e7211c6fcfba\") " pod="openstack/nova-cell0-db-create-kswhw" Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.863623 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0c32ed5-c3b0-45ea-99de-87c45cb1ba77-operator-scripts\") pod \"nova-api-db-create-wgv42\" (UID: \"f0c32ed5-c3b0-45ea-99de-87c45cb1ba77\") " pod="openstack/nova-api-db-create-wgv42" Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.863662 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e0fb1fc-c94a-44f0-a269-e7211c6fcfba-operator-scripts\") pod \"nova-cell0-db-create-kswhw\" (UID: \"8e0fb1fc-c94a-44f0-a269-e7211c6fcfba\") " pod="openstack/nova-cell0-db-create-kswhw" Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.872120 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsxhh\" (UniqueName: \"kubernetes.io/projected/f0c32ed5-c3b0-45ea-99de-87c45cb1ba77-kube-api-access-nsxhh\") pod \"nova-api-db-create-wgv42\" (UID: \"f0c32ed5-c3b0-45ea-99de-87c45cb1ba77\") " pod="openstack/nova-api-db-create-wgv42" Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.873463 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0c32ed5-c3b0-45ea-99de-87c45cb1ba77-operator-scripts\") pod \"nova-api-db-create-wgv42\" (UID: \"f0c32ed5-c3b0-45ea-99de-87c45cb1ba77\") " pod="openstack/nova-api-db-create-wgv42" Mar 13 10:27:18 crc 
kubenswrapper[4632]: I0313 10:27:18.905983 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsxhh\" (UniqueName: \"kubernetes.io/projected/f0c32ed5-c3b0-45ea-99de-87c45cb1ba77-kube-api-access-nsxhh\") pod \"nova-api-db-create-wgv42\" (UID: \"f0c32ed5-c3b0-45ea-99de-87c45cb1ba77\") " pod="openstack/nova-api-db-create-wgv42" Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.954573 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-fshjb"] Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.956205 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fshjb" Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.973680 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e0fb1fc-c94a-44f0-a269-e7211c6fcfba-operator-scripts\") pod \"nova-cell0-db-create-kswhw\" (UID: \"8e0fb1fc-c94a-44f0-a269-e7211c6fcfba\") " pod="openstack/nova-cell0-db-create-kswhw" Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.974077 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdg5s\" (UniqueName: \"kubernetes.io/projected/8e0fb1fc-c94a-44f0-a269-e7211c6fcfba-kube-api-access-jdg5s\") pod \"nova-cell0-db-create-kswhw\" (UID: \"8e0fb1fc-c94a-44f0-a269-e7211c6fcfba\") " pod="openstack/nova-cell0-db-create-kswhw" Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.975761 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e0fb1fc-c94a-44f0-a269-e7211c6fcfba-operator-scripts\") pod \"nova-cell0-db-create-kswhw\" (UID: \"8e0fb1fc-c94a-44f0-a269-e7211c6fcfba\") " pod="openstack/nova-cell0-db-create-kswhw" Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.978158 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f3f1-account-create-update-29g8s"] Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.979522 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f3f1-account-create-update-29g8s" Mar 13 10:27:18 crc kubenswrapper[4632]: I0313 10:27:18.981582 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.002124 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fshjb"] Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.026752 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f3f1-account-create-update-29g8s"] Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.037759 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdg5s\" (UniqueName: \"kubernetes.io/projected/8e0fb1fc-c94a-44f0-a269-e7211c6fcfba-kube-api-access-jdg5s\") pod \"nova-cell0-db-create-kswhw\" (UID: \"8e0fb1fc-c94a-44f0-a269-e7211c6fcfba\") " pod="openstack/nova-cell0-db-create-kswhw" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.054558 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-wgv42" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.093869 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09bd98be-9d10-4a53-8ef6-c4718b05c3f6-operator-scripts\") pod \"nova-cell1-db-create-fshjb\" (UID: \"09bd98be-9d10-4a53-8ef6-c4718b05c3f6\") " pod="openstack/nova-cell1-db-create-fshjb" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.093953 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qgng\" (UniqueName: \"kubernetes.io/projected/234a900d-887b-448c-8336-010107726c1e-kube-api-access-9qgng\") pod \"nova-api-f3f1-account-create-update-29g8s\" (UID: \"234a900d-887b-448c-8336-010107726c1e\") " pod="openstack/nova-api-f3f1-account-create-update-29g8s" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.094046 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl442\" (UniqueName: \"kubernetes.io/projected/09bd98be-9d10-4a53-8ef6-c4718b05c3f6-kube-api-access-cl442\") pod \"nova-cell1-db-create-fshjb\" (UID: \"09bd98be-9d10-4a53-8ef6-c4718b05c3f6\") " pod="openstack/nova-cell1-db-create-fshjb" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.095880 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/234a900d-887b-448c-8336-010107726c1e-operator-scripts\") pod \"nova-api-f3f1-account-create-update-29g8s\" (UID: \"234a900d-887b-448c-8336-010107726c1e\") " pod="openstack/nova-api-f3f1-account-create-update-29g8s" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.134654 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-86d4-account-create-update-5c7rj"] Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.142277 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-86d4-account-create-update-5c7rj" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.145229 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.160956 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-86d4-account-create-update-5c7rj"] Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.168582 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-kswhw" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.199435 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fx7d\" (UniqueName: \"kubernetes.io/projected/bbaf5a79-1c34-4518-afb9-19703fe6c45b-kube-api-access-5fx7d\") pod \"nova-cell0-86d4-account-create-update-5c7rj\" (UID: \"bbaf5a79-1c34-4518-afb9-19703fe6c45b\") " pod="openstack/nova-cell0-86d4-account-create-update-5c7rj" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.199552 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09bd98be-9d10-4a53-8ef6-c4718b05c3f6-operator-scripts\") pod \"nova-cell1-db-create-fshjb\" (UID: \"09bd98be-9d10-4a53-8ef6-c4718b05c3f6\") " pod="openstack/nova-cell1-db-create-fshjb" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.199617 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qgng\" (UniqueName: \"kubernetes.io/projected/234a900d-887b-448c-8336-010107726c1e-kube-api-access-9qgng\") pod \"nova-api-f3f1-account-create-update-29g8s\" (UID: \"234a900d-887b-448c-8336-010107726c1e\") " pod="openstack/nova-api-f3f1-account-create-update-29g8s" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.199701 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbaf5a79-1c34-4518-afb9-19703fe6c45b-operator-scripts\") pod \"nova-cell0-86d4-account-create-update-5c7rj\" (UID: \"bbaf5a79-1c34-4518-afb9-19703fe6c45b\") " pod="openstack/nova-cell0-86d4-account-create-update-5c7rj" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.199750 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl442\" (UniqueName: \"kubernetes.io/projected/09bd98be-9d10-4a53-8ef6-c4718b05c3f6-kube-api-access-cl442\") pod \"nova-cell1-db-create-fshjb\" (UID: \"09bd98be-9d10-4a53-8ef6-c4718b05c3f6\") " pod="openstack/nova-cell1-db-create-fshjb" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.199895 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/234a900d-887b-448c-8336-010107726c1e-operator-scripts\") pod \"nova-api-f3f1-account-create-update-29g8s\" (UID: \"234a900d-887b-448c-8336-010107726c1e\") " pod="openstack/nova-api-f3f1-account-create-update-29g8s" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.200476 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09bd98be-9d10-4a53-8ef6-c4718b05c3f6-operator-scripts\") pod \"nova-cell1-db-create-fshjb\" (UID: \"09bd98be-9d10-4a53-8ef6-c4718b05c3f6\") " pod="openstack/nova-cell1-db-create-fshjb" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.200706 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/234a900d-887b-448c-8336-010107726c1e-operator-scripts\") pod \"nova-api-f3f1-account-create-update-29g8s\" (UID: \"234a900d-887b-448c-8336-010107726c1e\") " pod="openstack/nova-api-f3f1-account-create-update-29g8s" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.230447 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qgng\" 
(UniqueName: \"kubernetes.io/projected/234a900d-887b-448c-8336-010107726c1e-kube-api-access-9qgng\") pod \"nova-api-f3f1-account-create-update-29g8s\" (UID: \"234a900d-887b-448c-8336-010107726c1e\") " pod="openstack/nova-api-f3f1-account-create-update-29g8s" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.245645 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl442\" (UniqueName: \"kubernetes.io/projected/09bd98be-9d10-4a53-8ef6-c4718b05c3f6-kube-api-access-cl442\") pod \"nova-cell1-db-create-fshjb\" (UID: \"09bd98be-9d10-4a53-8ef6-c4718b05c3f6\") " pod="openstack/nova-cell1-db-create-fshjb" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.302202 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fx7d\" (UniqueName: \"kubernetes.io/projected/bbaf5a79-1c34-4518-afb9-19703fe6c45b-kube-api-access-5fx7d\") pod \"nova-cell0-86d4-account-create-update-5c7rj\" (UID: \"bbaf5a79-1c34-4518-afb9-19703fe6c45b\") " pod="openstack/nova-cell0-86d4-account-create-update-5c7rj" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.302346 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbaf5a79-1c34-4518-afb9-19703fe6c45b-operator-scripts\") pod \"nova-cell0-86d4-account-create-update-5c7rj\" (UID: \"bbaf5a79-1c34-4518-afb9-19703fe6c45b\") " pod="openstack/nova-cell0-86d4-account-create-update-5c7rj" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.303282 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbaf5a79-1c34-4518-afb9-19703fe6c45b-operator-scripts\") pod \"nova-cell0-86d4-account-create-update-5c7rj\" (UID: \"bbaf5a79-1c34-4518-afb9-19703fe6c45b\") " pod="openstack/nova-cell0-86d4-account-create-update-5c7rj" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.315585 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fshjb" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.327315 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f3f1-account-create-update-29g8s" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.341523 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fx7d\" (UniqueName: \"kubernetes.io/projected/bbaf5a79-1c34-4518-afb9-19703fe6c45b-kube-api-access-5fx7d\") pod \"nova-cell0-86d4-account-create-update-5c7rj\" (UID: \"bbaf5a79-1c34-4518-afb9-19703fe6c45b\") " pod="openstack/nova-cell0-86d4-account-create-update-5c7rj" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.355373 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-2f8c-account-create-update-g4b8g"] Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.356932 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2f8c-account-create-update-g4b8g" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.360637 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.391075 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2f8c-account-create-update-g4b8g"] Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.409515 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrgfq\" (UniqueName: \"kubernetes.io/projected/8462be25-a577-476d-b54a-73790a8aa189-kube-api-access-rrgfq\") pod \"nova-cell1-2f8c-account-create-update-g4b8g\" (UID: \"8462be25-a577-476d-b54a-73790a8aa189\") " pod="openstack/nova-cell1-2f8c-account-create-update-g4b8g" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.409607 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8462be25-a577-476d-b54a-73790a8aa189-operator-scripts\") pod \"nova-cell1-2f8c-account-create-update-g4b8g\" (UID: \"8462be25-a577-476d-b54a-73790a8aa189\") " pod="openstack/nova-cell1-2f8c-account-create-update-g4b8g" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.463445 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-86d4-account-create-update-5c7rj" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.512202 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrgfq\" (UniqueName: \"kubernetes.io/projected/8462be25-a577-476d-b54a-73790a8aa189-kube-api-access-rrgfq\") pod \"nova-cell1-2f8c-account-create-update-g4b8g\" (UID: \"8462be25-a577-476d-b54a-73790a8aa189\") " pod="openstack/nova-cell1-2f8c-account-create-update-g4b8g" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.512363 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8462be25-a577-476d-b54a-73790a8aa189-operator-scripts\") pod \"nova-cell1-2f8c-account-create-update-g4b8g\" (UID: \"8462be25-a577-476d-b54a-73790a8aa189\") " pod="openstack/nova-cell1-2f8c-account-create-update-g4b8g" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.513598 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8462be25-a577-476d-b54a-73790a8aa189-operator-scripts\") pod \"nova-cell1-2f8c-account-create-update-g4b8g\" (UID: \"8462be25-a577-476d-b54a-73790a8aa189\") " pod="openstack/nova-cell1-2f8c-account-create-update-g4b8g" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.530924 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrgfq\" (UniqueName: \"kubernetes.io/projected/8462be25-a577-476d-b54a-73790a8aa189-kube-api-access-rrgfq\") pod \"nova-cell1-2f8c-account-create-update-g4b8g\" (UID: \"8462be25-a577-476d-b54a-73790a8aa189\") " pod="openstack/nova-cell1-2f8c-account-create-update-g4b8g" Mar 13 10:27:19 crc kubenswrapper[4632]: E0313 10:27:19.555902 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-openstackclient:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:27:19 crc 
kubenswrapper[4632]: E0313 10:27:19.556208 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-openstackclient:e43235cb19da04699a53f42b6a75afe9" Mar 13 10:27:19 crc kubenswrapper[4632]: E0313 10:27:19.556445 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-openstackclient:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n55dh85h67dh9h89h686h667h58ch57hc4h5c8h556hf9h567h5f5h66dh65dhc6hfh56bh655h67fh88h555h65h5dfh5b8h5d6h65fh684hdh6q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmjms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(aef9680f-df77-4e2e-ac53-9d7530c2270c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:27:19 crc kubenswrapper[4632]: E0313 10:27:19.559317 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="aef9680f-df77-4e2e-ac53-9d7530c2270c" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.632971 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="58677e2e-9fc6-4e50-b342-e912afa8d969" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.172:8776/healthcheck\": dial tcp 10.217.0.172:8776: connect: 
connection refused" Mar 13 10:27:19 crc kubenswrapper[4632]: I0313 10:27:19.737373 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2f8c-account-create-update-g4b8g" Mar 13 10:27:20 crc kubenswrapper[4632]: E0313 10:27:20.096552 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-openstackclient:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/openstackclient" podUID="aef9680f-df77-4e2e-ac53-9d7530c2270c" Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.372100 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.466769 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-config-data\") pod \"58677e2e-9fc6-4e50-b342-e912afa8d969\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.466852 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-combined-ca-bundle\") pod \"58677e2e-9fc6-4e50-b342-e912afa8d969\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.466906 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-config-data-custom\") pod \"58677e2e-9fc6-4e50-b342-e912afa8d969\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.467010 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd7tw\" (UniqueName: \"kubernetes.io/projected/58677e2e-9fc6-4e50-b342-e912afa8d969-kube-api-access-vd7tw\") pod \"58677e2e-9fc6-4e50-b342-e912afa8d969\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.467041 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58677e2e-9fc6-4e50-b342-e912afa8d969-etc-machine-id\") pod \"58677e2e-9fc6-4e50-b342-e912afa8d969\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.467069 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58677e2e-9fc6-4e50-b342-e912afa8d969-logs\") pod \"58677e2e-9fc6-4e50-b342-e912afa8d969\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.467152 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-scripts\") pod \"58677e2e-9fc6-4e50-b342-e912afa8d969\" (UID: \"58677e2e-9fc6-4e50-b342-e912afa8d969\") " Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.470308 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58677e2e-9fc6-4e50-b342-e912afa8d969-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod 
"58677e2e-9fc6-4e50-b342-e912afa8d969" (UID: "58677e2e-9fc6-4e50-b342-e912afa8d969"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.473608 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58677e2e-9fc6-4e50-b342-e912afa8d969-logs" (OuterVolumeSpecName: "logs") pod "58677e2e-9fc6-4e50-b342-e912afa8d969" (UID: "58677e2e-9fc6-4e50-b342-e912afa8d969"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.492924 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-scripts" (OuterVolumeSpecName: "scripts") pod "58677e2e-9fc6-4e50-b342-e912afa8d969" (UID: "58677e2e-9fc6-4e50-b342-e912afa8d969"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.493090 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "58677e2e-9fc6-4e50-b342-e912afa8d969" (UID: "58677e2e-9fc6-4e50-b342-e912afa8d969"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.493228 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58677e2e-9fc6-4e50-b342-e912afa8d969-kube-api-access-vd7tw" (OuterVolumeSpecName: "kube-api-access-vd7tw") pod "58677e2e-9fc6-4e50-b342-e912afa8d969" (UID: "58677e2e-9fc6-4e50-b342-e912afa8d969"). InnerVolumeSpecName "kube-api-access-vd7tw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.585107 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.585151 4632 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.585166 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd7tw\" (UniqueName: \"kubernetes.io/projected/58677e2e-9fc6-4e50-b342-e912afa8d969-kube-api-access-vd7tw\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.585180 4632 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58677e2e-9fc6-4e50-b342-e912afa8d969-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.585192 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58677e2e-9fc6-4e50-b342-e912afa8d969-logs\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.621868 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58677e2e-9fc6-4e50-b342-e912afa8d969" (UID: "58677e2e-9fc6-4e50-b342-e912afa8d969"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.690209 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.814523 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-config-data" (OuterVolumeSpecName: "config-data") pod "58677e2e-9fc6-4e50-b342-e912afa8d969" (UID: "58677e2e-9fc6-4e50-b342-e912afa8d969"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:20 crc kubenswrapper[4632]: I0313 10:27:20.902783 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58677e2e-9fc6-4e50-b342-e912afa8d969-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.058251 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.190167 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58677e2e-9fc6-4e50-b342-e912afa8d969","Type":"ContainerDied","Data":"8037b401a0baaaa45f09498066b3b722d38c4aef73b4ab3874c935fbc21eac6e"} Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.190231 4632 scope.go:117] "RemoveContainer" containerID="3a8d9431bb58dc2e36bce7009280ffed0639f98e73ca93dba3c41c03d94fb14f" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.190394 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.214564 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-run-httpd\") pod \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.214639 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-log-httpd\") pod \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.214674 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-scripts\") pod \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.214696 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-config-data\") pod \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.214876 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcjwv\" (UniqueName: \"kubernetes.io/projected/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-kube-api-access-xcjwv\") pod \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.215027 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-combined-ca-bundle\") pod \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.215060 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-sg-core-conf-yaml\") pod \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\" (UID: \"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c\") " Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.217758 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" (UID: "c4e63898-c65b-42c6-9ec5-3089ae7a8d8c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.225900 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7888df55c7-mw5p4"] Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.230867 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" (UID: "c4e63898-c65b-42c6-9ec5-3089ae7a8d8c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.252428 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-scripts" (OuterVolumeSpecName: "scripts") pod "c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" (UID: "c4e63898-c65b-42c6-9ec5-3089ae7a8d8c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.315782 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e63898-c65b-42c6-9ec5-3089ae7a8d8c","Type":"ContainerDied","Data":"df15286148314ab907f4a05031eefbb838636621ee25cc2f368e3d56ae19621b"} Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.316074 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.331003 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-kube-api-access-xcjwv" (OuterVolumeSpecName: "kube-api-access-xcjwv") pod "c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" (UID: "c4e63898-c65b-42c6-9ec5-3089ae7a8d8c"). InnerVolumeSpecName "kube-api-access-xcjwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.332635 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcjwv\" (UniqueName: \"kubernetes.io/projected/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-kube-api-access-xcjwv\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.368932 4632 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.368971 4632 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.368984 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.423384 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" (UID: "c4e63898-c65b-42c6-9ec5-3089ae7a8d8c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.470327 4632 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.552032 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.631365 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.668092 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 13 10:27:21 crc kubenswrapper[4632]: E0313 10:27:21.668585 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerName="ceilometer-central-agent" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.668603 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerName="ceilometer-central-agent" Mar 13 10:27:21 crc kubenswrapper[4632]: E0313 10:27:21.668628 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58677e2e-9fc6-4e50-b342-e912afa8d969" containerName="cinder-api-log" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.668636 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="58677e2e-9fc6-4e50-b342-e912afa8d969" containerName="cinder-api-log" Mar 13 10:27:21 crc kubenswrapper[4632]: E0313 10:27:21.668651 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerName="ceilometer-notification-agent" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.668659 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerName="ceilometer-notification-agent" Mar 13 10:27:21 crc kubenswrapper[4632]: E0313 10:27:21.668685 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58677e2e-9fc6-4e50-b342-e912afa8d969" containerName="cinder-api" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.668695 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="58677e2e-9fc6-4e50-b342-e912afa8d969" containerName="cinder-api" Mar 13 10:27:21 crc kubenswrapper[4632]: E0313 10:27:21.668723 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerName="proxy-httpd" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.668730 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerName="proxy-httpd" Mar 13 10:27:21 crc kubenswrapper[4632]: E0313 10:27:21.668745 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerName="sg-core" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.668752 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerName="sg-core" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.668984 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerName="proxy-httpd" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.669006 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="58677e2e-9fc6-4e50-b342-e912afa8d969" containerName="cinder-api" 
Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.669022 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerName="ceilometer-notification-agent" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.669041 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerName="sg-core" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.669053 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" containerName="ceilometer-central-agent" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.669070 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="58677e2e-9fc6-4e50-b342-e912afa8d969" containerName="cinder-api-log" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.676657 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.683359 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.683607 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.683788 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.720046 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.790274 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-config-data-custom\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.790325 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.790348 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.790370 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-config-data\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.790391 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6785ba8c-a47b-4851-945e-c07ccecb9911-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc 
kubenswrapper[4632]: I0313 10:27:21.790452 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89xkn\" (UniqueName: \"kubernetes.io/projected/6785ba8c-a47b-4851-945e-c07ccecb9911-kube-api-access-89xkn\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.790479 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-scripts\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.790509 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.790545 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6785ba8c-a47b-4851-945e-c07ccecb9911-logs\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.807050 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kswhw"] Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.807418 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" (UID: "c4e63898-c65b-42c6-9ec5-3089ae7a8d8c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.811167 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d856c56c-cmd2q"] Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.832489 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7dbf8b9ddc-6p5vh"] Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.888599 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-config-data" (OuterVolumeSpecName: "config-data") pod "c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" (UID: "c4e63898-c65b-42c6-9ec5-3089ae7a8d8c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.902621 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-config-data-custom\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.902784 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.902855 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.902899 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-config-data\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.902988 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6785ba8c-a47b-4851-945e-c07ccecb9911-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.903065 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89xkn\" (UniqueName: \"kubernetes.io/projected/6785ba8c-a47b-4851-945e-c07ccecb9911-kube-api-access-89xkn\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.903109 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-scripts\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.903184 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.903416 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6785ba8c-a47b-4851-945e-c07ccecb9911-logs\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0" Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.904859 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:21 crc 
Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.959091 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.936484 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6785ba8c-a47b-4851-945e-c07ccecb9911-logs\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0"
Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.905243 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6785ba8c-a47b-4851-945e-c07ccecb9911-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0"
Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.966799 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-config-data\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0"
Mar 13 10:27:21 crc kubenswrapper[4632]: I0313 10:27:21.970222 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-scripts\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0"
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.014419 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0"
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.004091 4632 scope.go:117] "RemoveContainer" containerID="a9d0bc7751d471197cb532c1a7e500502d2e1e74a150ed57680796972e393189"
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.027792 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-config-data-custom\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0"
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.028796 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89xkn\" (UniqueName: \"kubernetes.io/projected/6785ba8c-a47b-4851-945e-c07ccecb9911-kube-api-access-89xkn\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0"
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.031250 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0"
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.138578 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6785ba8c-a47b-4851-945e-c07ccecb9911-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6785ba8c-a47b-4851-945e-c07ccecb9911\") " pod="openstack/cinder-api-0"
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.252460 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58677e2e-9fc6-4e50-b342-e912afa8d969" path="/var/lib/kubelet/pods/58677e2e-9fc6-4e50-b342-e912afa8d969/volumes"
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.268496 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fshjb"]
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.305834 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-86d4-account-create-update-5c7rj"]
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.343028 4632 scope.go:117] "RemoveContainer" containerID="d67a8223c96cadeaa871fcaaaad472258eb768daca2821f6757940c48f3eafd6"
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.350146 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" event={"ID":"904f04cd-8110-4637-8bb4-67c4b83e189b","Type":"ContainerStarted","Data":"b305d4370882ddeb316b7136e1b6a31fb9b050f68adc94baa9487a0176e85bb7"}
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.381637 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-b547848c4-bn5vs"]
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.384032 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kswhw" event={"ID":"8e0fb1fc-c94a-44f0-a269-e7211c6fcfba","Type":"ContainerStarted","Data":"d2670f3af135aaf60a4a9f708985b74e740f5e7bc5471b2c38d01bfe606d1cfa"}
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.394248 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.396180 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d856c56c-cmd2q" event={"ID":"5d10747e-ba77-4986-9d4b-636fcbf823ab","Type":"ContainerStarted","Data":"c4b118bba3eb9eaa2f3d30625225786b624eac290ce33f3a700f116e125abbc7"}
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.410010 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fshjb" event={"ID":"09bd98be-9d10-4a53-8ef6-c4718b05c3f6","Type":"ContainerStarted","Data":"c4ef1230411d68688aa0ca250739c759e42cb6d89e416542cd5bc528c1419eff"}
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.411162 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7f9df5b5b5-q6dp2"]
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.435196 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" event={"ID":"03ca050c-63a7-4b37-91fe-fe5c322cca78","Type":"ContainerStarted","Data":"fdf316a11cb9aca1f8cd6fb110ee0a2edd19f7386a5741d37b2da1b481bd8466"}
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.467509 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2f8c-account-create-update-g4b8g"]
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.556106 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wgv42"]
Mar 13 10:27:22 crc kubenswrapper[4632]: I0313 10:27:22.595495 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f3f1-account-create-update-29g8s"]
Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:22.796599 4632 scope.go:117] "RemoveContainer" containerID="aa0a9edf7c00bb4d08cf1a3f2565b5016be14ce1312e093e1c44112d2d594f42"
Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.141335 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.182853 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.215472 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.233788 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.234164 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.250498 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.251339 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 13 10:27:23 crc kubenswrapper[4632]: E0313 10:27:23.263731 4632 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4e63898_c65b_42c6_9ec5_3089ae7a8d8c.slice/crio-df15286148314ab907f4a05031eefbb838636621ee25cc2f368e3d56ae19621b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4e63898_c65b_42c6_9ec5_3089ae7a8d8c.slice\": RecentStats: unable to find data in memory cache]"
Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.270035 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6db55c595b-pwgcg"
Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.342587 4632 scope.go:117] "RemoveContainer" containerID="c7001d72ce189e15496046472a90b656a4129de71ad96c6f49a1d6b92862a990"
Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.343865 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6db55c595b-pwgcg"
Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.347021 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-scripts\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0"
Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.347165 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/536490c7-c218-43ca-b601-84fdf0721b13-run-httpd\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0"
Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.347205 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/536490c7-c218-43ca-b601-84fdf0721b13-log-httpd\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0"
\"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.347401 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.347583 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crtzk\" (UniqueName: \"kubernetes.io/projected/536490c7-c218-43ca-b601-84fdf0721b13-kube-api-access-crtzk\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.347628 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.473451 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-scripts\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.473554 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/536490c7-c218-43ca-b601-84fdf0721b13-run-httpd\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.473586 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/536490c7-c218-43ca-b601-84fdf0721b13-log-httpd\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.473621 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-config-data\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.473746 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.473859 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crtzk\" (UniqueName: \"kubernetes.io/projected/536490c7-c218-43ca-b601-84fdf0721b13-kube-api-access-crtzk\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.473898 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.476071 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/536490c7-c218-43ca-b601-84fdf0721b13-run-httpd\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.480333 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/536490c7-c218-43ca-b601-84fdf0721b13-log-httpd\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.499826 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-scripts\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.512499 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7dd5c7bdcd-4969b"] Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.512814 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7dd5c7bdcd-4969b" podUID="5abe7bf3-d44d-4ee5-b568-2d497868f1e5" containerName="placement-log" containerID="cri-o://0b15584f3607b654abe16b00ac290d1bc5ee6f763bd08234d8697e7f5b5b20bb" gracePeriod=30 Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.513014 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7dd5c7bdcd-4969b" podUID="5abe7bf3-d44d-4ee5-b568-2d497868f1e5" containerName="placement-api" containerID="cri-o://2cfe7ebd70fe3427d7ef352e87ea88bca1736af36e0c260541ced9066c436503" gracePeriod=30 Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.532248 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.532549 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-config-data\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.533273 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.569477 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crtzk\" (UniqueName: \"kubernetes.io/projected/536490c7-c218-43ca-b601-84fdf0721b13-kube-api-access-crtzk\") pod \"ceilometer-0\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") " pod="openstack/ceilometer-0" Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.577451 4632 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2f8c-account-create-update-g4b8g" event={"ID":"8462be25-a577-476d-b54a-73790a8aa189","Type":"ContainerStarted","Data":"d5791dd2ca6757eefbd1007667971c0817daf4611f261b3e01b5a58673b3e353"} Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.578351 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.613223 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f3f1-account-create-update-29g8s" event={"ID":"234a900d-887b-448c-8336-010107726c1e","Type":"ContainerStarted","Data":"6eb1c0223253d25c6c8ddc47c33c06c64e6f5b3a0035afe9508e0276ff9d5aaf"} Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.642181 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b547848c4-bn5vs" event={"ID":"07914020-653d-4509-9f60-22726224c7c6","Type":"ContainerStarted","Data":"bb01c2352414aa3e5bdfcb4abaaae4c47a152945a1d74d64f5cf1228335558e9"} Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.650120 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7f9df5b5b5-q6dp2" event={"ID":"757b852e-068c-4885-99b8-af2e6f23e445","Type":"ContainerStarted","Data":"db0a88b20ef1358b7cfb558aebb52cdeba5b5f143eee06ddc98fa0acfb3ab01b"} Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.708314 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wgv42" event={"ID":"f0c32ed5-c3b0-45ea-99de-87c45cb1ba77","Type":"ContainerStarted","Data":"645f6056b5af662b78f01666e55121dd00fa1cf0b8aa9bf79ae1ebdf5a74d21d"} Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.711220 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-86d4-account-create-update-5c7rj" event={"ID":"bbaf5a79-1c34-4518-afb9-19703fe6c45b","Type":"ContainerStarted","Data":"32dece687daf55d82de95e6edff961fa23c0b30410ca7620ec1c15a1d72b8f64"} Mar 13 10:27:23 crc kubenswrapper[4632]: I0313 10:27:23.750262 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-kswhw" podStartSLOduration=5.750232897 podStartE2EDuration="5.750232897s" podCreationTimestamp="2026-03-13 10:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:27:23.737241921 +0000 UTC m=+1417.759772054" watchObservedRunningTime="2026-03-13 10:27:23.750232897 +0000 UTC m=+1417.772763030" Mar 13 10:27:24 crc kubenswrapper[4632]: I0313 10:27:24.025288 4632 scope.go:117] "RemoveContainer" containerID="bb00bf460a4849cc1a7c1bad8a739981e87c032a18a9222632d57abbccea8858" Mar 13 10:27:24 crc kubenswrapper[4632]: I0313 10:27:24.080962 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4e63898-c65b-42c6-9ec5-3089ae7a8d8c" path="/var/lib/kubelet/pods/c4e63898-c65b-42c6-9ec5-3089ae7a8d8c/volumes" Mar 13 10:27:24 crc kubenswrapper[4632]: I0313 10:27:24.314121 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:27:24 crc kubenswrapper[4632]: I0313 10:27:24.584064 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-c959f64fb-hx4t8"] Mar 13 10:27:24 crc kubenswrapper[4632]: I0313 10:27:24.608009 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-c959f64fb-hx4t8" Mar 13 10:27:24 crc kubenswrapper[4632]: I0313 10:27:24.620864 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-c959f64fb-hx4t8"] Mar 13 10:27:24 crc kubenswrapper[4632]: I0313 10:27:24.652580 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53145947-4584-4cef-b085-a0e0f550dde9-config-data-custom\") pod \"heat-engine-c959f64fb-hx4t8\" (UID: \"53145947-4584-4cef-b085-a0e0f550dde9\") " pod="openstack/heat-engine-c959f64fb-hx4t8" Mar 13 10:27:24 crc kubenswrapper[4632]: I0313 10:27:24.652817 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsdfl\" (UniqueName: \"kubernetes.io/projected/53145947-4584-4cef-b085-a0e0f550dde9-kube-api-access-bsdfl\") pod \"heat-engine-c959f64fb-hx4t8\" (UID: \"53145947-4584-4cef-b085-a0e0f550dde9\") " pod="openstack/heat-engine-c959f64fb-hx4t8" Mar 13 10:27:24 crc kubenswrapper[4632]: I0313 10:27:24.652885 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53145947-4584-4cef-b085-a0e0f550dde9-combined-ca-bundle\") pod \"heat-engine-c959f64fb-hx4t8\" (UID: \"53145947-4584-4cef-b085-a0e0f550dde9\") " pod="openstack/heat-engine-c959f64fb-hx4t8" Mar 13 10:27:24 crc kubenswrapper[4632]: I0313 10:27:24.653228 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53145947-4584-4cef-b085-a0e0f550dde9-config-data\") pod \"heat-engine-c959f64fb-hx4t8\" (UID: \"53145947-4584-4cef-b085-a0e0f550dde9\") " pod="openstack/heat-engine-c959f64fb-hx4t8" Mar 13 10:27:24 crc kubenswrapper[4632]: I0313 10:27:24.693383 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6f597ccc7c-zgmpr"] Mar 13 10:27:24 crc kubenswrapper[4632]: I0313 10:27:24.694890 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.769309 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f597ccc7c-zgmpr"] Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.769373 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-66b64f87f7-6z95j"] Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.774115 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53145947-4584-4cef-b085-a0e0f550dde9-config-data\") pod \"heat-engine-c959f64fb-hx4t8\" (UID: \"53145947-4584-4cef-b085-a0e0f550dde9\") " pod="openstack/heat-engine-c959f64fb-hx4t8" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.774184 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53145947-4584-4cef-b085-a0e0f550dde9-config-data-custom\") pod \"heat-engine-c959f64fb-hx4t8\" (UID: \"53145947-4584-4cef-b085-a0e0f550dde9\") " pod="openstack/heat-engine-c959f64fb-hx4t8" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.774210 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-config-data-custom\") pod \"heat-api-6f597ccc7c-zgmpr\" (UID: \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\") " pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.774281 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-combined-ca-bundle\") pod \"heat-api-6f597ccc7c-zgmpr\" (UID: \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\") " pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.774311 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsdfl\" (UniqueName: \"kubernetes.io/projected/53145947-4584-4cef-b085-a0e0f550dde9-kube-api-access-bsdfl\") pod \"heat-engine-c959f64fb-hx4t8\" (UID: \"53145947-4584-4cef-b085-a0e0f550dde9\") " pod="openstack/heat-engine-c959f64fb-hx4t8" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.774330 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53145947-4584-4cef-b085-a0e0f550dde9-combined-ca-bundle\") pod \"heat-engine-c959f64fb-hx4t8\" (UID: \"53145947-4584-4cef-b085-a0e0f550dde9\") " pod="openstack/heat-engine-c959f64fb-hx4t8" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.774375 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-config-data\") pod \"heat-api-6f597ccc7c-zgmpr\" (UID: \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\") " pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.774417 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vct9r\" (UniqueName: \"kubernetes.io/projected/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-kube-api-access-vct9r\") pod \"heat-api-6f597ccc7c-zgmpr\" (UID: \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\") " 
pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.788050 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53145947-4584-4cef-b085-a0e0f550dde9-config-data\") pod \"heat-engine-c959f64fb-hx4t8\" (UID: \"53145947-4584-4cef-b085-a0e0f550dde9\") " pod="openstack/heat-engine-c959f64fb-hx4t8" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.812818 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53145947-4584-4cef-b085-a0e0f550dde9-config-data-custom\") pod \"heat-engine-c959f64fb-hx4t8\" (UID: \"53145947-4584-4cef-b085-a0e0f550dde9\") " pod="openstack/heat-engine-c959f64fb-hx4t8" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.815139 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66b64f87f7-6z95j"] Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.815245 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.826909 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53145947-4584-4cef-b085-a0e0f550dde9-combined-ca-bundle\") pod \"heat-engine-c959f64fb-hx4t8\" (UID: \"53145947-4584-4cef-b085-a0e0f550dde9\") " pod="openstack/heat-engine-c959f64fb-hx4t8" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.835706 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsdfl\" (UniqueName: \"kubernetes.io/projected/53145947-4584-4cef-b085-a0e0f550dde9-kube-api-access-bsdfl\") pod \"heat-engine-c959f64fb-hx4t8\" (UID: \"53145947-4584-4cef-b085-a0e0f550dde9\") " pod="openstack/heat-engine-c959f64fb-hx4t8" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.841550 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2f8c-account-create-update-g4b8g" event={"ID":"8462be25-a577-476d-b54a-73790a8aa189","Type":"ContainerStarted","Data":"baa73e1779483e615256cb324392bd7ff43cccd507e79b501108b7a61007ed58"} Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.886172 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-config-data-custom\") pod \"heat-api-6f597ccc7c-zgmpr\" (UID: \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\") " pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.886247 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-combined-ca-bundle\") pod \"heat-api-6f597ccc7c-zgmpr\" (UID: \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\") " pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.886279 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-config-data\") pod \"heat-cfnapi-66b64f87f7-6z95j\" (UID: \"8bca285e-17f7-4505-8a25-21f5ee739584\") " pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.886312 4632 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-config-data-custom\") pod \"heat-cfnapi-66b64f87f7-6z95j\" (UID: \"8bca285e-17f7-4505-8a25-21f5ee739584\") " pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.886345 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-config-data\") pod \"heat-api-6f597ccc7c-zgmpr\" (UID: \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\") " pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.886372 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-combined-ca-bundle\") pod \"heat-cfnapi-66b64f87f7-6z95j\" (UID: \"8bca285e-17f7-4505-8a25-21f5ee739584\") " pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.886404 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vct9r\" (UniqueName: \"kubernetes.io/projected/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-kube-api-access-vct9r\") pod \"heat-api-6f597ccc7c-zgmpr\" (UID: \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\") " pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.886426 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5qrk\" (UniqueName: \"kubernetes.io/projected/8bca285e-17f7-4505-8a25-21f5ee739584-kube-api-access-w5qrk\") pod \"heat-cfnapi-66b64f87f7-6z95j\" (UID: \"8bca285e-17f7-4505-8a25-21f5ee739584\") " pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.887761 4632 generic.go:334] "Generic (PLEG): container finished" podID="09bd98be-9d10-4a53-8ef6-c4718b05c3f6" containerID="a73d11226d1411728675707324588174ab20222ac0a86a31f153adf5c08496b7" exitCode=0 Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.887814 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fshjb" event={"ID":"09bd98be-9d10-4a53-8ef6-c4718b05c3f6","Type":"ContainerDied","Data":"a73d11226d1411728675707324588174ab20222ac0a86a31f153adf5c08496b7"} Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.901719 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-config-data\") pod \"heat-api-6f597ccc7c-zgmpr\" (UID: \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\") " pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.903348 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-config-data-custom\") pod \"heat-api-6f597ccc7c-zgmpr\" (UID: \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\") " pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.903922 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-combined-ca-bundle\") pod \"heat-api-6f597ccc7c-zgmpr\" (UID: 
\"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\") " pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.909850 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-2f8c-account-create-update-g4b8g" podStartSLOduration=5.909822227 podStartE2EDuration="5.909822227s" podCreationTimestamp="2026-03-13 10:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:27:24.885005621 +0000 UTC m=+1418.907535744" watchObservedRunningTime="2026-03-13 10:27:24.909822227 +0000 UTC m=+1418.932352360" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.925244 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" event={"ID":"03ca050c-63a7-4b37-91fe-fe5c322cca78","Type":"ContainerStarted","Data":"94b786c65a2ca6a08eecb9fac67251053ce759dcb6a34953019d7b0f5ae51054"} Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.930971 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vct9r\" (UniqueName: \"kubernetes.io/projected/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-kube-api-access-vct9r\") pod \"heat-api-6f597ccc7c-zgmpr\" (UID: \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\") " pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.931899 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6785ba8c-a47b-4851-945e-c07ccecb9911","Type":"ContainerStarted","Data":"b0c8d0ab5767df5dcc1a0b06d386ffcd85be89291952021813dc42c8e426af90"} Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.980482 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-c959f64fb-hx4t8" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.988293 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-config-data\") pod \"heat-cfnapi-66b64f87f7-6z95j\" (UID: \"8bca285e-17f7-4505-8a25-21f5ee739584\") " pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.988354 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-config-data-custom\") pod \"heat-cfnapi-66b64f87f7-6z95j\" (UID: \"8bca285e-17f7-4505-8a25-21f5ee739584\") " pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.988392 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-combined-ca-bundle\") pod \"heat-cfnapi-66b64f87f7-6z95j\" (UID: \"8bca285e-17f7-4505-8a25-21f5ee739584\") " pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.988433 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5qrk\" (UniqueName: \"kubernetes.io/projected/8bca285e-17f7-4505-8a25-21f5ee739584-kube-api-access-w5qrk\") pod \"heat-cfnapi-66b64f87f7-6z95j\" (UID: \"8bca285e-17f7-4505-8a25-21f5ee739584\") " pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.996959 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-combined-ca-bundle\") pod \"heat-cfnapi-66b64f87f7-6z95j\" (UID: \"8bca285e-17f7-4505-8a25-21f5ee739584\") " pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.997637 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-config-data-custom\") pod \"heat-cfnapi-66b64f87f7-6z95j\" (UID: \"8bca285e-17f7-4505-8a25-21f5ee739584\") " pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:24.998751 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-config-data\") pod \"heat-cfnapi-66b64f87f7-6z95j\" (UID: \"8bca285e-17f7-4505-8a25-21f5ee739584\") " pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:25.017229 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5qrk\" (UniqueName: \"kubernetes.io/projected/8bca285e-17f7-4505-8a25-21f5ee739584-kube-api-access-w5qrk\") pod \"heat-cfnapi-66b64f87f7-6z95j\" (UID: \"8bca285e-17f7-4505-8a25-21f5ee739584\") " pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:25.053030 4632 generic.go:334] "Generic (PLEG): container finished" podID="904f04cd-8110-4637-8bb4-67c4b83e189b" containerID="10ef0805fc14af19dcea5ad4d4426bd1471fa5008be0ab704ad9b901662ea060" exitCode=0 Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:25.053110 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" event={"ID":"904f04cd-8110-4637-8bb4-67c4b83e189b","Type":"ContainerDied","Data":"10ef0805fc14af19dcea5ad4d4426bd1471fa5008be0ab704ad9b901662ea060"} Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:25.082237 4632 generic.go:334] "Generic (PLEG): container finished" podID="5abe7bf3-d44d-4ee5-b568-2d497868f1e5" containerID="0b15584f3607b654abe16b00ac290d1bc5ee6f763bd08234d8697e7f5b5b20bb" exitCode=143 Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:25.082339 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7dd5c7bdcd-4969b" event={"ID":"5abe7bf3-d44d-4ee5-b568-2d497868f1e5","Type":"ContainerDied","Data":"0b15584f3607b654abe16b00ac290d1bc5ee6f763bd08234d8697e7f5b5b20bb"} Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:25.097032 4632 generic.go:334] "Generic (PLEG): container finished" podID="8e0fb1fc-c94a-44f0-a269-e7211c6fcfba" containerID="f531fb1c9798e5386771f799aeaf5ec81a37e70faa215029f1e44845844c0b7a" exitCode=0 Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:25.097099 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kswhw" event={"ID":"8e0fb1fc-c94a-44f0-a269-e7211c6fcfba","Type":"ContainerDied","Data":"f531fb1c9798e5386771f799aeaf5ec81a37e70faa215029f1e44845844c0b7a"} Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:25.233312 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6588559b77-6f4bf" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:25.239443 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:25.255019 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:25.368050 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c86b4b888-l9574"] Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:25.368286 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c86b4b888-l9574" podUID="6d73a499-d334-4a7a-9783-640b98760672" containerName="neutron-api" containerID="cri-o://8c839401b1db62da93454588496b8ab534c9e6313aa3bcb0003cb9137b63b2ca" gracePeriod=30 Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:25.368838 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c86b4b888-l9574" podUID="6d73a499-d334-4a7a-9783-640b98760672" containerName="neutron-httpd" containerID="cri-o://e005b4f09b297f1fe00efd39c9534b7382173cd69b88dca5466ba89c0f3c0de7" gracePeriod=30 Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:25.396295 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:25.793874 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:27:25 crc kubenswrapper[4632]: I0313 10:27:25.858160 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-689764498d-rg7vt" podUID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.120245 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7f9df5b5b5-q6dp2" event={"ID":"757b852e-068c-4885-99b8-af2e6f23e445","Type":"ContainerStarted","Data":"c974019fb18638d059aa9b080871c3232dbdb322c997ebb8d28de7a80fef50a0"} Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.120372 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7f9df5b5b5-q6dp2" Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.122960 4632 generic.go:334] "Generic (PLEG): container finished" podID="6d73a499-d334-4a7a-9783-640b98760672" containerID="e005b4f09b297f1fe00efd39c9534b7382173cd69b88dca5466ba89c0f3c0de7" exitCode=0 Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.122981 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c86b4b888-l9574" event={"ID":"6d73a499-d334-4a7a-9783-640b98760672","Type":"ContainerDied","Data":"e005b4f09b297f1fe00efd39c9534b7382173cd69b88dca5466ba89c0f3c0de7"} Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.126908 4632 generic.go:334] "Generic (PLEG): container finished" podID="f0c32ed5-c3b0-45ea-99de-87c45cb1ba77" containerID="62c66b71b16f2cd37ff478080f4c30eed65f51b807f687725f8ec89f5dd9d0dc" exitCode=0 Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.127004 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wgv42" event={"ID":"f0c32ed5-c3b0-45ea-99de-87c45cb1ba77","Type":"ContainerDied","Data":"62c66b71b16f2cd37ff478080f4c30eed65f51b807f687725f8ec89f5dd9d0dc"} Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.129917 
Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.129917 4632 generic.go:334] "Generic (PLEG): container finished" podID="8462be25-a577-476d-b54a-73790a8aa189" containerID="baa73e1779483e615256cb324392bd7ff43cccd507e79b501108b7a61007ed58" exitCode=0
Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.129994 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2f8c-account-create-update-g4b8g" event={"ID":"8462be25-a577-476d-b54a-73790a8aa189","Type":"ContainerDied","Data":"baa73e1779483e615256cb324392bd7ff43cccd507e79b501108b7a61007ed58"}
Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.134322 4632 generic.go:334] "Generic (PLEG): container finished" podID="234a900d-887b-448c-8336-010107726c1e" containerID="9cee7abc6c76d73494106b5582f85b871d225f179b8f40700ad2248a8daa7c60" exitCode=0
Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.134407 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f3f1-account-create-update-29g8s" event={"ID":"234a900d-887b-448c-8336-010107726c1e","Type":"ContainerDied","Data":"9cee7abc6c76d73494106b5582f85b871d225f179b8f40700ad2248a8daa7c60"}
Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.142082 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-86d4-account-create-update-5c7rj" event={"ID":"bbaf5a79-1c34-4518-afb9-19703fe6c45b","Type":"ContainerStarted","Data":"d9f2ab5e1a5be1d4939b9fe05ba3a5cdbc725953ea1e78a027cf1f61d4444ba0"}
Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.142136 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7f9df5b5b5-q6dp2" podStartSLOduration=13.142116976 podStartE2EDuration="13.142116976s" podCreationTimestamp="2026-03-13 10:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:27:26.137892606 +0000 UTC m=+1420.160422759" watchObservedRunningTime="2026-03-13 10:27:26.142116976 +0000 UTC m=+1420.164647099"
Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.149248 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" event={"ID":"03ca050c-63a7-4b37-91fe-fe5c322cca78","Type":"ContainerStarted","Data":"d8e95d6e44c7d021f2ace88c9a6c134873fb843b4515ef7629a6970c5d1bd8f9"}
Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.149610 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh"
Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.149738 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh"
Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.155779 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6785ba8c-a47b-4851-945e-c07ccecb9911","Type":"ContainerStarted","Data":"33649b65fc233ba28987b27f24de1fa0c00851971451968b6f04cf9bbe1f240d"}
m=+1420.259163804" Mar 13 10:27:26 crc kubenswrapper[4632]: I0313 10:27:26.277978 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" podStartSLOduration=16.277934496 podStartE2EDuration="16.277934496s" podCreationTimestamp="2026-03-13 10:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:27:26.253115419 +0000 UTC m=+1420.275645552" watchObservedRunningTime="2026-03-13 10:27:26.277934496 +0000 UTC m=+1420.300464649" Mar 13 10:27:27 crc kubenswrapper[4632]: W0313 10:27:27.168031 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536490c7_c218_43ca_b601_84fdf0721b13.slice/crio-f402196243381c3faf3165d4fe49b7c43a1af16813bae58fca9b53eb4badf807 WatchSource:0}: Error finding container f402196243381c3faf3165d4fe49b7c43a1af16813bae58fca9b53eb4badf807: Status 404 returned error can't find the container with id f402196243381c3faf3165d4fe49b7c43a1af16813bae58fca9b53eb4badf807 Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.196609 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fshjb" event={"ID":"09bd98be-9d10-4a53-8ef6-c4718b05c3f6","Type":"ContainerDied","Data":"c4ef1230411d68688aa0ca250739c759e42cb6d89e416542cd5bc528c1419eff"} Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.196660 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4ef1230411d68688aa0ca250739c759e42cb6d89e416542cd5bc528c1419eff" Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.222003 4632 generic.go:334] "Generic (PLEG): container finished" podID="bbaf5a79-1c34-4518-afb9-19703fe6c45b" containerID="d9f2ab5e1a5be1d4939b9fe05ba3a5cdbc725953ea1e78a027cf1f61d4444ba0" exitCode=0 Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.222082 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-86d4-account-create-update-5c7rj" event={"ID":"bbaf5a79-1c34-4518-afb9-19703fe6c45b","Type":"ContainerDied","Data":"d9f2ab5e1a5be1d4939b9fe05ba3a5cdbc725953ea1e78a027cf1f61d4444ba0"} Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.245549 4632 generic.go:334] "Generic (PLEG): container finished" podID="5abe7bf3-d44d-4ee5-b568-2d497868f1e5" containerID="2cfe7ebd70fe3427d7ef352e87ea88bca1736af36e0c260541ced9066c436503" exitCode=0 Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.245877 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7dd5c7bdcd-4969b" event={"ID":"5abe7bf3-d44d-4ee5-b568-2d497868f1e5","Type":"ContainerDied","Data":"2cfe7ebd70fe3427d7ef352e87ea88bca1736af36e0c260541ced9066c436503"} Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.258122 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kswhw" event={"ID":"8e0fb1fc-c94a-44f0-a269-e7211c6fcfba","Type":"ContainerDied","Data":"d2670f3af135aaf60a4a9f708985b74e740f5e7bc5471b2c38d01bfe606d1cfa"} Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.258168 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2670f3af135aaf60a4a9f708985b74e740f5e7bc5471b2c38d01bfe606d1cfa" Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.390806 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-fshjb" Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.400889 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kswhw" Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.482609 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdg5s\" (UniqueName: \"kubernetes.io/projected/8e0fb1fc-c94a-44f0-a269-e7211c6fcfba-kube-api-access-jdg5s\") pod \"8e0fb1fc-c94a-44f0-a269-e7211c6fcfba\" (UID: \"8e0fb1fc-c94a-44f0-a269-e7211c6fcfba\") " Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.482812 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09bd98be-9d10-4a53-8ef6-c4718b05c3f6-operator-scripts\") pod \"09bd98be-9d10-4a53-8ef6-c4718b05c3f6\" (UID: \"09bd98be-9d10-4a53-8ef6-c4718b05c3f6\") " Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.482956 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e0fb1fc-c94a-44f0-a269-e7211c6fcfba-operator-scripts\") pod \"8e0fb1fc-c94a-44f0-a269-e7211c6fcfba\" (UID: \"8e0fb1fc-c94a-44f0-a269-e7211c6fcfba\") " Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.482989 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl442\" (UniqueName: \"kubernetes.io/projected/09bd98be-9d10-4a53-8ef6-c4718b05c3f6-kube-api-access-cl442\") pod \"09bd98be-9d10-4a53-8ef6-c4718b05c3f6\" (UID: \"09bd98be-9d10-4a53-8ef6-c4718b05c3f6\") " Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.483603 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09bd98be-9d10-4a53-8ef6-c4718b05c3f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09bd98be-9d10-4a53-8ef6-c4718b05c3f6" (UID: "09bd98be-9d10-4a53-8ef6-c4718b05c3f6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.485748 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e0fb1fc-c94a-44f0-a269-e7211c6fcfba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e0fb1fc-c94a-44f0-a269-e7211c6fcfba" (UID: "8e0fb1fc-c94a-44f0-a269-e7211c6fcfba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.507131 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e0fb1fc-c94a-44f0-a269-e7211c6fcfba-kube-api-access-jdg5s" (OuterVolumeSpecName: "kube-api-access-jdg5s") pod "8e0fb1fc-c94a-44f0-a269-e7211c6fcfba" (UID: "8e0fb1fc-c94a-44f0-a269-e7211c6fcfba"). InnerVolumeSpecName "kube-api-access-jdg5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.508727 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09bd98be-9d10-4a53-8ef6-c4718b05c3f6-kube-api-access-cl442" (OuterVolumeSpecName: "kube-api-access-cl442") pod "09bd98be-9d10-4a53-8ef6-c4718b05c3f6" (UID: "09bd98be-9d10-4a53-8ef6-c4718b05c3f6"). InnerVolumeSpecName "kube-api-access-cl442". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.585648 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl442\" (UniqueName: \"kubernetes.io/projected/09bd98be-9d10-4a53-8ef6-c4718b05c3f6-kube-api-access-cl442\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.585690 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdg5s\" (UniqueName: \"kubernetes.io/projected/8e0fb1fc-c94a-44f0-a269-e7211c6fcfba-kube-api-access-jdg5s\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.585700 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09bd98be-9d10-4a53-8ef6-c4718b05c3f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:27 crc kubenswrapper[4632]: I0313 10:27:27.585709 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e0fb1fc-c94a-44f0-a269-e7211c6fcfba-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:28 crc kubenswrapper[4632]: I0313 10:27:28.277074 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"536490c7-c218-43ca-b601-84fdf0721b13","Type":"ContainerStarted","Data":"f402196243381c3faf3165d4fe49b7c43a1af16813bae58fca9b53eb4badf807"} Mar 13 10:27:28 crc kubenswrapper[4632]: I0313 10:27:28.277425 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kswhw" Mar 13 10:27:28 crc kubenswrapper[4632]: I0313 10:27:28.278252 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fshjb" Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.686643 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f3f1-account-create-update-29g8s" Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.694134 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.736314 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wgv42" Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.803395 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2f8c-account-create-update-g4b8g" Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.830739 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-logs\") pod \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.830827 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-internal-tls-certs\") pod \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.830893 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-config-data\") pod \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.830918 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk8t2\" (UniqueName: \"kubernetes.io/projected/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-kube-api-access-xk8t2\") pod \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.830969 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-combined-ca-bundle\") pod \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.831003 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/234a900d-887b-448c-8336-010107726c1e-operator-scripts\") pod \"234a900d-887b-448c-8336-010107726c1e\" (UID: \"234a900d-887b-448c-8336-010107726c1e\") " Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.831096 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-scripts\") pod \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.831291 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-public-tls-certs\") pod \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\" (UID: \"5abe7bf3-d44d-4ee5-b568-2d497868f1e5\") " Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.831331 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qgng\" (UniqueName: \"kubernetes.io/projected/234a900d-887b-448c-8336-010107726c1e-kube-api-access-9qgng\") pod \"234a900d-887b-448c-8336-010107726c1e\" (UID: \"234a900d-887b-448c-8336-010107726c1e\") " Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.838141 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-logs" (OuterVolumeSpecName: "logs") pod "5abe7bf3-d44d-4ee5-b568-2d497868f1e5" (UID: 
"5abe7bf3-d44d-4ee5-b568-2d497868f1e5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.838285 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234a900d-887b-448c-8336-010107726c1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "234a900d-887b-448c-8336-010107726c1e" (UID: "234a900d-887b-448c-8336-010107726c1e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.880868 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-kube-api-access-xk8t2" (OuterVolumeSpecName: "kube-api-access-xk8t2") pod "5abe7bf3-d44d-4ee5-b568-2d497868f1e5" (UID: "5abe7bf3-d44d-4ee5-b568-2d497868f1e5"). InnerVolumeSpecName "kube-api-access-xk8t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.881126 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/234a900d-887b-448c-8336-010107726c1e-kube-api-access-9qgng" (OuterVolumeSpecName: "kube-api-access-9qgng") pod "234a900d-887b-448c-8336-010107726c1e" (UID: "234a900d-887b-448c-8336-010107726c1e"). InnerVolumeSpecName "kube-api-access-9qgng". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.893273 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-86d4-account-create-update-5c7rj" Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.902251 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-scripts" (OuterVolumeSpecName: "scripts") pod "5abe7bf3-d44d-4ee5-b568-2d497868f1e5" (UID: "5abe7bf3-d44d-4ee5-b568-2d497868f1e5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.996477 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fx7d\" (UniqueName: \"kubernetes.io/projected/bbaf5a79-1c34-4518-afb9-19703fe6c45b-kube-api-access-5fx7d\") pod \"bbaf5a79-1c34-4518-afb9-19703fe6c45b\" (UID: \"bbaf5a79-1c34-4518-afb9-19703fe6c45b\") " Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.996535 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrgfq\" (UniqueName: \"kubernetes.io/projected/8462be25-a577-476d-b54a-73790a8aa189-kube-api-access-rrgfq\") pod \"8462be25-a577-476d-b54a-73790a8aa189\" (UID: \"8462be25-a577-476d-b54a-73790a8aa189\") " Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.996775 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0c32ed5-c3b0-45ea-99de-87c45cb1ba77-operator-scripts\") pod \"f0c32ed5-c3b0-45ea-99de-87c45cb1ba77\" (UID: \"f0c32ed5-c3b0-45ea-99de-87c45cb1ba77\") " Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.996822 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbaf5a79-1c34-4518-afb9-19703fe6c45b-operator-scripts\") pod \"bbaf5a79-1c34-4518-afb9-19703fe6c45b\" (UID: \"bbaf5a79-1c34-4518-afb9-19703fe6c45b\") " Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.996881 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8462be25-a577-476d-b54a-73790a8aa189-operator-scripts\") pod \"8462be25-a577-476d-b54a-73790a8aa189\" (UID: \"8462be25-a577-476d-b54a-73790a8aa189\") " Mar 13 10:27:29 crc kubenswrapper[4632]: I0313 10:27:29.997011 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsxhh\" (UniqueName: \"kubernetes.io/projected/f0c32ed5-c3b0-45ea-99de-87c45cb1ba77-kube-api-access-nsxhh\") pod \"f0c32ed5-c3b0-45ea-99de-87c45cb1ba77\" (UID: \"f0c32ed5-c3b0-45ea-99de-87c45cb1ba77\") " Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:29.997628 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbaf5a79-1c34-4518-afb9-19703fe6c45b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bbaf5a79-1c34-4518-afb9-19703fe6c45b" (UID: "bbaf5a79-1c34-4518-afb9-19703fe6c45b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:29.997955 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0c32ed5-c3b0-45ea-99de-87c45cb1ba77-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0c32ed5-c3b0-45ea-99de-87c45cb1ba77" (UID: "f0c32ed5-c3b0-45ea-99de-87c45cb1ba77"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:29.999009 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8462be25-a577-476d-b54a-73790a8aa189-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8462be25-a577-476d-b54a-73790a8aa189" (UID: "8462be25-a577-476d-b54a-73790a8aa189"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.010462 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qgng\" (UniqueName: \"kubernetes.io/projected/234a900d-887b-448c-8336-010107726c1e-kube-api-access-9qgng\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.010500 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0c32ed5-c3b0-45ea-99de-87c45cb1ba77-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.010512 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbaf5a79-1c34-4518-afb9-19703fe6c45b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.010538 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-logs\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.010553 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8462be25-a577-476d-b54a-73790a8aa189-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.010566 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk8t2\" (UniqueName: \"kubernetes.io/projected/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-kube-api-access-xk8t2\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.010576 4632 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/234a900d-887b-448c-8336-010107726c1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.010585 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.073033 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbaf5a79-1c34-4518-afb9-19703fe6c45b-kube-api-access-5fx7d" (OuterVolumeSpecName: "kube-api-access-5fx7d") pod "bbaf5a79-1c34-4518-afb9-19703fe6c45b" (UID: "bbaf5a79-1c34-4518-afb9-19703fe6c45b"). InnerVolumeSpecName "kube-api-access-5fx7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.077658 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0c32ed5-c3b0-45ea-99de-87c45cb1ba77-kube-api-access-nsxhh" (OuterVolumeSpecName: "kube-api-access-nsxhh") pod "f0c32ed5-c3b0-45ea-99de-87c45cb1ba77" (UID: "f0c32ed5-c3b0-45ea-99de-87c45cb1ba77"). InnerVolumeSpecName "kube-api-access-nsxhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.085702 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8462be25-a577-476d-b54a-73790a8aa189-kube-api-access-rrgfq" (OuterVolumeSpecName: "kube-api-access-rrgfq") pod "8462be25-a577-476d-b54a-73790a8aa189" (UID: "8462be25-a577-476d-b54a-73790a8aa189"). InnerVolumeSpecName "kube-api-access-rrgfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.126424 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fx7d\" (UniqueName: \"kubernetes.io/projected/bbaf5a79-1c34-4518-afb9-19703fe6c45b-kube-api-access-5fx7d\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.126453 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrgfq\" (UniqueName: \"kubernetes.io/projected/8462be25-a577-476d-b54a-73790a8aa189-kube-api-access-rrgfq\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.126464 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsxhh\" (UniqueName: \"kubernetes.io/projected/f0c32ed5-c3b0-45ea-99de-87c45cb1ba77-kube-api-access-nsxhh\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.328403 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66b64f87f7-6z95j"] Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.352118 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-86d4-account-create-update-5c7rj" event={"ID":"bbaf5a79-1c34-4518-afb9-19703fe6c45b","Type":"ContainerDied","Data":"32dece687daf55d82de95e6edff961fa23c0b30410ca7620ec1c15a1d72b8f64"} Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.352160 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32dece687daf55d82de95e6edff961fa23c0b30410ca7620ec1c15a1d72b8f64" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.352228 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-86d4-account-create-update-5c7rj" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.357575 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7dd5c7bdcd-4969b" event={"ID":"5abe7bf3-d44d-4ee5-b568-2d497868f1e5","Type":"ContainerDied","Data":"604b160eb4cd534ac8def868fbcdab1d748e8bc2952c85fe7198dc4a2b05d7f7"} Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.357627 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7dd5c7bdcd-4969b" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.357633 4632 scope.go:117] "RemoveContainer" containerID="2cfe7ebd70fe3427d7ef352e87ea88bca1736af36e0c260541ced9066c436503" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.420246 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-wgv42" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.420650 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wgv42" event={"ID":"f0c32ed5-c3b0-45ea-99de-87c45cb1ba77","Type":"ContainerDied","Data":"645f6056b5af662b78f01666e55121dd00fa1cf0b8aa9bf79ae1ebdf5a74d21d"} Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.420701 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="645f6056b5af662b78f01666e55121dd00fa1cf0b8aa9bf79ae1ebdf5a74d21d" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.461389 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2f8c-account-create-update-g4b8g" event={"ID":"8462be25-a577-476d-b54a-73790a8aa189","Type":"ContainerDied","Data":"d5791dd2ca6757eefbd1007667971c0817daf4611f261b3e01b5a58673b3e353"} Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.461435 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5791dd2ca6757eefbd1007667971c0817daf4611f261b3e01b5a58673b3e353" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.461562 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2f8c-account-create-update-g4b8g" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.498068 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f3f1-account-create-update-29g8s" event={"ID":"234a900d-887b-448c-8336-010107726c1e","Type":"ContainerDied","Data":"6eb1c0223253d25c6c8ddc47c33c06c64e6f5b3a0035afe9508e0276ff9d5aaf"} Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.498143 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eb1c0223253d25c6c8ddc47c33c06c64e6f5b3a0035afe9508e0276ff9d5aaf" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.498237 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-f3f1-account-create-update-29g8s" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.558577 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-c959f64fb-hx4t8"] Mar 13 10:27:30 crc kubenswrapper[4632]: W0313 10:27:30.599101 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53145947_4584_4cef_b085_a0e0f550dde9.slice/crio-4b36cced26c065af3bdca097c2ba77c4f4683335facc1febb5ca7b6e7e41d41b WatchSource:0}: Error finding container 4b36cced26c065af3bdca097c2ba77c4f4683335facc1febb5ca7b6e7e41d41b: Status 404 returned error can't find the container with id 4b36cced26c065af3bdca097c2ba77c4f4683335facc1febb5ca7b6e7e41d41b Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.644008 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f597ccc7c-zgmpr"] Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.650549 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.670906 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.704723 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5abe7bf3-d44d-4ee5-b568-2d497868f1e5" (UID: "5abe7bf3-d44d-4ee5-b568-2d497868f1e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.799143 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.908994 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5abe7bf3-d44d-4ee5-b568-2d497868f1e5" (UID: "5abe7bf3-d44d-4ee5-b568-2d497868f1e5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.918775 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-config-data" (OuterVolumeSpecName: "config-data") pod "5abe7bf3-d44d-4ee5-b568-2d497868f1e5" (UID: "5abe7bf3-d44d-4ee5-b568-2d497868f1e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.942404 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5abe7bf3-d44d-4ee5-b568-2d497868f1e5" (UID: "5abe7bf3-d44d-4ee5-b568-2d497868f1e5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:30 crc kubenswrapper[4632]: I0313 10:27:30.968071 4632 scope.go:117] "RemoveContainer" containerID="0b15584f3607b654abe16b00ac290d1bc5ee6f763bd08234d8697e7f5b5b20bb" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.006868 4632 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.006916 4632 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.006933 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5abe7bf3-d44d-4ee5-b568-2d497868f1e5-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.060596 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7dd5c7bdcd-4969b"] Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.090286 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7dd5c7bdcd-4969b"] Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.118329 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-b547848c4-bn5vs"] Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.157340 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-d856c56c-cmd2q"] Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.177620 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-86bb565f45-ntq5k"] Mar 13 10:27:31 crc kubenswrapper[4632]: E0313 10:27:31.182189 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5abe7bf3-d44d-4ee5-b568-2d497868f1e5" containerName="placement-log" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.182237 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="5abe7bf3-d44d-4ee5-b568-2d497868f1e5" containerName="placement-log" Mar 13 10:27:31 crc kubenswrapper[4632]: E0313 10:27:31.182267 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5abe7bf3-d44d-4ee5-b568-2d497868f1e5" containerName="placement-api" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.182289 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="5abe7bf3-d44d-4ee5-b568-2d497868f1e5" containerName="placement-api" Mar 13 10:27:31 crc kubenswrapper[4632]: E0313 10:27:31.182303 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="234a900d-887b-448c-8336-010107726c1e" containerName="mariadb-account-create-update" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.182310 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="234a900d-887b-448c-8336-010107726c1e" containerName="mariadb-account-create-update" Mar 13 10:27:31 crc kubenswrapper[4632]: E0313 10:27:31.182326 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbaf5a79-1c34-4518-afb9-19703fe6c45b" containerName="mariadb-account-create-update" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.182332 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbaf5a79-1c34-4518-afb9-19703fe6c45b" containerName="mariadb-account-create-update" Mar 13 10:27:31 crc kubenswrapper[4632]: E0313 10:27:31.182347 4632 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09bd98be-9d10-4a53-8ef6-c4718b05c3f6" containerName="mariadb-database-create" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.182369 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="09bd98be-9d10-4a53-8ef6-c4718b05c3f6" containerName="mariadb-database-create" Mar 13 10:27:31 crc kubenswrapper[4632]: E0313 10:27:31.182382 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c32ed5-c3b0-45ea-99de-87c45cb1ba77" containerName="mariadb-database-create" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.182388 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c32ed5-c3b0-45ea-99de-87c45cb1ba77" containerName="mariadb-database-create" Mar 13 10:27:31 crc kubenswrapper[4632]: E0313 10:27:31.182398 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e0fb1fc-c94a-44f0-a269-e7211c6fcfba" containerName="mariadb-database-create" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.182404 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e0fb1fc-c94a-44f0-a269-e7211c6fcfba" containerName="mariadb-database-create" Mar 13 10:27:31 crc kubenswrapper[4632]: E0313 10:27:31.182411 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8462be25-a577-476d-b54a-73790a8aa189" containerName="mariadb-account-create-update" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.182418 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8462be25-a577-476d-b54a-73790a8aa189" containerName="mariadb-account-create-update" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.182753 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="5abe7bf3-d44d-4ee5-b568-2d497868f1e5" containerName="placement-log" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.182769 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8462be25-a577-476d-b54a-73790a8aa189" containerName="mariadb-account-create-update" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.182780 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0c32ed5-c3b0-45ea-99de-87c45cb1ba77" containerName="mariadb-database-create" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.182786 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbaf5a79-1c34-4518-afb9-19703fe6c45b" containerName="mariadb-account-create-update" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.182796 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="09bd98be-9d10-4a53-8ef6-c4718b05c3f6" containerName="mariadb-database-create" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.182807 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="5abe7bf3-d44d-4ee5-b568-2d497868f1e5" containerName="placement-api" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.182832 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="234a900d-887b-448c-8336-010107726c1e" containerName="mariadb-account-create-update" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.182843 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e0fb1fc-c94a-44f0-a269-e7211c6fcfba" containerName="mariadb-database-create" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.183725 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.193300 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.193493 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.206232 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7fcc47f8dc-lhqhx"] Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.207454 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.209529 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.209964 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.249006 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-86bb565f45-ntq5k"] Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.276341 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7fcc47f8dc-lhqhx"] Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.333753 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00b138c6-9e7c-4782-8454-1a4c035b1fbc-config-data-custom\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.333808 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2e3cc7-c5cb-449a-a19c-2d671f08c656-combined-ca-bundle\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.333838 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b138c6-9e7c-4782-8454-1a4c035b1fbc-config-data\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.333865 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de2e3cc7-c5cb-449a-a19c-2d671f08c656-config-data-custom\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.333892 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2e3cc7-c5cb-449a-a19c-2d671f08c656-config-data\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.333961 4632 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00b138c6-9e7c-4782-8454-1a4c035b1fbc-public-tls-certs\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.333986 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de2e3cc7-c5cb-449a-a19c-2d671f08c656-public-tls-certs\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.334011 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j4c4\" (UniqueName: \"kubernetes.io/projected/de2e3cc7-c5cb-449a-a19c-2d671f08c656-kube-api-access-9j4c4\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.334032 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd7g7\" (UniqueName: \"kubernetes.io/projected/00b138c6-9e7c-4782-8454-1a4c035b1fbc-kube-api-access-nd7g7\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.334054 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00b138c6-9e7c-4782-8454-1a4c035b1fbc-internal-tls-certs\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.334070 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de2e3cc7-c5cb-449a-a19c-2d671f08c656-internal-tls-certs\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.334087 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b138c6-9e7c-4782-8454-1a4c035b1fbc-combined-ca-bundle\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.436229 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b138c6-9e7c-4782-8454-1a4c035b1fbc-config-data\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.436295 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de2e3cc7-c5cb-449a-a19c-2d671f08c656-config-data-custom\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 
13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.436358 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2e3cc7-c5cb-449a-a19c-2d671f08c656-config-data\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.436408 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00b138c6-9e7c-4782-8454-1a4c035b1fbc-public-tls-certs\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.436445 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de2e3cc7-c5cb-449a-a19c-2d671f08c656-public-tls-certs\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.436470 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j4c4\" (UniqueName: \"kubernetes.io/projected/de2e3cc7-c5cb-449a-a19c-2d671f08c656-kube-api-access-9j4c4\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.436489 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd7g7\" (UniqueName: \"kubernetes.io/projected/00b138c6-9e7c-4782-8454-1a4c035b1fbc-kube-api-access-nd7g7\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.436530 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00b138c6-9e7c-4782-8454-1a4c035b1fbc-internal-tls-certs\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.436548 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de2e3cc7-c5cb-449a-a19c-2d671f08c656-internal-tls-certs\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.436564 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b138c6-9e7c-4782-8454-1a4c035b1fbc-combined-ca-bundle\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.436640 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00b138c6-9e7c-4782-8454-1a4c035b1fbc-config-data-custom\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 
10:27:31.436684 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2e3cc7-c5cb-449a-a19c-2d671f08c656-combined-ca-bundle\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.459095 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2e3cc7-c5cb-449a-a19c-2d671f08c656-combined-ca-bundle\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.462418 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j4c4\" (UniqueName: \"kubernetes.io/projected/de2e3cc7-c5cb-449a-a19c-2d671f08c656-kube-api-access-9j4c4\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.464104 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2e3cc7-c5cb-449a-a19c-2d671f08c656-config-data\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.465713 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b138c6-9e7c-4782-8454-1a4c035b1fbc-config-data\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.470145 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00b138c6-9e7c-4782-8454-1a4c035b1fbc-config-data-custom\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.470665 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de2e3cc7-c5cb-449a-a19c-2d671f08c656-config-data-custom\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.471288 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00b138c6-9e7c-4782-8454-1a4c035b1fbc-internal-tls-certs\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.473637 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de2e3cc7-c5cb-449a-a19c-2d671f08c656-internal-tls-certs\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.474467 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b138c6-9e7c-4782-8454-1a4c035b1fbc-combined-ca-bundle\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.477096 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de2e3cc7-c5cb-449a-a19c-2d671f08c656-public-tls-certs\") pod \"heat-cfnapi-86bb565f45-ntq5k\" (UID: \"de2e3cc7-c5cb-449a-a19c-2d671f08c656\") " pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.481681 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd7g7\" (UniqueName: \"kubernetes.io/projected/00b138c6-9e7c-4782-8454-1a4c035b1fbc-kube-api-access-nd7g7\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.481876 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00b138c6-9e7c-4782-8454-1a4c035b1fbc-public-tls-certs\") pod \"heat-api-7fcc47f8dc-lhqhx\" (UID: \"00b138c6-9e7c-4782-8454-1a4c035b1fbc\") " pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.527989 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-c959f64fb-hx4t8" event={"ID":"53145947-4584-4cef-b085-a0e0f550dde9","Type":"ContainerStarted","Data":"4b36cced26c065af3bdca097c2ba77c4f4683335facc1febb5ca7b6e7e41d41b"} Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.530561 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d856c56c-cmd2q" event={"ID":"5d10747e-ba77-4986-9d4b-636fcbf823ab","Type":"ContainerStarted","Data":"35f6f30aa35f7a79445d6acba6d7d99ce02bc8679e546b9d8ecccf0df51e3ce6"} Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.530736 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-d856c56c-cmd2q" podUID="5d10747e-ba77-4986-9d4b-636fcbf823ab" containerName="heat-cfnapi" containerID="cri-o://35f6f30aa35f7a79445d6acba6d7d99ce02bc8679e546b9d8ecccf0df51e3ce6" gracePeriod=60 Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.531115 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-d856c56c-cmd2q" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.537454 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b547848c4-bn5vs" event={"ID":"07914020-653d-4509-9f60-22726224c7c6","Type":"ContainerStarted","Data":"b0d25fd2c9604e3f96cafaee37ffd660ff8a4f27903a0e0bf9e82ba66554c126"} Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.537625 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-b547848c4-bn5vs" podUID="07914020-653d-4509-9f60-22726224c7c6" containerName="heat-api" containerID="cri-o://b0d25fd2c9604e3f96cafaee37ffd660ff8a4f27903a0e0bf9e82ba66554c126" gracePeriod=60 Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.537702 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-b547848c4-bn5vs" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.541645 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66b64f87f7-6z95j" 
event={"ID":"8bca285e-17f7-4505-8a25-21f5ee739584","Type":"ContainerStarted","Data":"1293c4aa6c50a69d6aecb56e9f4df43ee392e8db0df66dd79160ca393da72310"} Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.541721 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66b64f87f7-6z95j" event={"ID":"8bca285e-17f7-4505-8a25-21f5ee739584","Type":"ContainerStarted","Data":"17501791bdc7f7056cbbb54c8ba1821e2768aef3ea1d8c030f27232cf3c7d16a"} Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.542821 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.545735 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f597ccc7c-zgmpr" event={"ID":"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb","Type":"ContainerStarted","Data":"307b8fb6df45672fd631f8323c011769dd77981dd1c4da51a966ca64e7bdf956"} Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.550045 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" event={"ID":"904f04cd-8110-4637-8bb4-67c4b83e189b","Type":"ContainerStarted","Data":"4cc9fd73a35e44ae17915d74f83df931e877bf9d4b7384d1b90a6239d1a72628"} Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.550977 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.554541 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"536490c7-c218-43ca-b601-84fdf0721b13","Type":"ContainerStarted","Data":"0e185e3360ac7d555a53f4a6a5858f9b0a423c695033ffc9d71eb6f71e6ca6e1"} Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.563778 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-d856c56c-cmd2q" podStartSLOduration=11.095062347 podStartE2EDuration="18.563756691s" podCreationTimestamp="2026-03-13 10:27:13 +0000 UTC" firstStartedPulling="2026-03-13 10:27:22.002496455 +0000 UTC m=+1416.025026588" lastFinishedPulling="2026-03-13 10:27:29.471190799 +0000 UTC m=+1423.493720932" observedRunningTime="2026-03-13 10:27:31.551318988 +0000 UTC m=+1425.573849121" watchObservedRunningTime="2026-03-13 10:27:31.563756691 +0000 UTC m=+1425.586286824" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.594357 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" podStartSLOduration=18.594337035 podStartE2EDuration="18.594337035s" podCreationTimestamp="2026-03-13 10:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:27:31.587369756 +0000 UTC m=+1425.609899889" watchObservedRunningTime="2026-03-13 10:27:31.594337035 +0000 UTC m=+1425.616867168" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.611237 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-b547848c4-bn5vs" podStartSLOduration=12.22209661 podStartE2EDuration="18.611219476s" podCreationTimestamp="2026-03-13 10:27:13 +0000 UTC" firstStartedPulling="2026-03-13 10:27:23.082037383 +0000 UTC m=+1417.104567516" lastFinishedPulling="2026-03-13 10:27:29.471160249 +0000 UTC m=+1423.493690382" observedRunningTime="2026-03-13 10:27:31.60644589 +0000 UTC m=+1425.628976023" watchObservedRunningTime="2026-03-13 10:27:31.611219476 +0000 UTC 
m=+1425.633749609" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.824169 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.857329 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-66b64f87f7-6z95j" podStartSLOduration=7.857304752 podStartE2EDuration="7.857304752s" podCreationTimestamp="2026-03-13 10:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:27:31.743863922 +0000 UTC m=+1425.766394075" watchObservedRunningTime="2026-03-13 10:27:31.857304752 +0000 UTC m=+1425.879834885" Mar 13 10:27:31 crc kubenswrapper[4632]: I0313 10:27:31.879254 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:32 crc kubenswrapper[4632]: I0313 10:27:32.062754 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5abe7bf3-d44d-4ee5-b568-2d497868f1e5" path="/var/lib/kubelet/pods/5abe7bf3-d44d-4ee5-b568-2d497868f1e5/volumes" Mar 13 10:27:32 crc kubenswrapper[4632]: I0313 10:27:32.446075 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-86bb565f45-ntq5k"] Mar 13 10:27:32 crc kubenswrapper[4632]: I0313 10:27:32.554862 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7fcc47f8dc-lhqhx"] Mar 13 10:27:32 crc kubenswrapper[4632]: I0313 10:27:32.604313 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-86bb565f45-ntq5k" event={"ID":"de2e3cc7-c5cb-449a-a19c-2d671f08c656","Type":"ContainerStarted","Data":"7d396e7b6be3f9a3a20f53a205998858ab9fe01adc0c6f2295e912f92fb9fcc5"} Mar 13 10:27:32 crc kubenswrapper[4632]: I0313 10:27:32.632150 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f597ccc7c-zgmpr" event={"ID":"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb","Type":"ContainerStarted","Data":"2b98409c671c4984548f0269cc4d51793d9e2a7f34b6828d8febd63ffbc09eee"} Mar 13 10:27:32 crc kubenswrapper[4632]: I0313 10:27:32.634015 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:32 crc kubenswrapper[4632]: I0313 10:27:32.670866 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6785ba8c-a47b-4851-945e-c07ccecb9911","Type":"ContainerStarted","Data":"341d5aa1c19947aafc036067af21bc4eee52624758dd74337e755e99f3f5eb7b"} Mar 13 10:27:32 crc kubenswrapper[4632]: I0313 10:27:32.671563 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 13 10:27:32 crc kubenswrapper[4632]: I0313 10:27:32.701119 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"536490c7-c218-43ca-b601-84fdf0721b13","Type":"ContainerStarted","Data":"45f5d86800aa7ead2bd3ca8e9cc3cc79ae2d441610ccb1ee742ca8de3f0990d9"} Mar 13 10:27:32 crc kubenswrapper[4632]: I0313 10:27:32.702609 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6f597ccc7c-zgmpr" podStartSLOduration=8.702579492 podStartE2EDuration="8.702579492s" podCreationTimestamp="2026-03-13 10:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:27:32.673927165 +0000 UTC 
m=+1426.696457308" watchObservedRunningTime="2026-03-13 10:27:32.702579492 +0000 UTC m=+1426.725109625" Mar 13 10:27:32 crc kubenswrapper[4632]: I0313 10:27:32.722211 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=11.722191559 podStartE2EDuration="11.722191559s" podCreationTimestamp="2026-03-13 10:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:27:32.715858825 +0000 UTC m=+1426.738388958" watchObservedRunningTime="2026-03-13 10:27:32.722191559 +0000 UTC m=+1426.744721692" Mar 13 10:27:32 crc kubenswrapper[4632]: I0313 10:27:32.737786 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-c959f64fb-hx4t8" event={"ID":"53145947-4584-4cef-b085-a0e0f550dde9","Type":"ContainerStarted","Data":"4b6b5b347edd33f31b21125267ebc982899aa7660b332796152bdc7475805cd9"} Mar 13 10:27:32 crc kubenswrapper[4632]: I0313 10:27:32.738200 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-c959f64fb-hx4t8" Mar 13 10:27:32 crc kubenswrapper[4632]: I0313 10:27:32.778232 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-c959f64fb-hx4t8" podStartSLOduration=8.778212602 podStartE2EDuration="8.778212602s" podCreationTimestamp="2026-03-13 10:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:27:32.764450107 +0000 UTC m=+1426.786980240" watchObservedRunningTime="2026-03-13 10:27:32.778212602 +0000 UTC m=+1426.800742735" Mar 13 10:27:33 crc kubenswrapper[4632]: I0313 10:27:33.766385 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-86bb565f45-ntq5k" event={"ID":"de2e3cc7-c5cb-449a-a19c-2d671f08c656","Type":"ContainerStarted","Data":"24a4b974cc05b9cc4e7554437e75e3cf003b41e725d29de74b6b204902d42317"} Mar 13 10:27:33 crc kubenswrapper[4632]: I0313 10:27:33.768302 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:33 crc kubenswrapper[4632]: I0313 10:27:33.783565 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7fcc47f8dc-lhqhx" event={"ID":"00b138c6-9e7c-4782-8454-1a4c035b1fbc","Type":"ContainerStarted","Data":"b13fdea311401efe2917beec715abcb0d7c10958cf6ad0d72c84fbe190534cae"} Mar 13 10:27:33 crc kubenswrapper[4632]: I0313 10:27:33.783831 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7fcc47f8dc-lhqhx" event={"ID":"00b138c6-9e7c-4782-8454-1a4c035b1fbc","Type":"ContainerStarted","Data":"4b879070327e10dfbbf89f023602552a89f6b74e9ae41bf4fd6224b99c3ddb61"} Mar 13 10:27:33 crc kubenswrapper[4632]: I0313 10:27:33.784826 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:33 crc kubenswrapper[4632]: I0313 10:27:33.793961 4632 generic.go:334] "Generic (PLEG): container finished" podID="8bca285e-17f7-4505-8a25-21f5ee739584" containerID="1293c4aa6c50a69d6aecb56e9f4df43ee392e8db0df66dd79160ca393da72310" exitCode=1 Mar 13 10:27:33 crc kubenswrapper[4632]: I0313 10:27:33.794071 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66b64f87f7-6z95j" 
event={"ID":"8bca285e-17f7-4505-8a25-21f5ee739584","Type":"ContainerDied","Data":"1293c4aa6c50a69d6aecb56e9f4df43ee392e8db0df66dd79160ca393da72310"} Mar 13 10:27:33 crc kubenswrapper[4632]: I0313 10:27:33.794894 4632 scope.go:117] "RemoveContainer" containerID="1293c4aa6c50a69d6aecb56e9f4df43ee392e8db0df66dd79160ca393da72310" Mar 13 10:27:33 crc kubenswrapper[4632]: I0313 10:27:33.807311 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-86bb565f45-ntq5k" podStartSLOduration=2.807285153 podStartE2EDuration="2.807285153s" podCreationTimestamp="2026-03-13 10:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:27:33.79648559 +0000 UTC m=+1427.819015723" watchObservedRunningTime="2026-03-13 10:27:33.807285153 +0000 UTC m=+1427.829815286" Mar 13 10:27:33 crc kubenswrapper[4632]: I0313 10:27:33.821323 4632 generic.go:334] "Generic (PLEG): container finished" podID="a721ddbf-6e3d-4c04-9fd5-52a29a4926cb" containerID="2b98409c671c4984548f0269cc4d51793d9e2a7f34b6828d8febd63ffbc09eee" exitCode=1 Mar 13 10:27:33 crc kubenswrapper[4632]: I0313 10:27:33.829953 4632 scope.go:117] "RemoveContainer" containerID="2b98409c671c4984548f0269cc4d51793d9e2a7f34b6828d8febd63ffbc09eee" Mar 13 10:27:33 crc kubenswrapper[4632]: I0313 10:27:33.830344 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f597ccc7c-zgmpr" event={"ID":"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb","Type":"ContainerDied","Data":"2b98409c671c4984548f0269cc4d51793d9e2a7f34b6828d8febd63ffbc09eee"} Mar 13 10:27:33 crc kubenswrapper[4632]: E0313 10:27:33.870865 4632 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda721ddbf_6e3d_4c04_9fd5_52a29a4926cb.slice/crio-conmon-2b98409c671c4984548f0269cc4d51793d9e2a7f34b6828d8febd63ffbc09eee.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda721ddbf_6e3d_4c04_9fd5_52a29a4926cb.slice/crio-2b98409c671c4984548f0269cc4d51793d9e2a7f34b6828d8febd63ffbc09eee.scope\": RecentStats: unable to find data in memory cache]" Mar 13 10:27:33 crc kubenswrapper[4632]: I0313 10:27:33.893006 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7fcc47f8dc-lhqhx" podStartSLOduration=2.864273088 podStartE2EDuration="2.864273088s" podCreationTimestamp="2026-03-13 10:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:27:33.862514856 +0000 UTC m=+1427.885044989" watchObservedRunningTime="2026-03-13 10:27:33.864273088 +0000 UTC m=+1427.886803211" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.573806 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5mlm2"] Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.589748 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5mlm2" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.603533 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.603767 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-h4qk2" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.603910 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.631656 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5mlm2"] Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.711189 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-scripts\") pod \"nova-cell0-conductor-db-sync-5mlm2\" (UID: \"5de81924-9bfc-484e-8276-0216f0bbf72c\") " pod="openstack/nova-cell0-conductor-db-sync-5mlm2" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.711292 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-config-data\") pod \"nova-cell0-conductor-db-sync-5mlm2\" (UID: \"5de81924-9bfc-484e-8276-0216f0bbf72c\") " pod="openstack/nova-cell0-conductor-db-sync-5mlm2" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.711328 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5mlm2\" (UID: \"5de81924-9bfc-484e-8276-0216f0bbf72c\") " pod="openstack/nova-cell0-conductor-db-sync-5mlm2" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.711620 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg9bb\" (UniqueName: \"kubernetes.io/projected/5de81924-9bfc-484e-8276-0216f0bbf72c-kube-api-access-bg9bb\") pod \"nova-cell0-conductor-db-sync-5mlm2\" (UID: \"5de81924-9bfc-484e-8276-0216f0bbf72c\") " pod="openstack/nova-cell0-conductor-db-sync-5mlm2" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.817379 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg9bb\" (UniqueName: \"kubernetes.io/projected/5de81924-9bfc-484e-8276-0216f0bbf72c-kube-api-access-bg9bb\") pod \"nova-cell0-conductor-db-sync-5mlm2\" (UID: \"5de81924-9bfc-484e-8276-0216f0bbf72c\") " pod="openstack/nova-cell0-conductor-db-sync-5mlm2" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.817444 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-scripts\") pod \"nova-cell0-conductor-db-sync-5mlm2\" (UID: \"5de81924-9bfc-484e-8276-0216f0bbf72c\") " pod="openstack/nova-cell0-conductor-db-sync-5mlm2" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.817491 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-config-data\") pod \"nova-cell0-conductor-db-sync-5mlm2\" (UID: 
\"5de81924-9bfc-484e-8276-0216f0bbf72c\") " pod="openstack/nova-cell0-conductor-db-sync-5mlm2" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.817516 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5mlm2\" (UID: \"5de81924-9bfc-484e-8276-0216f0bbf72c\") " pod="openstack/nova-cell0-conductor-db-sync-5mlm2" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.858313 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5mlm2\" (UID: \"5de81924-9bfc-484e-8276-0216f0bbf72c\") " pod="openstack/nova-cell0-conductor-db-sync-5mlm2" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.859582 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-config-data\") pod \"nova-cell0-conductor-db-sync-5mlm2\" (UID: \"5de81924-9bfc-484e-8276-0216f0bbf72c\") " pod="openstack/nova-cell0-conductor-db-sync-5mlm2" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.861399 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg9bb\" (UniqueName: \"kubernetes.io/projected/5de81924-9bfc-484e-8276-0216f0bbf72c-kube-api-access-bg9bb\") pod \"nova-cell0-conductor-db-sync-5mlm2\" (UID: \"5de81924-9bfc-484e-8276-0216f0bbf72c\") " pod="openstack/nova-cell0-conductor-db-sync-5mlm2" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.865299 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-scripts\") pod \"nova-cell0-conductor-db-sync-5mlm2\" (UID: \"5de81924-9bfc-484e-8276-0216f0bbf72c\") " pod="openstack/nova-cell0-conductor-db-sync-5mlm2" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.879657 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66b64f87f7-6z95j" event={"ID":"8bca285e-17f7-4505-8a25-21f5ee739584","Type":"ContainerStarted","Data":"e13e391c062e12374c23aa9fcea624691b178835192b17850a99a310e07ef571"} Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.881108 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.892087 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f597ccc7c-zgmpr" event={"ID":"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb","Type":"ContainerStarted","Data":"5149ba55c6bee8420be5c57ba0eca5a40286dd8e8d58bfbcc05f3ba9f2ac9d98"} Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.893047 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.912437 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"536490c7-c218-43ca-b601-84fdf0721b13","Type":"ContainerStarted","Data":"1fedf410b0ec76e58a0488f6518b4a44b2c019d46fe77aa230c5ac262bb32cff"} Mar 13 10:27:34 crc kubenswrapper[4632]: I0313 10:27:34.979569 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5mlm2" Mar 13 10:27:35 crc kubenswrapper[4632]: I0313 10:27:35.395218 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Mar 13 10:27:35 crc kubenswrapper[4632]: I0313 10:27:35.395763 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:27:35 crc kubenswrapper[4632]: I0313 10:27:35.397487 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"c9dfdd84c36e6ac95b45a488b62e176636bdecfbe3a88d3f5d2058d92ebbacdd"} pod="openstack/horizon-7bdb5f7878-ng2k2" containerMessage="Container horizon failed startup probe, will be restarted" Mar 13 10:27:35 crc kubenswrapper[4632]: I0313 10:27:35.397551 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" containerID="cri-o://c9dfdd84c36e6ac95b45a488b62e176636bdecfbe3a88d3f5d2058d92ebbacdd" gracePeriod=30 Mar 13 10:27:35 crc kubenswrapper[4632]: I0313 10:27:35.683803 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5mlm2"] Mar 13 10:27:35 crc kubenswrapper[4632]: I0313 10:27:35.868280 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-689764498d-rg7vt" podUID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 13 10:27:35 crc kubenswrapper[4632]: I0313 10:27:35.944236 4632 generic.go:334] "Generic (PLEG): container finished" podID="8bca285e-17f7-4505-8a25-21f5ee739584" containerID="e13e391c062e12374c23aa9fcea624691b178835192b17850a99a310e07ef571" exitCode=1 Mar 13 10:27:35 crc kubenswrapper[4632]: I0313 10:27:35.944368 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66b64f87f7-6z95j" event={"ID":"8bca285e-17f7-4505-8a25-21f5ee739584","Type":"ContainerDied","Data":"e13e391c062e12374c23aa9fcea624691b178835192b17850a99a310e07ef571"} Mar 13 10:27:35 crc kubenswrapper[4632]: I0313 10:27:35.944413 4632 scope.go:117] "RemoveContainer" containerID="1293c4aa6c50a69d6aecb56e9f4df43ee392e8db0df66dd79160ca393da72310" Mar 13 10:27:35 crc kubenswrapper[4632]: I0313 10:27:35.945116 4632 scope.go:117] "RemoveContainer" containerID="e13e391c062e12374c23aa9fcea624691b178835192b17850a99a310e07ef571" Mar 13 10:27:35 crc kubenswrapper[4632]: E0313 10:27:35.945798 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-66b64f87f7-6z95j_openstack(8bca285e-17f7-4505-8a25-21f5ee739584)\"" pod="openstack/heat-cfnapi-66b64f87f7-6z95j" podUID="8bca285e-17f7-4505-8a25-21f5ee739584" Mar 13 10:27:35 crc kubenswrapper[4632]: I0313 10:27:35.956683 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5mlm2" 
event={"ID":"5de81924-9bfc-484e-8276-0216f0bbf72c","Type":"ContainerStarted","Data":"951453764e4945cebf39ce493b8227004659c89412dc1b5f0146d76b115b3607"} Mar 13 10:27:35 crc kubenswrapper[4632]: I0313 10:27:35.978141 4632 generic.go:334] "Generic (PLEG): container finished" podID="a721ddbf-6e3d-4c04-9fd5-52a29a4926cb" containerID="5149ba55c6bee8420be5c57ba0eca5a40286dd8e8d58bfbcc05f3ba9f2ac9d98" exitCode=1 Mar 13 10:27:35 crc kubenswrapper[4632]: I0313 10:27:35.978425 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f597ccc7c-zgmpr" event={"ID":"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb","Type":"ContainerDied","Data":"5149ba55c6bee8420be5c57ba0eca5a40286dd8e8d58bfbcc05f3ba9f2ac9d98"} Mar 13 10:27:35 crc kubenswrapper[4632]: I0313 10:27:35.980888 4632 scope.go:117] "RemoveContainer" containerID="5149ba55c6bee8420be5c57ba0eca5a40286dd8e8d58bfbcc05f3ba9f2ac9d98" Mar 13 10:27:35 crc kubenswrapper[4632]: E0313 10:27:35.981268 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6f597ccc7c-zgmpr_openstack(a721ddbf-6e3d-4c04-9fd5-52a29a4926cb)\"" pod="openstack/heat-api-6f597ccc7c-zgmpr" podUID="a721ddbf-6e3d-4c04-9fd5-52a29a4926cb" Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.015537 4632 generic.go:334] "Generic (PLEG): container finished" podID="6d73a499-d334-4a7a-9783-640b98760672" containerID="8c839401b1db62da93454588496b8ab534c9e6313aa3bcb0003cb9137b63b2ca" exitCode=0 Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.016677 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c86b4b888-l9574" event={"ID":"6d73a499-d334-4a7a-9783-640b98760672","Type":"ContainerDied","Data":"8c839401b1db62da93454588496b8ab534c9e6313aa3bcb0003cb9137b63b2ca"} Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.128592 4632 scope.go:117] "RemoveContainer" containerID="2b98409c671c4984548f0269cc4d51793d9e2a7f34b6828d8febd63ffbc09eee" Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.619192 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.778535 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-config\") pod \"6d73a499-d334-4a7a-9783-640b98760672\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.778737 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-ovndb-tls-certs\") pod \"6d73a499-d334-4a7a-9783-640b98760672\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.778828 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-httpd-config\") pod \"6d73a499-d334-4a7a-9783-640b98760672\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.778847 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lsp7\" (UniqueName: \"kubernetes.io/projected/6d73a499-d334-4a7a-9783-640b98760672-kube-api-access-6lsp7\") pod \"6d73a499-d334-4a7a-9783-640b98760672\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.778866 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-combined-ca-bundle\") pod \"6d73a499-d334-4a7a-9783-640b98760672\" (UID: \"6d73a499-d334-4a7a-9783-640b98760672\") " Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.811048 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6d73a499-d334-4a7a-9783-640b98760672" (UID: "6d73a499-d334-4a7a-9783-640b98760672"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.811129 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d73a499-d334-4a7a-9783-640b98760672-kube-api-access-6lsp7" (OuterVolumeSpecName: "kube-api-access-6lsp7") pod "6d73a499-d334-4a7a-9783-640b98760672" (UID: "6d73a499-d334-4a7a-9783-640b98760672"). InnerVolumeSpecName "kube-api-access-6lsp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.861037 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d73a499-d334-4a7a-9783-640b98760672" (UID: "6d73a499-d334-4a7a-9783-640b98760672"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.883969 4632 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.884009 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lsp7\" (UniqueName: \"kubernetes.io/projected/6d73a499-d334-4a7a-9783-640b98760672-kube-api-access-6lsp7\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.884025 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.903606 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-config" (OuterVolumeSpecName: "config") pod "6d73a499-d334-4a7a-9783-640b98760672" (UID: "6d73a499-d334-4a7a-9783-640b98760672"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.961375 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6d73a499-d334-4a7a-9783-640b98760672" (UID: "6d73a499-d334-4a7a-9783-640b98760672"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.985552 4632 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:36 crc kubenswrapper[4632]: I0313 10:27:36.985591 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d73a499-d334-4a7a-9783-640b98760672-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:37 crc kubenswrapper[4632]: I0313 10:27:37.027897 4632 scope.go:117] "RemoveContainer" containerID="e13e391c062e12374c23aa9fcea624691b178835192b17850a99a310e07ef571" Mar 13 10:27:37 crc kubenswrapper[4632]: E0313 10:27:37.028346 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-66b64f87f7-6z95j_openstack(8bca285e-17f7-4505-8a25-21f5ee739584)\"" pod="openstack/heat-cfnapi-66b64f87f7-6z95j" podUID="8bca285e-17f7-4505-8a25-21f5ee739584" Mar 13 10:27:37 crc kubenswrapper[4632]: I0313 10:27:37.065689 4632 scope.go:117] "RemoveContainer" containerID="5149ba55c6bee8420be5c57ba0eca5a40286dd8e8d58bfbcc05f3ba9f2ac9d98" Mar 13 10:27:37 crc kubenswrapper[4632]: E0313 10:27:37.066161 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6f597ccc7c-zgmpr_openstack(a721ddbf-6e3d-4c04-9fd5-52a29a4926cb)\"" pod="openstack/heat-api-6f597ccc7c-zgmpr" podUID="a721ddbf-6e3d-4c04-9fd5-52a29a4926cb" Mar 13 10:27:37 crc kubenswrapper[4632]: I0313 10:27:37.073998 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-5c86b4b888-l9574" event={"ID":"6d73a499-d334-4a7a-9783-640b98760672","Type":"ContainerDied","Data":"c201c6ed0f734df3747387db31697b083007f33831a2be5b5b4d93d97a61d2c9"} Mar 13 10:27:37 crc kubenswrapper[4632]: I0313 10:27:37.074281 4632 scope.go:117] "RemoveContainer" containerID="e005b4f09b297f1fe00efd39c9534b7382173cd69b88dca5466ba89c0f3c0de7" Mar 13 10:27:37 crc kubenswrapper[4632]: I0313 10:27:37.075561 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c86b4b888-l9574" Mar 13 10:27:37 crc kubenswrapper[4632]: I0313 10:27:37.112241 4632 scope.go:117] "RemoveContainer" containerID="8c839401b1db62da93454588496b8ab534c9e6313aa3bcb0003cb9137b63b2ca" Mar 13 10:27:37 crc kubenswrapper[4632]: I0313 10:27:37.113463 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"536490c7-c218-43ca-b601-84fdf0721b13","Type":"ContainerStarted","Data":"62a2bf3fb649bf768c02b7ee6d2e17db6f7164a75bcbf4a85047247a011f076e"} Mar 13 10:27:37 crc kubenswrapper[4632]: I0313 10:27:37.114340 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 13 10:27:37 crc kubenswrapper[4632]: I0313 10:27:37.130638 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"aef9680f-df77-4e2e-ac53-9d7530c2270c","Type":"ContainerStarted","Data":"cc5e594c8a0fc7e3fadb59d676c0ee796f4179df2acec563525b4171416f0e00"} Mar 13 10:27:37 crc kubenswrapper[4632]: I0313 10:27:37.178994 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c86b4b888-l9574"] Mar 13 10:27:37 crc kubenswrapper[4632]: I0313 10:27:37.190022 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5c86b4b888-l9574"] Mar 13 10:27:37 crc kubenswrapper[4632]: I0313 10:27:37.219015 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.904201063 podStartE2EDuration="14.218995108s" podCreationTimestamp="2026-03-13 10:27:23 +0000 UTC" firstStartedPulling="2026-03-13 10:27:29.226365712 +0000 UTC m=+1423.248895845" lastFinishedPulling="2026-03-13 10:27:35.541159757 +0000 UTC m=+1429.563689890" observedRunningTime="2026-03-13 10:27:37.187749958 +0000 UTC m=+1431.210280111" watchObservedRunningTime="2026-03-13 10:27:37.218995108 +0000 UTC m=+1431.241525241" Mar 13 10:27:37 crc kubenswrapper[4632]: I0313 10:27:37.230560 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.865906468 podStartE2EDuration="37.230530689s" podCreationTimestamp="2026-03-13 10:27:00 +0000 UTC" firstStartedPulling="2026-03-13 10:27:01.98182196 +0000 UTC m=+1396.004352093" lastFinishedPulling="2026-03-13 10:27:35.346446191 +0000 UTC m=+1429.368976314" observedRunningTime="2026-03-13 10:27:37.214097879 +0000 UTC m=+1431.236628002" watchObservedRunningTime="2026-03-13 10:27:37.230530689 +0000 UTC m=+1431.253060822" Mar 13 10:27:38 crc kubenswrapper[4632]: I0313 10:27:38.066566 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d73a499-d334-4a7a-9783-640b98760672" path="/var/lib/kubelet/pods/6d73a499-d334-4a7a-9783-640b98760672/volumes" Mar 13 10:27:38 crc kubenswrapper[4632]: I0313 10:27:38.970130 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:27:39 crc kubenswrapper[4632]: I0313 10:27:39.048658 4632 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6d96bd7f-txx79"] Mar 13 10:27:39 crc kubenswrapper[4632]: I0313 10:27:39.048901 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" podUID="55712a50-9dcf-44ce-8bac-9aa3ecf65db4" containerName="dnsmasq-dns" containerID="cri-o://fd4487114042316df9fc87c4e68537674ac28dafb4672e7c807d655817ad05cf" gracePeriod=10 Mar 13 10:27:39 crc kubenswrapper[4632]: I0313 10:27:39.213897 4632 generic.go:334] "Generic (PLEG): container finished" podID="55712a50-9dcf-44ce-8bac-9aa3ecf65db4" containerID="fd4487114042316df9fc87c4e68537674ac28dafb4672e7c807d655817ad05cf" exitCode=0 Mar 13 10:27:39 crc kubenswrapper[4632]: I0313 10:27:39.213985 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" event={"ID":"55712a50-9dcf-44ce-8bac-9aa3ecf65db4","Type":"ContainerDied","Data":"fd4487114042316df9fc87c4e68537674ac28dafb4672e7c807d655817ad05cf"} Mar 13 10:27:39 crc kubenswrapper[4632]: I0313 10:27:39.343431 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" podUID="55712a50-9dcf-44ce-8bac-9aa3ecf65db4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.171:5353: connect: connection refused" Mar 13 10:27:39 crc kubenswrapper[4632]: I0313 10:27:39.865754 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:27:39 crc kubenswrapper[4632]: I0313 10:27:39.990403 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-config\") pod \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " Mar 13 10:27:39 crc kubenswrapper[4632]: I0313 10:27:39.990716 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-ovsdbserver-nb\") pod \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " Mar 13 10:27:39 crc kubenswrapper[4632]: I0313 10:27:39.991060 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-dns-svc\") pod \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " Mar 13 10:27:39 crc kubenswrapper[4632]: I0313 10:27:39.991221 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-ovsdbserver-sb\") pod \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " Mar 13 10:27:39 crc kubenswrapper[4632]: I0313 10:27:39.991377 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7sjz\" (UniqueName: \"kubernetes.io/projected/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-kube-api-access-g7sjz\") pod \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " Mar 13 10:27:39 crc kubenswrapper[4632]: I0313 10:27:39.991535 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-dns-swift-storage-0\") pod \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\" (UID: \"55712a50-9dcf-44ce-8bac-9aa3ecf65db4\") " Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.020315 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-kube-api-access-g7sjz" (OuterVolumeSpecName: "kube-api-access-g7sjz") pod "55712a50-9dcf-44ce-8bac-9aa3ecf65db4" (UID: "55712a50-9dcf-44ce-8bac-9aa3ecf65db4"). InnerVolumeSpecName "kube-api-access-g7sjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.113061 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7sjz\" (UniqueName: \"kubernetes.io/projected/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-kube-api-access-g7sjz\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.149487 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-config" (OuterVolumeSpecName: "config") pod "55712a50-9dcf-44ce-8bac-9aa3ecf65db4" (UID: "55712a50-9dcf-44ce-8bac-9aa3ecf65db4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.150024 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "55712a50-9dcf-44ce-8bac-9aa3ecf65db4" (UID: "55712a50-9dcf-44ce-8bac-9aa3ecf65db4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.188921 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "55712a50-9dcf-44ce-8bac-9aa3ecf65db4" (UID: "55712a50-9dcf-44ce-8bac-9aa3ecf65db4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.191911 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "55712a50-9dcf-44ce-8bac-9aa3ecf65db4" (UID: "55712a50-9dcf-44ce-8bac-9aa3ecf65db4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.216869 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.216918 4632 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.216963 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.216981 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.273020 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "55712a50-9dcf-44ce-8bac-9aa3ecf65db4" (UID: "55712a50-9dcf-44ce-8bac-9aa3ecf65db4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.273235 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.321438 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55712a50-9dcf-44ce-8bac-9aa3ecf65db4-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.339028 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.339066 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.339081 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6d96bd7f-txx79" event={"ID":"55712a50-9dcf-44ce-8bac-9aa3ecf65db4","Type":"ContainerDied","Data":"749192ea37afcdb5bad8f984bb1339eb6de202d1531a18803ce98189920ca65c"} Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.339119 4632 scope.go:117] "RemoveContainer" containerID="fd4487114042316df9fc87c4e68537674ac28dafb4672e7c807d655817ad05cf" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.340269 4632 scope.go:117] "RemoveContainer" containerID="5149ba55c6bee8420be5c57ba0eca5a40286dd8e8d58bfbcc05f3ba9f2ac9d98" Mar 13 10:27:40 crc kubenswrapper[4632]: E0313 10:27:40.340734 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6f597ccc7c-zgmpr_openstack(a721ddbf-6e3d-4c04-9fd5-52a29a4926cb)\"" pod="openstack/heat-api-6f597ccc7c-zgmpr" podUID="a721ddbf-6e3d-4c04-9fd5-52a29a4926cb" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.341188 4632 scope.go:117] "RemoveContainer" 
containerID="e13e391c062e12374c23aa9fcea624691b178835192b17850a99a310e07ef571" Mar 13 10:27:40 crc kubenswrapper[4632]: E0313 10:27:40.341481 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-66b64f87f7-6z95j_openstack(8bca285e-17f7-4505-8a25-21f5ee739584)\"" pod="openstack/heat-cfnapi-66b64f87f7-6z95j" podUID="8bca285e-17f7-4505-8a25-21f5ee739584" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.388035 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6d96bd7f-txx79"] Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.388727 4632 scope.go:117] "RemoveContainer" containerID="af3fa8988b343c225a97a2143774e273237597ed7c92bf90057d129267e74a5e" Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.407430 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f6d96bd7f-txx79"] Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.464357 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:27:40 crc kubenswrapper[4632]: I0313 10:27:40.464414 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:27:42 crc kubenswrapper[4632]: I0313 10:27:42.058563 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55712a50-9dcf-44ce-8bac-9aa3ecf65db4" path="/var/lib/kubelet/pods/55712a50-9dcf-44ce-8bac-9aa3ecf65db4/volumes" Mar 13 10:27:42 crc kubenswrapper[4632]: I0313 10:27:42.410232 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="6785ba8c-a47b-4851-945e-c07ccecb9911" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.190:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:27:42 crc kubenswrapper[4632]: I0313 10:27:42.410250 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="6785ba8c-a47b-4851-945e-c07ccecb9911" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.190:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:27:43 crc kubenswrapper[4632]: I0313 10:27:43.917087 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7f9df5b5b5-q6dp2" Mar 13 10:27:44 crc kubenswrapper[4632]: I0313 10:27:44.108930 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-b547848c4-bn5vs" Mar 13 10:27:45 crc kubenswrapper[4632]: I0313 10:27:45.028634 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-c959f64fb-hx4t8" Mar 13 10:27:45 crc kubenswrapper[4632]: I0313 10:27:45.097650 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7f9df5b5b5-q6dp2"] Mar 13 10:27:45 crc kubenswrapper[4632]: I0313 10:27:45.097882 4632 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7f9df5b5b5-q6dp2" podUID="757b852e-068c-4885-99b8-af2e6f23e445" containerName="heat-engine" containerID="cri-o://c974019fb18638d059aa9b080871c3232dbdb322c997ebb8d28de7a80fef50a0" gracePeriod=60 Mar 13 10:27:45 crc kubenswrapper[4632]: I0313 10:27:45.128594 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-d856c56c-cmd2q" Mar 13 10:27:45 crc kubenswrapper[4632]: I0313 10:27:45.359312 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7fcc47f8dc-lhqhx" Mar 13 10:27:45 crc kubenswrapper[4632]: I0313 10:27:45.485335 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f597ccc7c-zgmpr"] Mar 13 10:27:45 crc kubenswrapper[4632]: I0313 10:27:45.866890 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-689764498d-rg7vt" podUID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 13 10:27:45 crc kubenswrapper[4632]: I0313 10:27:45.866994 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:27:45 crc kubenswrapper[4632]: I0313 10:27:45.867966 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"433c9aa5a02161c4bc7228b52cc460020479cbbb899bc6549755a59b8ad796f4"} pod="openstack/horizon-689764498d-rg7vt" containerMessage="Container horizon failed startup probe, will be restarted" Mar 13 10:27:45 crc kubenswrapper[4632]: I0313 10:27:45.868015 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-689764498d-rg7vt" podUID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerName="horizon" containerID="cri-o://433c9aa5a02161c4bc7228b52cc460020479cbbb899bc6549755a59b8ad796f4" gracePeriod=30 Mar 13 10:27:47 crc kubenswrapper[4632]: I0313 10:27:47.418123 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="6785ba8c-a47b-4851-945e-c07ccecb9911" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.190:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 10:27:47 crc kubenswrapper[4632]: I0313 10:27:47.418258 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="6785ba8c-a47b-4851-945e-c07ccecb9911" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.190:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 10:27:47 crc kubenswrapper[4632]: I0313 10:27:47.546466 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 13 10:27:47 crc kubenswrapper[4632]: I0313 10:27:47.955477 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-86bb565f45-ntq5k" Mar 13 10:27:48 crc kubenswrapper[4632]: I0313 10:27:48.119479 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-66b64f87f7-6z95j"] Mar 13 10:27:49 crc kubenswrapper[4632]: I0313 10:27:49.508500 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:27:49 crc 
kubenswrapper[4632]: I0313 10:27:49.509025 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="ceilometer-central-agent" containerID="cri-o://0e185e3360ac7d555a53f4a6a5858f9b0a423c695033ffc9d71eb6f71e6ca6e1" gracePeriod=30 Mar 13 10:27:49 crc kubenswrapper[4632]: I0313 10:27:49.511288 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="sg-core" containerID="cri-o://1fedf410b0ec76e58a0488f6518b4a44b2c019d46fe77aa230c5ac262bb32cff" gracePeriod=30 Mar 13 10:27:49 crc kubenswrapper[4632]: I0313 10:27:49.511411 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="proxy-httpd" containerID="cri-o://62a2bf3fb649bf768c02b7ee6d2e17db6f7164a75bcbf4a85047247a011f076e" gracePeriod=30 Mar 13 10:27:49 crc kubenswrapper[4632]: I0313 10:27:49.511452 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="ceilometer-notification-agent" containerID="cri-o://45f5d86800aa7ead2bd3ca8e9cc3cc79ae2d441610ccb1ee742ca8de3f0990d9" gracePeriod=30 Mar 13 10:27:49 crc kubenswrapper[4632]: I0313 10:27:49.567604 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.191:3000/\": EOF" Mar 13 10:27:50 crc kubenswrapper[4632]: I0313 10:27:50.580421 4632 generic.go:334] "Generic (PLEG): container finished" podID="536490c7-c218-43ca-b601-84fdf0721b13" containerID="62a2bf3fb649bf768c02b7ee6d2e17db6f7164a75bcbf4a85047247a011f076e" exitCode=0 Mar 13 10:27:50 crc kubenswrapper[4632]: I0313 10:27:50.580482 4632 generic.go:334] "Generic (PLEG): container finished" podID="536490c7-c218-43ca-b601-84fdf0721b13" containerID="1fedf410b0ec76e58a0488f6518b4a44b2c019d46fe77aa230c5ac262bb32cff" exitCode=2 Mar 13 10:27:50 crc kubenswrapper[4632]: I0313 10:27:50.580495 4632 generic.go:334] "Generic (PLEG): container finished" podID="536490c7-c218-43ca-b601-84fdf0721b13" containerID="45f5d86800aa7ead2bd3ca8e9cc3cc79ae2d441610ccb1ee742ca8de3f0990d9" exitCode=0 Mar 13 10:27:50 crc kubenswrapper[4632]: I0313 10:27:50.580538 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"536490c7-c218-43ca-b601-84fdf0721b13","Type":"ContainerDied","Data":"62a2bf3fb649bf768c02b7ee6d2e17db6f7164a75bcbf4a85047247a011f076e"} Mar 13 10:27:50 crc kubenswrapper[4632]: I0313 10:27:50.580569 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"536490c7-c218-43ca-b601-84fdf0721b13","Type":"ContainerDied","Data":"1fedf410b0ec76e58a0488f6518b4a44b2c019d46fe77aa230c5ac262bb32cff"} Mar 13 10:27:50 crc kubenswrapper[4632]: I0313 10:27:50.580583 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"536490c7-c218-43ca-b601-84fdf0721b13","Type":"ContainerDied","Data":"45f5d86800aa7ead2bd3ca8e9cc3cc79ae2d441610ccb1ee742ca8de3f0990d9"} Mar 13 10:27:51 crc kubenswrapper[4632]: I0313 10:27:51.594599 4632 generic.go:334] "Generic (PLEG): container finished" podID="536490c7-c218-43ca-b601-84fdf0721b13" 
containerID="0e185e3360ac7d555a53f4a6a5858f9b0a423c695033ffc9d71eb6f71e6ca6e1" exitCode=0 Mar 13 10:27:51 crc kubenswrapper[4632]: I0313 10:27:51.594824 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"536490c7-c218-43ca-b601-84fdf0721b13","Type":"ContainerDied","Data":"0e185e3360ac7d555a53f4a6a5858f9b0a423c695033ffc9d71eb6f71e6ca6e1"} Mar 13 10:27:52 crc kubenswrapper[4632]: I0313 10:27:52.619481 4632 generic.go:334] "Generic (PLEG): container finished" podID="757b852e-068c-4885-99b8-af2e6f23e445" containerID="c974019fb18638d059aa9b080871c3232dbdb322c997ebb8d28de7a80fef50a0" exitCode=0 Mar 13 10:27:52 crc kubenswrapper[4632]: I0313 10:27:52.619521 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7f9df5b5b5-q6dp2" event={"ID":"757b852e-068c-4885-99b8-af2e6f23e445","Type":"ContainerDied","Data":"c974019fb18638d059aa9b080871c3232dbdb322c997ebb8d28de7a80fef50a0"} Mar 13 10:27:53 crc kubenswrapper[4632]: E0313 10:27:53.836929 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c974019fb18638d059aa9b080871c3232dbdb322c997ebb8d28de7a80fef50a0 is running failed: container process not found" containerID="c974019fb18638d059aa9b080871c3232dbdb322c997ebb8d28de7a80fef50a0" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 13 10:27:53 crc kubenswrapper[4632]: E0313 10:27:53.868694 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c974019fb18638d059aa9b080871c3232dbdb322c997ebb8d28de7a80fef50a0 is running failed: container process not found" containerID="c974019fb18638d059aa9b080871c3232dbdb322c997ebb8d28de7a80fef50a0" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 13 10:27:53 crc kubenswrapper[4632]: E0313 10:27:53.869441 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c974019fb18638d059aa9b080871c3232dbdb322c997ebb8d28de7a80fef50a0 is running failed: container process not found" containerID="c974019fb18638d059aa9b080871c3232dbdb322c997ebb8d28de7a80fef50a0" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 13 10:27:53 crc kubenswrapper[4632]: E0313 10:27:53.869493 4632 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c974019fb18638d059aa9b080871c3232dbdb322c997ebb8d28de7a80fef50a0 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-7f9df5b5b5-q6dp2" podUID="757b852e-068c-4885-99b8-af2e6f23e445" containerName="heat-engine" Mar 13 10:27:54 crc kubenswrapper[4632]: I0313 10:27:54.314867 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.191:3000/\": dial tcp 10.217.0.191:3000: connect: connection refused" Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.360617 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66b64f87f7-6z95j" Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.375885 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6f597ccc7c-zgmpr" Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.445750 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-combined-ca-bundle\") pod \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\" (UID: \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\") " Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.446362 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-config-data-custom\") pod \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\" (UID: \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\") " Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.446430 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5qrk\" (UniqueName: \"kubernetes.io/projected/8bca285e-17f7-4505-8a25-21f5ee739584-kube-api-access-w5qrk\") pod \"8bca285e-17f7-4505-8a25-21f5ee739584\" (UID: \"8bca285e-17f7-4505-8a25-21f5ee739584\") " Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.446607 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vct9r\" (UniqueName: \"kubernetes.io/projected/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-kube-api-access-vct9r\") pod \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\" (UID: \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\") " Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.446760 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-combined-ca-bundle\") pod \"8bca285e-17f7-4505-8a25-21f5ee739584\" (UID: \"8bca285e-17f7-4505-8a25-21f5ee739584\") " Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.446853 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-config-data-custom\") pod \"8bca285e-17f7-4505-8a25-21f5ee739584\" (UID: \"8bca285e-17f7-4505-8a25-21f5ee739584\") " Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.447018 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-config-data\") pod \"8bca285e-17f7-4505-8a25-21f5ee739584\" (UID: \"8bca285e-17f7-4505-8a25-21f5ee739584\") " Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.447108 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-config-data\") pod \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\" (UID: \"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb\") " Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.488106 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8bca285e-17f7-4505-8a25-21f5ee739584" (UID: "8bca285e-17f7-4505-8a25-21f5ee739584"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.538339 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a721ddbf-6e3d-4c04-9fd5-52a29a4926cb" (UID: "a721ddbf-6e3d-4c04-9fd5-52a29a4926cb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.564246 4632 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.564290 4632 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.570999 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-kube-api-access-vct9r" (OuterVolumeSpecName: "kube-api-access-vct9r") pod "a721ddbf-6e3d-4c04-9fd5-52a29a4926cb" (UID: "a721ddbf-6e3d-4c04-9fd5-52a29a4926cb"). InnerVolumeSpecName "kube-api-access-vct9r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.593593 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bca285e-17f7-4505-8a25-21f5ee739584-kube-api-access-w5qrk" (OuterVolumeSpecName: "kube-api-access-w5qrk") pod "8bca285e-17f7-4505-8a25-21f5ee739584" (UID: "8bca285e-17f7-4505-8a25-21f5ee739584"). InnerVolumeSpecName "kube-api-access-w5qrk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.708978 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5qrk\" (UniqueName: \"kubernetes.io/projected/8bca285e-17f7-4505-8a25-21f5ee739584-kube-api-access-w5qrk\") on node \"crc\" DevicePath \"\""
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.710141 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vct9r\" (UniqueName: \"kubernetes.io/projected/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-kube-api-access-vct9r\") on node \"crc\" DevicePath \"\""
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.710508 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a721ddbf-6e3d-4c04-9fd5-52a29a4926cb" (UID: "a721ddbf-6e3d-4c04-9fd5-52a29a4926cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.725555 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bca285e-17f7-4505-8a25-21f5ee739584" (UID: "8bca285e-17f7-4505-8a25-21f5ee739584"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.740913 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f597ccc7c-zgmpr" event={"ID":"a721ddbf-6e3d-4c04-9fd5-52a29a4926cb","Type":"ContainerDied","Data":"307b8fb6df45672fd631f8323c011769dd77981dd1c4da51a966ca64e7bdf956"}
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.741345 4632 scope.go:117] "RemoveContainer" containerID="5149ba55c6bee8420be5c57ba0eca5a40286dd8e8d58bfbcc05f3ba9f2ac9d98"
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.741381 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f597ccc7c-zgmpr"
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.760165 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66b64f87f7-6z95j" event={"ID":"8bca285e-17f7-4505-8a25-21f5ee739584","Type":"ContainerDied","Data":"17501791bdc7f7056cbbb54c8ba1821e2768aef3ea1d8c030f27232cf3c7d16a"}
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.760304 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66b64f87f7-6z95j"
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.766483 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-config-data" (OuterVolumeSpecName: "config-data") pod "8bca285e-17f7-4505-8a25-21f5ee739584" (UID: "8bca285e-17f7-4505-8a25-21f5ee739584"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.783628 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-config-data" (OuterVolumeSpecName: "config-data") pod "a721ddbf-6e3d-4c04-9fd5-52a29a4926cb" (UID: "a721ddbf-6e3d-4c04-9fd5-52a29a4926cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.812822 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.813478 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.813601 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bca285e-17f7-4505-8a25-21f5ee739584-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:27:58 crc kubenswrapper[4632]: I0313 10:27:58.813703 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:27:59 crc kubenswrapper[4632]: I0313 10:27:59.176592 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-66b64f87f7-6z95j"]
Mar 13 10:27:59 crc kubenswrapper[4632]: I0313 10:27:59.219210 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-66b64f87f7-6z95j"]
Mar 13 10:27:59 crc kubenswrapper[4632]: I0313 10:27:59.243019 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f597ccc7c-zgmpr"]
Mar 13 10:27:59 crc kubenswrapper[4632]: I0313 10:27:59.256438 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6f597ccc7c-zgmpr"]
Mar 13 10:27:59 crc kubenswrapper[4632]: E0313 10:27:59.805080 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-nova-conductor:e43235cb19da04699a53f42b6a75afe9"
Mar 13 10:27:59 crc kubenswrapper[4632]: E0313 10:27:59.805149 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-nova-conductor:e43235cb19da04699a53f42b6a75afe9"
Mar 13 10:27:59 crc kubenswrapper[4632]: E0313 10:27:59.805296 4632 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-nova-conductor:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bg9bb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-5mlm2_openstack(5de81924-9bfc-484e-8276-0216f0bbf72c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 10:27:59 crc kubenswrapper[4632]: E0313 10:27:59.809761 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-5mlm2" podUID="5de81924-9bfc-484e-8276-0216f0bbf72c"
Mar 13 10:27:59 crc kubenswrapper[4632]: I0313 10:27:59.830718 4632 scope.go:117] "RemoveContainer" containerID="e13e391c062e12374c23aa9fcea624691b178835192b17850a99a310e07ef571"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.055442 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bca285e-17f7-4505-8a25-21f5ee739584" path="/var/lib/kubelet/pods/8bca285e-17f7-4505-8a25-21f5ee739584/volumes"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.056609 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a721ddbf-6e3d-4c04-9fd5-52a29a4926cb" path="/var/lib/kubelet/pods/a721ddbf-6e3d-4c04-9fd5-52a29a4926cb/volumes"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.274608 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556628-479rr"]
Mar 13 10:28:00 crc kubenswrapper[4632]: E0313 10:28:00.275471 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55712a50-9dcf-44ce-8bac-9aa3ecf65db4" containerName="init"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.275489 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="55712a50-9dcf-44ce-8bac-9aa3ecf65db4" containerName="init"
Mar 13 10:28:00 crc kubenswrapper[4632]: E0313 10:28:00.275506 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55712a50-9dcf-44ce-8bac-9aa3ecf65db4" containerName="dnsmasq-dns"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.275516 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="55712a50-9dcf-44ce-8bac-9aa3ecf65db4" containerName="dnsmasq-dns"
Mar 13 10:28:00 crc kubenswrapper[4632]: E0313 10:28:00.275535 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bca285e-17f7-4505-8a25-21f5ee739584" containerName="heat-cfnapi"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.275544 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bca285e-17f7-4505-8a25-21f5ee739584" containerName="heat-cfnapi"
Mar 13 10:28:00 crc kubenswrapper[4632]: E0313 10:28:00.275554 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d73a499-d334-4a7a-9783-640b98760672" containerName="neutron-httpd"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.275561 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d73a499-d334-4a7a-9783-640b98760672" containerName="neutron-httpd"
Mar 13 10:28:00 crc kubenswrapper[4632]: E0313 10:28:00.275584 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a721ddbf-6e3d-4c04-9fd5-52a29a4926cb" containerName="heat-api"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.275591 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="a721ddbf-6e3d-4c04-9fd5-52a29a4926cb" containerName="heat-api"
Mar 13 10:28:00 crc kubenswrapper[4632]: E0313 10:28:00.275614 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d73a499-d334-4a7a-9783-640b98760672" containerName="neutron-api"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.275624 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d73a499-d334-4a7a-9783-640b98760672" containerName="neutron-api"
Mar 13 10:28:00 crc kubenswrapper[4632]: E0313 10:28:00.275633 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a721ddbf-6e3d-4c04-9fd5-52a29a4926cb" containerName="heat-api"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.275640 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="a721ddbf-6e3d-4c04-9fd5-52a29a4926cb" containerName="heat-api"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.275874 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d73a499-d334-4a7a-9783-640b98760672" containerName="neutron-api"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.275886 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="a721ddbf-6e3d-4c04-9fd5-52a29a4926cb" containerName="heat-api"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.275896 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="a721ddbf-6e3d-4c04-9fd5-52a29a4926cb" containerName="heat-api"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.275914 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bca285e-17f7-4505-8a25-21f5ee739584" containerName="heat-cfnapi"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.275927 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bca285e-17f7-4505-8a25-21f5ee739584" containerName="heat-cfnapi"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.275985 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="55712a50-9dcf-44ce-8bac-9aa3ecf65db4" containerName="dnsmasq-dns"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.276006 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d73a499-d334-4a7a-9783-640b98760672" containerName="neutron-httpd"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.277056 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556628-479rr"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.283875 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.284113 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.284326 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.379139 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvxks\" (UniqueName: \"kubernetes.io/projected/658f9ba3-69b7-4d2d-8258-bb7bdf272398-kube-api-access-nvxks\") pod \"auto-csr-approver-29556628-479rr\" (UID: \"658f9ba3-69b7-4d2d-8258-bb7bdf272398\") " pod="openshift-infra/auto-csr-approver-29556628-479rr"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.392350 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556628-479rr"]
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.482105 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvxks\" (UniqueName: \"kubernetes.io/projected/658f9ba3-69b7-4d2d-8258-bb7bdf272398-kube-api-access-nvxks\") pod \"auto-csr-approver-29556628-479rr\" (UID: \"658f9ba3-69b7-4d2d-8258-bb7bdf272398\") " pod="openshift-infra/auto-csr-approver-29556628-479rr"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.561634 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvxks\" (UniqueName: \"kubernetes.io/projected/658f9ba3-69b7-4d2d-8258-bb7bdf272398-kube-api-access-nvxks\") pod \"auto-csr-approver-29556628-479rr\" (UID: \"658f9ba3-69b7-4d2d-8258-bb7bdf272398\") " pod="openshift-infra/auto-csr-approver-29556628-479rr"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.613916 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556628-479rr"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.789486 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.792084 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7f9df5b5b5-q6dp2"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.888875 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7f9df5b5b5-q6dp2" event={"ID":"757b852e-068c-4885-99b8-af2e6f23e445","Type":"ContainerDied","Data":"db0a88b20ef1358b7cfb558aebb52cdeba5b5f143eee06ddc98fa0acfb3ab01b"}
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.888968 4632 scope.go:117] "RemoveContainer" containerID="c974019fb18638d059aa9b080871c3232dbdb322c997ebb8d28de7a80fef50a0"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.889106 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7f9df5b5b5-q6dp2"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.894123 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-combined-ca-bundle\") pod \"536490c7-c218-43ca-b601-84fdf0721b13\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") "
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.894174 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-config-data-custom\") pod \"757b852e-068c-4885-99b8-af2e6f23e445\" (UID: \"757b852e-068c-4885-99b8-af2e6f23e445\") "
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.894239 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/536490c7-c218-43ca-b601-84fdf0721b13-log-httpd\") pod \"536490c7-c218-43ca-b601-84fdf0721b13\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") "
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.894267 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/536490c7-c218-43ca-b601-84fdf0721b13-run-httpd\") pod \"536490c7-c218-43ca-b601-84fdf0721b13\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") "
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.894374 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crtzk\" (UniqueName: \"kubernetes.io/projected/536490c7-c218-43ca-b601-84fdf0721b13-kube-api-access-crtzk\") pod \"536490c7-c218-43ca-b601-84fdf0721b13\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") "
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.894430 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-combined-ca-bundle\") pod \"757b852e-068c-4885-99b8-af2e6f23e445\" (UID: \"757b852e-068c-4885-99b8-af2e6f23e445\") "
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.894478 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-scripts\") pod \"536490c7-c218-43ca-b601-84fdf0721b13\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") "
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.894513 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-config-data\") pod \"536490c7-c218-43ca-b601-84fdf0721b13\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") "
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.894528 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5895\" (UniqueName: \"kubernetes.io/projected/757b852e-068c-4885-99b8-af2e6f23e445-kube-api-access-d5895\") pod \"757b852e-068c-4885-99b8-af2e6f23e445\" (UID: \"757b852e-068c-4885-99b8-af2e6f23e445\") "
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.894577 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-config-data\") pod \"757b852e-068c-4885-99b8-af2e6f23e445\" (UID: \"757b852e-068c-4885-99b8-af2e6f23e445\") "
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.894605 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-sg-core-conf-yaml\") pod \"536490c7-c218-43ca-b601-84fdf0721b13\" (UID: \"536490c7-c218-43ca-b601-84fdf0721b13\") "
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.919786 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/757b852e-068c-4885-99b8-af2e6f23e445-kube-api-access-d5895" (OuterVolumeSpecName: "kube-api-access-d5895") pod "757b852e-068c-4885-99b8-af2e6f23e445" (UID: "757b852e-068c-4885-99b8-af2e6f23e445"). InnerVolumeSpecName "kube-api-access-d5895". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.921316 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536490c7-c218-43ca-b601-84fdf0721b13-kube-api-access-crtzk" (OuterVolumeSpecName: "kube-api-access-crtzk") pod "536490c7-c218-43ca-b601-84fdf0721b13" (UID: "536490c7-c218-43ca-b601-84fdf0721b13"). InnerVolumeSpecName "kube-api-access-crtzk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.921758 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/536490c7-c218-43ca-b601-84fdf0721b13-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "536490c7-c218-43ca-b601-84fdf0721b13" (UID: "536490c7-c218-43ca-b601-84fdf0721b13"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.935264 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/536490c7-c218-43ca-b601-84fdf0721b13-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "536490c7-c218-43ca-b601-84fdf0721b13" (UID: "536490c7-c218-43ca-b601-84fdf0721b13"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.955468 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"536490c7-c218-43ca-b601-84fdf0721b13","Type":"ContainerDied","Data":"f402196243381c3faf3165d4fe49b7c43a1af16813bae58fca9b53eb4badf807"}
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.955911 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.961593 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "757b852e-068c-4885-99b8-af2e6f23e445" (UID: "757b852e-068c-4885-99b8-af2e6f23e445"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.963385 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-scripts" (OuterVolumeSpecName: "scripts") pod "536490c7-c218-43ca-b601-84fdf0721b13" (UID: "536490c7-c218-43ca-b601-84fdf0721b13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.964351 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "757b852e-068c-4885-99b8-af2e6f23e445" (UID: "757b852e-068c-4885-99b8-af2e6f23e445"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:00 crc kubenswrapper[4632]: I0313 10:28:00.987335 4632 scope.go:117] "RemoveContainer" containerID="62a2bf3fb649bf768c02b7ee6d2e17db6f7164a75bcbf4a85047247a011f076e"
Mar 13 10:28:00 crc kubenswrapper[4632]: E0313 10:28:00.987353 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-nova-conductor:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-5mlm2" podUID="5de81924-9bfc-484e-8276-0216f0bbf72c"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:00.998822 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crtzk\" (UniqueName: \"kubernetes.io/projected/536490c7-c218-43ca-b601-84fdf0721b13-kube-api-access-crtzk\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:00.998856 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:00.998869 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:00.998883 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5895\" (UniqueName: \"kubernetes.io/projected/757b852e-068c-4885-99b8-af2e6f23e445-kube-api-access-d5895\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:00.998896 4632 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:00.998908 4632 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/536490c7-c218-43ca-b601-84fdf0721b13-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:00.998920 4632 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/536490c7-c218-43ca-b601-84fdf0721b13-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.020509 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "536490c7-c218-43ca-b601-84fdf0721b13" (UID: "536490c7-c218-43ca-b601-84fdf0721b13"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.082101 4632 scope.go:117] "RemoveContainer" containerID="1fedf410b0ec76e58a0488f6518b4a44b2c019d46fe77aa230c5ac262bb32cff"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.084777 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-config-data" (OuterVolumeSpecName: "config-data") pod "757b852e-068c-4885-99b8-af2e6f23e445" (UID: "757b852e-068c-4885-99b8-af2e6f23e445"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.101372 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/757b852e-068c-4885-99b8-af2e6f23e445-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.101417 4632 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.129970 4632 scope.go:117] "RemoveContainer" containerID="45f5d86800aa7ead2bd3ca8e9cc3cc79ae2d441610ccb1ee742ca8de3f0990d9"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.185699 4632 scope.go:117] "RemoveContainer" containerID="0e185e3360ac7d555a53f4a6a5858f9b0a423c695033ffc9d71eb6f71e6ca6e1"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.268354 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7f9df5b5b5-q6dp2"]
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.278473 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7f9df5b5b5-q6dp2"]
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.287698 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-config-data" (OuterVolumeSpecName: "config-data") pod "536490c7-c218-43ca-b601-84fdf0721b13" (UID: "536490c7-c218-43ca-b601-84fdf0721b13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.304263 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "536490c7-c218-43ca-b601-84fdf0721b13" (UID: "536490c7-c218-43ca-b601-84fdf0721b13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.318368 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.318412 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/536490c7-c218-43ca-b601-84fdf0721b13-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.431097 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556628-479rr"]
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.600529 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.610875 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.648577 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 13 10:28:01 crc kubenswrapper[4632]: E0313 10:28:01.649265 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="proxy-httpd"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.649292 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="proxy-httpd"
Mar 13 10:28:01 crc kubenswrapper[4632]: E0313 10:28:01.649310 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="sg-core"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.649318 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="sg-core"
Mar 13 10:28:01 crc kubenswrapper[4632]: E0313 10:28:01.649331 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bca285e-17f7-4505-8a25-21f5ee739584" containerName="heat-cfnapi"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.649339 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bca285e-17f7-4505-8a25-21f5ee739584" containerName="heat-cfnapi"
Mar 13 10:28:01 crc kubenswrapper[4632]: E0313 10:28:01.649359 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="ceilometer-notification-agent"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.649365 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="ceilometer-notification-agent"
Mar 13 10:28:01 crc kubenswrapper[4632]: E0313 10:28:01.649403 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="ceilometer-central-agent"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.649414 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="ceilometer-central-agent"
Mar 13 10:28:01 crc kubenswrapper[4632]: E0313 10:28:01.649427 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="757b852e-068c-4885-99b8-af2e6f23e445" containerName="heat-engine"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.649435 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="757b852e-068c-4885-99b8-af2e6f23e445" containerName="heat-engine"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.649686 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="ceilometer-central-agent"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.649714 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="ceilometer-notification-agent"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.649725 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="sg-core"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.649737 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="757b852e-068c-4885-99b8-af2e6f23e445" containerName="heat-engine"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.649747 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="536490c7-c218-43ca-b601-84fdf0721b13" containerName="proxy-httpd"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.652101 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.660149 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.660591 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.677134 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.770010 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 13 10:28:01 crc kubenswrapper[4632]: E0313 10:28:01.770968 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-wsdt8 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="ae5601e7-98bb-4c10-bd02-b365269a60e5"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.833269 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae5601e7-98bb-4c10-bd02-b365269a60e5-log-httpd\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.833335 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsdt8\" (UniqueName: \"kubernetes.io/projected/ae5601e7-98bb-4c10-bd02-b365269a60e5-kube-api-access-wsdt8\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.833428 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-config-data\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.833475 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae5601e7-98bb-4c10-bd02-b365269a60e5-run-httpd\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.833495 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-scripts\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.833654 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.833710 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.935992 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-config-data\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.936120 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae5601e7-98bb-4c10-bd02-b365269a60e5-run-httpd\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.936149 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-scripts\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.936231 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.936282 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.936334 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae5601e7-98bb-4c10-bd02-b365269a60e5-log-httpd\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.936369 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsdt8\" (UniqueName: \"kubernetes.io/projected/ae5601e7-98bb-4c10-bd02-b365269a60e5-kube-api-access-wsdt8\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.937312 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae5601e7-98bb-4c10-bd02-b365269a60e5-run-httpd\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.937619 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae5601e7-98bb-4c10-bd02-b365269a60e5-log-httpd\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.949898 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-config-data\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.951782 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-scripts\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.951779 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.951954 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.963230 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsdt8\" (UniqueName: \"kubernetes.io/projected/ae5601e7-98bb-4c10-bd02-b365269a60e5-kube-api-access-wsdt8\") pod \"ceilometer-0\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") " pod="openstack/ceilometer-0"
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.977833 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556628-479rr" event={"ID":"658f9ba3-69b7-4d2d-8258-bb7bdf272398","Type":"ContainerStarted","Data":"bd4fb0467d5e125fd924fc111e74635f851cb7c2c04439ff732217c17ddbb8dc"}
Mar 13 10:28:01 crc kubenswrapper[4632]: I0313 10:28:01.979207 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.061162 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="536490c7-c218-43ca-b601-84fdf0721b13" path="/var/lib/kubelet/pods/536490c7-c218-43ca-b601-84fdf0721b13/volumes"
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.064169 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="757b852e-068c-4885-99b8-af2e6f23e445" path="/var/lib/kubelet/pods/757b852e-068c-4885-99b8-af2e6f23e445/volumes"
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.072049 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.241961 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-config-data\") pod \"ae5601e7-98bb-4c10-bd02-b365269a60e5\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") "
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.242064 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-sg-core-conf-yaml\") pod \"ae5601e7-98bb-4c10-bd02-b365269a60e5\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") "
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.242112 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae5601e7-98bb-4c10-bd02-b365269a60e5-run-httpd\") pod \"ae5601e7-98bb-4c10-bd02-b365269a60e5\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") "
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.242164 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-combined-ca-bundle\") pod \"ae5601e7-98bb-4c10-bd02-b365269a60e5\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") "
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.242198 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsdt8\" (UniqueName: \"kubernetes.io/projected/ae5601e7-98bb-4c10-bd02-b365269a60e5-kube-api-access-wsdt8\") pod \"ae5601e7-98bb-4c10-bd02-b365269a60e5\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") "
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.242240 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-scripts\") pod \"ae5601e7-98bb-4c10-bd02-b365269a60e5\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") "
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.242265 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae5601e7-98bb-4c10-bd02-b365269a60e5-log-httpd\") pod \"ae5601e7-98bb-4c10-bd02-b365269a60e5\" (UID: \"ae5601e7-98bb-4c10-bd02-b365269a60e5\") "
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.243035 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae5601e7-98bb-4c10-bd02-b365269a60e5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ae5601e7-98bb-4c10-bd02-b365269a60e5" (UID: "ae5601e7-98bb-4c10-bd02-b365269a60e5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.243055 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae5601e7-98bb-4c10-bd02-b365269a60e5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ae5601e7-98bb-4c10-bd02-b365269a60e5" (UID: "ae5601e7-98bb-4c10-bd02-b365269a60e5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.245328 4632 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae5601e7-98bb-4c10-bd02-b365269a60e5-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.245365 4632 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae5601e7-98bb-4c10-bd02-b365269a60e5-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.249064 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-config-data" (OuterVolumeSpecName: "config-data") pod "ae5601e7-98bb-4c10-bd02-b365269a60e5" (UID: "ae5601e7-98bb-4c10-bd02-b365269a60e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.249602 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae5601e7-98bb-4c10-bd02-b365269a60e5" (UID: "ae5601e7-98bb-4c10-bd02-b365269a60e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.253212 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ae5601e7-98bb-4c10-bd02-b365269a60e5" (UID: "ae5601e7-98bb-4c10-bd02-b365269a60e5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.255410 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae5601e7-98bb-4c10-bd02-b365269a60e5-kube-api-access-wsdt8" (OuterVolumeSpecName: "kube-api-access-wsdt8") pod "ae5601e7-98bb-4c10-bd02-b365269a60e5" (UID: "ae5601e7-98bb-4c10-bd02-b365269a60e5"). InnerVolumeSpecName "kube-api-access-wsdt8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.269111 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-scripts" (OuterVolumeSpecName: "scripts") pod "ae5601e7-98bb-4c10-bd02-b365269a60e5" (UID: "ae5601e7-98bb-4c10-bd02-b365269a60e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.347146 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.347191 4632 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.347210 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.347224 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsdt8\" (UniqueName: \"kubernetes.io/projected/ae5601e7-98bb-4c10-bd02-b365269a60e5-kube-api-access-wsdt8\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.347237 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae5601e7-98bb-4c10-bd02-b365269a60e5-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:02 crc kubenswrapper[4632]: I0313 10:28:02.991252 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.077296 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.100590 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.112684 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.116976 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.128340 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.133744 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.133987 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.264012 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-scripts\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.264099 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.264141 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.264262 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a89045f-ad86-47f4-9837-ccae12089508-log-httpd\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.264369 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a89045f-ad86-47f4-9837-ccae12089508-run-httpd\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.264404 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vmhd\" (UniqueName: \"kubernetes.io/projected/8a89045f-ad86-47f4-9837-ccae12089508-kube-api-access-8vmhd\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.265485 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-config-data\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.367404 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-config-data\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.367494 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-scripts\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.367530 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.367555 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.367572 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a89045f-ad86-47f4-9837-ccae12089508-log-httpd\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.367606 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a89045f-ad86-47f4-9837-ccae12089508-run-httpd\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.367621 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vmhd\" (UniqueName: \"kubernetes.io/projected/8a89045f-ad86-47f4-9837-ccae12089508-kube-api-access-8vmhd\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.372365 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a89045f-ad86-47f4-9837-ccae12089508-log-httpd\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.372610 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a89045f-ad86-47f4-9837-ccae12089508-run-httpd\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.375053 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-scripts\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.375304 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.375333 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.389492 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-config-data\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.390491 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vmhd\" (UniqueName: \"kubernetes.io/projected/8a89045f-ad86-47f4-9837-ccae12089508-kube-api-access-8vmhd\") pod \"ceilometer-0\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.461397 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 13 10:28:03 crc kubenswrapper[4632]: I0313 10:28:03.983560 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 13 10:28:04 crc kubenswrapper[4632]: I0313 10:28:04.008469 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556628-479rr" event={"ID":"658f9ba3-69b7-4d2d-8258-bb7bdf272398","Type":"ContainerStarted","Data":"dccd7606dfc8be32af7f5d6d0a4bf2a63f79937bfd68d93b573f727a7eb9e402"}
Mar 13 10:28:04 crc kubenswrapper[4632]: I0313 10:28:04.035479 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556628-479rr" podStartSLOduration=2.777339145 podStartE2EDuration="4.035457336s" podCreationTimestamp="2026-03-13 10:28:00 +0000 UTC" firstStartedPulling="2026-03-13 10:28:01.39823517 +0000 UTC m=+1455.420765303" lastFinishedPulling="2026-03-13 10:28:02.656353361 +0000 UTC m=+1456.678883494" observedRunningTime="2026-03-13 10:28:04.0281987 +0000 UTC m=+1458.050728833" watchObservedRunningTime="2026-03-13 10:28:04.035457336 +0000 UTC m=+1458.057987469"
Mar 13 10:28:04 crc kubenswrapper[4632]: I0313 10:28:04.068118 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae5601e7-98bb-4c10-bd02-b365269a60e5" path="/var/lib/kubelet/pods/ae5601e7-98bb-4c10-bd02-b365269a60e5/volumes"
Mar 13 10:28:05 crc kubenswrapper[4632]: I0313 10:28:05.112046 4632 generic.go:334] "Generic (PLEG): container finished" podID="658f9ba3-69b7-4d2d-8258-bb7bdf272398" containerID="dccd7606dfc8be32af7f5d6d0a4bf2a63f79937bfd68d93b573f727a7eb9e402" exitCode=0
Mar 13 10:28:05 crc kubenswrapper[4632]: I0313 10:28:05.112416 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556628-479rr" event={"ID":"658f9ba3-69b7-4d2d-8258-bb7bdf272398","Type":"ContainerDied","Data":"dccd7606dfc8be32af7f5d6d0a4bf2a63f79937bfd68d93b573f727a7eb9e402"}
Mar 13 10:28:05 crc kubenswrapper[4632]: I0313 10:28:05.134824 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a89045f-ad86-47f4-9837-ccae12089508","Type":"ContainerStarted","Data":"20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874"}
Mar 13 10:28:05 crc kubenswrapper[4632]: I0313 10:28:05.134875 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a89045f-ad86-47f4-9837-ccae12089508","Type":"ContainerStarted","Data":"9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0"}
Mar 13 10:28:05 crc kubenswrapper[4632]: I0313 10:28:05.134886 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a89045f-ad86-47f4-9837-ccae12089508","Type":"ContainerStarted","Data":"f18f1919b2952189a9f9202d1451d5a08fb72b179ba9686af3513e79b4500446"}
Mar 13 10:28:06 crc kubenswrapper[4632]: I0313 10:28:06.149428 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a89045f-ad86-47f4-9837-ccae12089508","Type":"ContainerStarted","Data":"a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688"}
Mar 13 10:28:06 crc kubenswrapper[4632]: I0313 10:28:06.157261 4632 generic.go:334] "Generic (PLEG): container finished" podID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerID="c9dfdd84c36e6ac95b45a488b62e176636bdecfbe3a88d3f5d2058d92ebbacdd" exitCode=137
Mar 13 10:28:06 crc kubenswrapper[4632]: I0313 10:28:06.157341 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb5f7878-ng2k2" event={"ID":"3e4afb91-ce26-4325-89c9-2542da2ec48a","Type":"ContainerDied","Data":"c9dfdd84c36e6ac95b45a488b62e176636bdecfbe3a88d3f5d2058d92ebbacdd"}
Mar 13 10:28:06 crc kubenswrapper[4632]: I0313 10:28:06.157419 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb5f7878-ng2k2" event={"ID":"3e4afb91-ce26-4325-89c9-2542da2ec48a","Type":"ContainerStarted","Data":"2dbb3ede37abc9f5b483ae48b13ac3ed8913ac4529c34c39494b1541e21ce00b"}
Mar 13 10:28:06 crc kubenswrapper[4632]: I0313 10:28:06.157445 4632 scope.go:117] "RemoveContainer" containerID="dc4a058f6feb7822333693352f32f5677ff03988b7b5b71005c85c4bf733b402"
Mar 13 10:28:06 crc kubenswrapper[4632]: I0313 10:28:06.611093 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556628-479rr"
Mar 13 10:28:06 crc kubenswrapper[4632]: I0313 10:28:06.763050 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvxks\" (UniqueName: \"kubernetes.io/projected/658f9ba3-69b7-4d2d-8258-bb7bdf272398-kube-api-access-nvxks\") pod \"658f9ba3-69b7-4d2d-8258-bb7bdf272398\" (UID: \"658f9ba3-69b7-4d2d-8258-bb7bdf272398\") "
Mar 13 10:28:06 crc kubenswrapper[4632]: I0313 10:28:06.771179 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/658f9ba3-69b7-4d2d-8258-bb7bdf272398-kube-api-access-nvxks" (OuterVolumeSpecName: "kube-api-access-nvxks") pod "658f9ba3-69b7-4d2d-8258-bb7bdf272398" (UID: "658f9ba3-69b7-4d2d-8258-bb7bdf272398"). InnerVolumeSpecName "kube-api-access-nvxks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:28:06 crc kubenswrapper[4632]: I0313 10:28:06.866127 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvxks\" (UniqueName: \"kubernetes.io/projected/658f9ba3-69b7-4d2d-8258-bb7bdf272398-kube-api-access-nvxks\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:07 crc kubenswrapper[4632]: I0313 10:28:07.168565 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556622-7428t"]
Mar 13 10:28:07 crc kubenswrapper[4632]: I0313 10:28:07.179439 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556628-479rr" event={"ID":"658f9ba3-69b7-4d2d-8258-bb7bdf272398","Type":"ContainerDied","Data":"bd4fb0467d5e125fd924fc111e74635f851cb7c2c04439ff732217c17ddbb8dc"}
Mar 13 10:28:07 crc kubenswrapper[4632]: I0313 10:28:07.179485 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd4fb0467d5e125fd924fc111e74635f851cb7c2c04439ff732217c17ddbb8dc"
Mar 13 10:28:07 crc kubenswrapper[4632]: I0313 10:28:07.179664 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556628-479rr"
Mar 13 10:28:07 crc kubenswrapper[4632]: I0313 10:28:07.224579 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556622-7428t"]
Mar 13 10:28:08 crc kubenswrapper[4632]: I0313 10:28:08.069069 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bedc1d17-f5c4-4a62-ab0c-f20a002e859b" path="/var/lib/kubelet/pods/bedc1d17-f5c4-4a62-ab0c-f20a002e859b/volumes"
Mar 13 10:28:08 crc kubenswrapper[4632]: I0313 10:28:08.192835 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a89045f-ad86-47f4-9837-ccae12089508","Type":"ContainerStarted","Data":"33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf"}
Mar 13 10:28:08 crc kubenswrapper[4632]: I0313 10:28:08.194147 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 13 10:28:08 crc kubenswrapper[4632]: I0313 10:28:08.219968 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6769290909999999 podStartE2EDuration="5.219931259s" podCreationTimestamp="2026-03-13 10:28:03 +0000 UTC" firstStartedPulling="2026-03-13 10:28:04.007353453 +0000 UTC m=+1458.029883586" lastFinishedPulling="2026-03-13 10:28:07.550355621 +0000 UTC m=+1461.572885754" observedRunningTime="2026-03-13 10:28:08.21341748 +0000 UTC m=+1462.235947613" watchObservedRunningTime="2026-03-13 10:28:08.219931259 +0000 UTC m=+1462.242461412"
Mar 13 10:28:10 crc kubenswrapper[4632]: I0313 10:28:10.460553 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 10:28:10 crc kubenswrapper[4632]: I0313 10:28:10.461187 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 10:28:10 crc kubenswrapper[4632]: I0313
10:28:10.462150 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:28:10 crc kubenswrapper[4632]: I0313 10:28:10.463051 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a148dfa9ef48de458189e9fda19ce88937bedd25c3ec76e22d14f43a4745805f"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 10:28:10 crc kubenswrapper[4632]: I0313 10:28:10.463129 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://a148dfa9ef48de458189e9fda19ce88937bedd25c3ec76e22d14f43a4745805f" gracePeriod=600 Mar 13 10:28:11 crc kubenswrapper[4632]: I0313 10:28:11.224601 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="a148dfa9ef48de458189e9fda19ce88937bedd25c3ec76e22d14f43a4745805f" exitCode=0 Mar 13 10:28:11 crc kubenswrapper[4632]: I0313 10:28:11.224690 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"a148dfa9ef48de458189e9fda19ce88937bedd25c3ec76e22d14f43a4745805f"} Mar 13 10:28:11 crc kubenswrapper[4632]: I0313 10:28:11.225424 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f"} Mar 13 10:28:11 crc kubenswrapper[4632]: I0313 10:28:11.225484 4632 scope.go:117] "RemoveContainer" containerID="e9a22f93dffae95945f5e47a3d15b0ebe11dc6b72712dcbe34fa0191ff687b27" Mar 13 10:28:14 crc kubenswrapper[4632]: I0313 10:28:14.835555 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:28:14 crc kubenswrapper[4632]: I0313 10:28:14.836212 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a89045f-ad86-47f4-9837-ccae12089508" containerName="ceilometer-central-agent" containerID="cri-o://9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0" gracePeriod=30 Mar 13 10:28:14 crc kubenswrapper[4632]: I0313 10:28:14.836360 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a89045f-ad86-47f4-9837-ccae12089508" containerName="ceilometer-notification-agent" containerID="cri-o://20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874" gracePeriod=30 Mar 13 10:28:14 crc kubenswrapper[4632]: I0313 10:28:14.836432 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a89045f-ad86-47f4-9837-ccae12089508" containerName="sg-core" containerID="cri-o://a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688" gracePeriod=30 Mar 13 10:28:14 crc kubenswrapper[4632]: I0313 10:28:14.836542 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a89045f-ad86-47f4-9837-ccae12089508" containerName="proxy-httpd" 
containerID="cri-o://33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf" gracePeriod=30 Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.294160 4632 generic.go:334] "Generic (PLEG): container finished" podID="8a89045f-ad86-47f4-9837-ccae12089508" containerID="33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf" exitCode=0 Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.294527 4632 generic.go:334] "Generic (PLEG): container finished" podID="8a89045f-ad86-47f4-9837-ccae12089508" containerID="a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688" exitCode=2 Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.294550 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a89045f-ad86-47f4-9837-ccae12089508","Type":"ContainerDied","Data":"33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf"} Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.294594 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a89045f-ad86-47f4-9837-ccae12089508","Type":"ContainerDied","Data":"a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688"} Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.395130 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.395208 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.395866 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Mar 13 10:28:15 crc kubenswrapper[4632]: E0313 10:28:15.711067 4632 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a89045f_ad86_47f4_9837_ccae12089508.slice/crio-20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874.scope\": RecentStats: unable to find data in memory cache]" Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.824913 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.953553 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a89045f-ad86-47f4-9837-ccae12089508-run-httpd\") pod \"8a89045f-ad86-47f4-9837-ccae12089508\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.953976 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-config-data\") pod \"8a89045f-ad86-47f4-9837-ccae12089508\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.953997 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-sg-core-conf-yaml\") pod \"8a89045f-ad86-47f4-9837-ccae12089508\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.954017 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-scripts\") pod \"8a89045f-ad86-47f4-9837-ccae12089508\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.954055 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-combined-ca-bundle\") pod \"8a89045f-ad86-47f4-9837-ccae12089508\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.954203 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vmhd\" (UniqueName: \"kubernetes.io/projected/8a89045f-ad86-47f4-9837-ccae12089508-kube-api-access-8vmhd\") pod \"8a89045f-ad86-47f4-9837-ccae12089508\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.954221 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a89045f-ad86-47f4-9837-ccae12089508-log-httpd\") pod \"8a89045f-ad86-47f4-9837-ccae12089508\" (UID: \"8a89045f-ad86-47f4-9837-ccae12089508\") " Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.954370 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a89045f-ad86-47f4-9837-ccae12089508-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8a89045f-ad86-47f4-9837-ccae12089508" (UID: "8a89045f-ad86-47f4-9837-ccae12089508"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.954808 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a89045f-ad86-47f4-9837-ccae12089508-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8a89045f-ad86-47f4-9837-ccae12089508" (UID: "8a89045f-ad86-47f4-9837-ccae12089508"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.954914 4632 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a89045f-ad86-47f4-9837-ccae12089508-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.954927 4632 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a89045f-ad86-47f4-9837-ccae12089508-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.965116 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-scripts" (OuterVolumeSpecName: "scripts") pod "8a89045f-ad86-47f4-9837-ccae12089508" (UID: "8a89045f-ad86-47f4-9837-ccae12089508"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:28:15 crc kubenswrapper[4632]: I0313 10:28:15.965223 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a89045f-ad86-47f4-9837-ccae12089508-kube-api-access-8vmhd" (OuterVolumeSpecName: "kube-api-access-8vmhd") pod "8a89045f-ad86-47f4-9837-ccae12089508" (UID: "8a89045f-ad86-47f4-9837-ccae12089508"). InnerVolumeSpecName "kube-api-access-8vmhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.048261 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8a89045f-ad86-47f4-9837-ccae12089508" (UID: "8a89045f-ad86-47f4-9837-ccae12089508"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.058181 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vmhd\" (UniqueName: \"kubernetes.io/projected/8a89045f-ad86-47f4-9837-ccae12089508-kube-api-access-8vmhd\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.058211 4632 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.058221 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.128871 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a89045f-ad86-47f4-9837-ccae12089508" (UID: "8a89045f-ad86-47f4-9837-ccae12089508"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.160027 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.207076 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-config-data" (OuterVolumeSpecName: "config-data") pod "8a89045f-ad86-47f4-9837-ccae12089508" (UID: "8a89045f-ad86-47f4-9837-ccae12089508"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.261747 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a89045f-ad86-47f4-9837-ccae12089508-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.306721 4632 generic.go:334] "Generic (PLEG): container finished" podID="8a89045f-ad86-47f4-9837-ccae12089508" containerID="20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874" exitCode=0 Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.306751 4632 generic.go:334] "Generic (PLEG): container finished" podID="8a89045f-ad86-47f4-9837-ccae12089508" containerID="9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0" exitCode=0 Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.306835 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.306825 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a89045f-ad86-47f4-9837-ccae12089508","Type":"ContainerDied","Data":"20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874"} Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.306994 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a89045f-ad86-47f4-9837-ccae12089508","Type":"ContainerDied","Data":"9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0"} Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.307018 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a89045f-ad86-47f4-9837-ccae12089508","Type":"ContainerDied","Data":"f18f1919b2952189a9f9202d1451d5a08fb72b179ba9686af3513e79b4500446"} Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.307042 4632 scope.go:117] "RemoveContainer" containerID="33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.315120 4632 generic.go:334] "Generic (PLEG): container finished" podID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerID="433c9aa5a02161c4bc7228b52cc460020479cbbb899bc6549755a59b8ad796f4" exitCode=137 Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.315187 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-689764498d-rg7vt" event={"ID":"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c","Type":"ContainerDied","Data":"433c9aa5a02161c4bc7228b52cc460020479cbbb899bc6549755a59b8ad796f4"} Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.672794 4632 scope.go:117] "RemoveContainer" containerID="a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688" Mar 13 10:28:16 crc kubenswrapper[4632]: 
I0313 10:28:16.701135 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.707151 4632 scope.go:117] "RemoveContainer" containerID="20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.721462 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.734119 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:28:16 crc kubenswrapper[4632]: E0313 10:28:16.734661 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a89045f-ad86-47f4-9837-ccae12089508" containerName="sg-core" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.734700 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a89045f-ad86-47f4-9837-ccae12089508" containerName="sg-core" Mar 13 10:28:16 crc kubenswrapper[4632]: E0313 10:28:16.734753 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a89045f-ad86-47f4-9837-ccae12089508" containerName="ceilometer-central-agent" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.734767 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a89045f-ad86-47f4-9837-ccae12089508" containerName="ceilometer-central-agent" Mar 13 10:28:16 crc kubenswrapper[4632]: E0313 10:28:16.734792 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a89045f-ad86-47f4-9837-ccae12089508" containerName="ceilometer-notification-agent" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.734810 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a89045f-ad86-47f4-9837-ccae12089508" containerName="ceilometer-notification-agent" Mar 13 10:28:16 crc kubenswrapper[4632]: E0313 10:28:16.734868 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658f9ba3-69b7-4d2d-8258-bb7bdf272398" containerName="oc" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.734881 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="658f9ba3-69b7-4d2d-8258-bb7bdf272398" containerName="oc" Mar 13 10:28:16 crc kubenswrapper[4632]: E0313 10:28:16.734910 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a89045f-ad86-47f4-9837-ccae12089508" containerName="proxy-httpd" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.734922 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a89045f-ad86-47f4-9837-ccae12089508" containerName="proxy-httpd" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.735233 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a89045f-ad86-47f4-9837-ccae12089508" containerName="sg-core" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.735272 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a89045f-ad86-47f4-9837-ccae12089508" containerName="ceilometer-notification-agent" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.735295 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="658f9ba3-69b7-4d2d-8258-bb7bdf272398" containerName="oc" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.735313 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a89045f-ad86-47f4-9837-ccae12089508" containerName="proxy-httpd" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.735334 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a89045f-ad86-47f4-9837-ccae12089508" 
containerName="ceilometer-central-agent" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.737311 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.741028 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.741845 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.743674 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.762449 4632 scope.go:117] "RemoveContainer" containerID="9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.796782 4632 scope.go:117] "RemoveContainer" containerID="33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf" Mar 13 10:28:16 crc kubenswrapper[4632]: E0313 10:28:16.798122 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf\": container with ID starting with 33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf not found: ID does not exist" containerID="33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.798155 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf"} err="failed to get container status \"33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf\": rpc error: code = NotFound desc = could not find container \"33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf\": container with ID starting with 33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf not found: ID does not exist" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.798175 4632 scope.go:117] "RemoveContainer" containerID="a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688" Mar 13 10:28:16 crc kubenswrapper[4632]: E0313 10:28:16.803063 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688\": container with ID starting with a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688 not found: ID does not exist" containerID="a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.803242 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688"} err="failed to get container status \"a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688\": rpc error: code = NotFound desc = could not find container \"a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688\": container with ID starting with a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688 not found: ID does not exist" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.803332 4632 scope.go:117] "RemoveContainer" containerID="20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874" Mar 13 10:28:16 crc 
kubenswrapper[4632]: E0313 10:28:16.803824 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874\": container with ID starting with 20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874 not found: ID does not exist" containerID="20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.803879 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874"} err="failed to get container status \"20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874\": rpc error: code = NotFound desc = could not find container \"20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874\": container with ID starting with 20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874 not found: ID does not exist" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.803907 4632 scope.go:117] "RemoveContainer" containerID="9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0" Mar 13 10:28:16 crc kubenswrapper[4632]: E0313 10:28:16.805464 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0\": container with ID starting with 9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0 not found: ID does not exist" containerID="9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.805500 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0"} err="failed to get container status \"9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0\": rpc error: code = NotFound desc = could not find container \"9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0\": container with ID starting with 9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0 not found: ID does not exist" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.805524 4632 scope.go:117] "RemoveContainer" containerID="33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.807149 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf"} err="failed to get container status \"33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf\": rpc error: code = NotFound desc = could not find container \"33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf\": container with ID starting with 33db7b55742606799ae347e8dc24186eda0df552d34a8ff8896a3591ab4845bf not found: ID does not exist" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.807189 4632 scope.go:117] "RemoveContainer" containerID="a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.807791 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688"} err="failed to get container status 
\"a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688\": rpc error: code = NotFound desc = could not find container \"a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688\": container with ID starting with a98d9a2847bb67335882172b35537561c8c602078a182d937a48977eab66a688 not found: ID does not exist" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.807814 4632 scope.go:117] "RemoveContainer" containerID="20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.808636 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874"} err="failed to get container status \"20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874\": rpc error: code = NotFound desc = could not find container \"20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874\": container with ID starting with 20cdc9a0316367f683a4923e148488d3441ea045da71a429f3922a31b37d7874 not found: ID does not exist" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.808661 4632 scope.go:117] "RemoveContainer" containerID="9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.808950 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0"} err="failed to get container status \"9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0\": rpc error: code = NotFound desc = could not find container \"9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0\": container with ID starting with 9e4a51f5856378812a236a35be56fcba04f467dfec7755d8229b1d7b6a9e6bd0 not found: ID does not exist" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.808971 4632 scope.go:117] "RemoveContainer" containerID="8ce0185281fb59d0c6bda2b2c484ad3711b4bd3b729b4b8677e75ca6b8e1f739" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.875656 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.875728 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe2db4a-a5d6-4aa3-805b-5144c66afca8-run-httpd\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.876382 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.876468 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe2db4a-a5d6-4aa3-805b-5144c66afca8-log-httpd\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 
10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.876571 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8nnb\" (UniqueName: \"kubernetes.io/projected/efe2db4a-a5d6-4aa3-805b-5144c66afca8-kube-api-access-b8nnb\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.876661 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-config-data\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.876722 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-scripts\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.979129 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.979196 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe2db4a-a5d6-4aa3-805b-5144c66afca8-run-httpd\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.979261 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe2db4a-a5d6-4aa3-805b-5144c66afca8-log-httpd\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.979346 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8nnb\" (UniqueName: \"kubernetes.io/projected/efe2db4a-a5d6-4aa3-805b-5144c66afca8-kube-api-access-b8nnb\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.979399 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-config-data\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.979427 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-scripts\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.979476 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " 
pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.981355 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe2db4a-a5d6-4aa3-805b-5144c66afca8-log-httpd\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.981671 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe2db4a-a5d6-4aa3-805b-5144c66afca8-run-httpd\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.986327 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-scripts\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.986679 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.989878 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:16 crc kubenswrapper[4632]: I0313 10:28:16.991067 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-config-data\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:17 crc kubenswrapper[4632]: I0313 10:28:17.013113 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8nnb\" (UniqueName: \"kubernetes.io/projected/efe2db4a-a5d6-4aa3-805b-5144c66afca8-kube-api-access-b8nnb\") pod \"ceilometer-0\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") " pod="openstack/ceilometer-0" Mar 13 10:28:17 crc kubenswrapper[4632]: I0313 10:28:17.064892 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:28:17 crc kubenswrapper[4632]: I0313 10:28:17.345868 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-689764498d-rg7vt" event={"ID":"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c","Type":"ContainerStarted","Data":"26a7aae686bb479cfcbc8b01e8e10e3fd467e5236d6ffb2ed638373687267401"} Mar 13 10:28:17 crc kubenswrapper[4632]: I0313 10:28:17.699795 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:28:17 crc kubenswrapper[4632]: I0313 10:28:17.886595 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 10:28:17 crc kubenswrapper[4632]: I0313 10:28:17.887123 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="62c1f3f8-e898-4481-88e0-49f0c20228a4" containerName="glance-log" containerID="cri-o://75e2995816c15269a0e0bb8513c4f7b9cace1b33dd417df2fc8f694c18b89fa0" gracePeriod=30 Mar 13 10:28:17 crc kubenswrapper[4632]: I0313 10:28:17.887516 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="62c1f3f8-e898-4481-88e0-49f0c20228a4" containerName="glance-httpd" containerID="cri-o://60e19c69317a817c5bf104bc8691bdf46121d52039ad19099e25f869718b8e19" gracePeriod=30 Mar 13 10:28:18 crc kubenswrapper[4632]: I0313 10:28:18.064495 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a89045f-ad86-47f4-9837-ccae12089508" path="/var/lib/kubelet/pods/8a89045f-ad86-47f4-9837-ccae12089508/volumes" Mar 13 10:28:18 crc kubenswrapper[4632]: I0313 10:28:18.368363 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe2db4a-a5d6-4aa3-805b-5144c66afca8","Type":"ContainerStarted","Data":"e1cfa6629f759d5883022ca95d30f701e046142894117805d6775261138f329b"} Mar 13 10:28:18 crc kubenswrapper[4632]: I0313 10:28:18.368716 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe2db4a-a5d6-4aa3-805b-5144c66afca8","Type":"ContainerStarted","Data":"14a7a36e446e3ed959c2f673ec8a5768ce0dbc971f3d7879cc430c0491594679"} Mar 13 10:28:18 crc kubenswrapper[4632]: I0313 10:28:18.388317 4632 generic.go:334] "Generic (PLEG): container finished" podID="62c1f3f8-e898-4481-88e0-49f0c20228a4" containerID="75e2995816c15269a0e0bb8513c4f7b9cace1b33dd417df2fc8f694c18b89fa0" exitCode=143 Mar 13 10:28:18 crc kubenswrapper[4632]: I0313 10:28:18.388398 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"62c1f3f8-e898-4481-88e0-49f0c20228a4","Type":"ContainerDied","Data":"75e2995816c15269a0e0bb8513c4f7b9cace1b33dd417df2fc8f694c18b89fa0"} Mar 13 10:28:18 crc kubenswrapper[4632]: I0313 10:28:18.401856 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5mlm2" event={"ID":"5de81924-9bfc-484e-8276-0216f0bbf72c","Type":"ContainerStarted","Data":"afb05bb00debb2ea4a81d169362ff2bd38d824053184e249dbe02cc1cb10e945"} Mar 13 10:28:18 crc kubenswrapper[4632]: I0313 10:28:18.440959 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-5mlm2" podStartSLOduration=3.499571533 podStartE2EDuration="44.440921152s" podCreationTimestamp="2026-03-13 10:27:34 +0000 UTC" firstStartedPulling="2026-03-13 10:27:35.835190159 +0000 UTC m=+1429.857720292" 
lastFinishedPulling="2026-03-13 10:28:16.776539778 +0000 UTC m=+1470.799069911" observedRunningTime="2026-03-13 10:28:18.429693559 +0000 UTC m=+1472.452223712" watchObservedRunningTime="2026-03-13 10:28:18.440921152 +0000 UTC m=+1472.463451285" Mar 13 10:28:19 crc kubenswrapper[4632]: I0313 10:28:19.411242 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe2db4a-a5d6-4aa3-805b-5144c66afca8","Type":"ContainerStarted","Data":"fd5b97322f3966f77a91e26539a007fdcb71ceff62ef4db0eeb167b87655caa1"} Mar 13 10:28:19 crc kubenswrapper[4632]: I0313 10:28:19.866357 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 10:28:19 crc kubenswrapper[4632]: I0313 10:28:19.867269 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="050df504-63b9-4453-be2b-f3b0315fb801" containerName="glance-log" containerID="cri-o://4286cd55d064d024725ded90d153143e568de28aeedc6a6060f69501102dd4cb" gracePeriod=30 Mar 13 10:28:19 crc kubenswrapper[4632]: I0313 10:28:19.867392 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="050df504-63b9-4453-be2b-f3b0315fb801" containerName="glance-httpd" containerID="cri-o://a06e9823c7700968605c221a9839cf4f237fe6a7eee8836d69bade62686f4372" gracePeriod=30 Mar 13 10:28:20 crc kubenswrapper[4632]: I0313 10:28:20.433387 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe2db4a-a5d6-4aa3-805b-5144c66afca8","Type":"ContainerStarted","Data":"23c0b66b1c919daecf5753fdf594be220c0bfa625f5256dff7fbdd421ccaaa54"} Mar 13 10:28:20 crc kubenswrapper[4632]: I0313 10:28:20.438628 4632 generic.go:334] "Generic (PLEG): container finished" podID="050df504-63b9-4453-be2b-f3b0315fb801" containerID="4286cd55d064d024725ded90d153143e568de28aeedc6a6060f69501102dd4cb" exitCode=143 Mar 13 10:28:20 crc kubenswrapper[4632]: I0313 10:28:20.438687 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"050df504-63b9-4453-be2b-f3b0315fb801","Type":"ContainerDied","Data":"4286cd55d064d024725ded90d153143e568de28aeedc6a6060f69501102dd4cb"} Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.465423 4632 generic.go:334] "Generic (PLEG): container finished" podID="62c1f3f8-e898-4481-88e0-49f0c20228a4" containerID="60e19c69317a817c5bf104bc8691bdf46121d52039ad19099e25f869718b8e19" exitCode=0 Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.467860 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"62c1f3f8-e898-4481-88e0-49f0c20228a4","Type":"ContainerDied","Data":"60e19c69317a817c5bf104bc8691bdf46121d52039ad19099e25f869718b8e19"} Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.496151 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe2db4a-a5d6-4aa3-805b-5144c66afca8","Type":"ContainerStarted","Data":"3519acd0a5d01d518140b71f8cb487e29259fada712cb5fe0b79aa9039c08494"} Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.724589 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.757777 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.63353254 podStartE2EDuration="6.757754344s" podCreationTimestamp="2026-03-13 10:28:16 +0000 UTC" firstStartedPulling="2026-03-13 10:28:17.715277411 +0000 UTC m=+1471.737807544" lastFinishedPulling="2026-03-13 10:28:20.839499215 +0000 UTC m=+1474.862029348" observedRunningTime="2026-03-13 10:28:22.559067011 +0000 UTC m=+1476.581597154" watchObservedRunningTime="2026-03-13 10:28:22.757754344 +0000 UTC m=+1476.780284477" Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.820405 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbnq2\" (UniqueName: \"kubernetes.io/projected/62c1f3f8-e898-4481-88e0-49f0c20228a4-kube-api-access-gbnq2\") pod \"62c1f3f8-e898-4481-88e0-49f0c20228a4\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.820523 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-public-tls-certs\") pod \"62c1f3f8-e898-4481-88e0-49f0c20228a4\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.820560 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/62c1f3f8-e898-4481-88e0-49f0c20228a4-httpd-run\") pod \"62c1f3f8-e898-4481-88e0-49f0c20228a4\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.820656 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-scripts\") pod \"62c1f3f8-e898-4481-88e0-49f0c20228a4\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.820688 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62c1f3f8-e898-4481-88e0-49f0c20228a4-logs\") pod \"62c1f3f8-e898-4481-88e0-49f0c20228a4\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.820718 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"62c1f3f8-e898-4481-88e0-49f0c20228a4\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.820763 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-config-data\") pod \"62c1f3f8-e898-4481-88e0-49f0c20228a4\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.820847 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-combined-ca-bundle\") pod \"62c1f3f8-e898-4481-88e0-49f0c20228a4\" (UID: \"62c1f3f8-e898-4481-88e0-49f0c20228a4\") " Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.821248 4632 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62c1f3f8-e898-4481-88e0-49f0c20228a4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "62c1f3f8-e898-4481-88e0-49f0c20228a4" (UID: "62c1f3f8-e898-4481-88e0-49f0c20228a4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.821419 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62c1f3f8-e898-4481-88e0-49f0c20228a4-logs" (OuterVolumeSpecName: "logs") pod "62c1f3f8-e898-4481-88e0-49f0c20228a4" (UID: "62c1f3f8-e898-4481-88e0-49f0c20228a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.823928 4632 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/62c1f3f8-e898-4481-88e0-49f0c20228a4-httpd-run\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.824069 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62c1f3f8-e898-4481-88e0-49f0c20228a4-logs\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.842122 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "62c1f3f8-e898-4481-88e0-49f0c20228a4" (UID: "62c1f3f8-e898-4481-88e0-49f0c20228a4"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.842322 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62c1f3f8-e898-4481-88e0-49f0c20228a4-kube-api-access-gbnq2" (OuterVolumeSpecName: "kube-api-access-gbnq2") pod "62c1f3f8-e898-4481-88e0-49f0c20228a4" (UID: "62c1f3f8-e898-4481-88e0-49f0c20228a4"). InnerVolumeSpecName "kube-api-access-gbnq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.875806 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-scripts" (OuterVolumeSpecName: "scripts") pod "62c1f3f8-e898-4481-88e0-49f0c20228a4" (UID: "62c1f3f8-e898-4481-88e0-49f0c20228a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.893480 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62c1f3f8-e898-4481-88e0-49f0c20228a4" (UID: "62c1f3f8-e898-4481-88e0-49f0c20228a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.925365 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbnq2\" (UniqueName: \"kubernetes.io/projected/62c1f3f8-e898-4481-88e0-49f0c20228a4-kube-api-access-gbnq2\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.925404 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.926167 4632 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Mar 13 10:28:22 crc kubenswrapper[4632]: I0313 10:28:22.926201 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.006683 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-config-data" (OuterVolumeSpecName: "config-data") pod "62c1f3f8-e898-4481-88e0-49f0c20228a4" (UID: "62c1f3f8-e898-4481-88e0-49f0c20228a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.019612 4632 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.030120 4632 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.030153 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.040869 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "62c1f3f8-e898-4481-88e0-49f0c20228a4" (UID: "62c1f3f8-e898-4481-88e0-49f0c20228a4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.131584 4632 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62c1f3f8-e898-4481-88e0-49f0c20228a4-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.523613 4632 generic.go:334] "Generic (PLEG): container finished" podID="050df504-63b9-4453-be2b-f3b0315fb801" containerID="a06e9823c7700968605c221a9839cf4f237fe6a7eee8836d69bade62686f4372" exitCode=0
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.524060 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"050df504-63b9-4453-be2b-f3b0315fb801","Type":"ContainerDied","Data":"a06e9823c7700968605c221a9839cf4f237fe6a7eee8836d69bade62686f4372"}
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.531464 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.533201 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"62c1f3f8-e898-4481-88e0-49f0c20228a4","Type":"ContainerDied","Data":"da67a58c5a020c95fb415df6f51542675c8d6697cd1fafcacdcc7d6081f0a9ff"}
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.533269 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.533580 4632 scope.go:117] "RemoveContainer" containerID="60e19c69317a817c5bf104bc8691bdf46121d52039ad19099e25f869718b8e19"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.616104 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.636277 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.656985 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 13 10:28:23 crc kubenswrapper[4632]: E0313 10:28:23.657386 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c1f3f8-e898-4481-88e0-49f0c20228a4" containerName="glance-log"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.657398 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c1f3f8-e898-4481-88e0-49f0c20228a4" containerName="glance-log"
Mar 13 10:28:23 crc kubenswrapper[4632]: E0313 10:28:23.657413 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c1f3f8-e898-4481-88e0-49f0c20228a4" containerName="glance-httpd"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.657419 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c1f3f8-e898-4481-88e0-49f0c20228a4" containerName="glance-httpd"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.657589 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c1f3f8-e898-4481-88e0-49f0c20228a4" containerName="glance-log"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.657613 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c1f3f8-e898-4481-88e0-49f0c20228a4" containerName="glance-httpd"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.683773 4632 scope.go:117] "RemoveContainer" containerID="75e2995816c15269a0e0bb8513c4f7b9cace1b33dd417df2fc8f694c18b89fa0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.694923 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.706648 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.716562 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.790718 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.867110 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.867512 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2394af9-fd85-4291-8d57-c2bff02eccce-logs\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.867626 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2394af9-fd85-4291-8d57-c2bff02eccce-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.867749 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2394af9-fd85-4291-8d57-c2bff02eccce-scripts\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.867905 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2394af9-fd85-4291-8d57-c2bff02eccce-config-data\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.868042 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbr58\" (UniqueName: \"kubernetes.io/projected/a2394af9-fd85-4291-8d57-c2bff02eccce-kube-api-access-rbr58\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.868162 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2394af9-fd85-4291-8d57-c2bff02eccce-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.868297 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2394af9-fd85-4291-8d57-c2bff02eccce-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.967851 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.969741 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.969850 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2394af9-fd85-4291-8d57-c2bff02eccce-logs\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.969900 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2394af9-fd85-4291-8d57-c2bff02eccce-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.969988 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2394af9-fd85-4291-8d57-c2bff02eccce-scripts\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.970049 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2394af9-fd85-4291-8d57-c2bff02eccce-config-data\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.970085 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbr58\" (UniqueName: \"kubernetes.io/projected/a2394af9-fd85-4291-8d57-c2bff02eccce-kube-api-access-rbr58\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.970103 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2394af9-fd85-4291-8d57-c2bff02eccce-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.970148 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2394af9-fd85-4291-8d57-c2bff02eccce-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.970782 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2394af9-fd85-4291-8d57-c2bff02eccce-logs\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.971259 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.971408 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2394af9-fd85-4291-8d57-c2bff02eccce-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.984438 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2394af9-fd85-4291-8d57-c2bff02eccce-config-data\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:23 crc kubenswrapper[4632]: I0313 10:28:23.986296 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2394af9-fd85-4291-8d57-c2bff02eccce-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.004799 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2394af9-fd85-4291-8d57-c2bff02eccce-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.020718 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2394af9-fd85-4291-8d57-c2bff02eccce-scripts\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.034726 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbr58\" (UniqueName: \"kubernetes.io/projected/a2394af9-fd85-4291-8d57-c2bff02eccce-kube-api-access-rbr58\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.063632 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a2394af9-fd85-4291-8d57-c2bff02eccce\") " pod="openstack/glance-default-external-api-0"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.081749 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/050df504-63b9-4453-be2b-f3b0315fb801-httpd-run\") pod \"050df504-63b9-4453-be2b-f3b0315fb801\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") "
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.081843 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-config-data\") pod \"050df504-63b9-4453-be2b-f3b0315fb801\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") "
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.081895 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-combined-ca-bundle\") pod \"050df504-63b9-4453-be2b-f3b0315fb801\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") "
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.082006 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"050df504-63b9-4453-be2b-f3b0315fb801\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") "
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.082103 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5bl8\" (UniqueName: \"kubernetes.io/projected/050df504-63b9-4453-be2b-f3b0315fb801-kube-api-access-z5bl8\") pod \"050df504-63b9-4453-be2b-f3b0315fb801\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") "
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.082134 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-scripts\") pod \"050df504-63b9-4453-be2b-f3b0315fb801\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") "
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.082214 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050df504-63b9-4453-be2b-f3b0315fb801-logs\") pod \"050df504-63b9-4453-be2b-f3b0315fb801\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") "
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.082252 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-internal-tls-certs\") pod \"050df504-63b9-4453-be2b-f3b0315fb801\" (UID: \"050df504-63b9-4453-be2b-f3b0315fb801\") "
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.083688 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050df504-63b9-4453-be2b-f3b0315fb801-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "050df504-63b9-4453-be2b-f3b0315fb801" (UID: "050df504-63b9-4453-be2b-f3b0315fb801"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.084138 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050df504-63b9-4453-be2b-f3b0315fb801-logs" (OuterVolumeSpecName: "logs") pod "050df504-63b9-4453-be2b-f3b0315fb801" (UID: "050df504-63b9-4453-be2b-f3b0315fb801"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.109455 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050df504-63b9-4453-be2b-f3b0315fb801-kube-api-access-z5bl8" (OuterVolumeSpecName: "kube-api-access-z5bl8") pod "050df504-63b9-4453-be2b-f3b0315fb801" (UID: "050df504-63b9-4453-be2b-f3b0315fb801"). InnerVolumeSpecName "kube-api-access-z5bl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.113244 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "050df504-63b9-4453-be2b-f3b0315fb801" (UID: "050df504-63b9-4453-be2b-f3b0315fb801"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.115355 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62c1f3f8-e898-4481-88e0-49f0c20228a4" path="/var/lib/kubelet/pods/62c1f3f8-e898-4481-88e0-49f0c20228a4/volumes"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.120139 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-scripts" (OuterVolumeSpecName: "scripts") pod "050df504-63b9-4453-be2b-f3b0315fb801" (UID: "050df504-63b9-4453-be2b-f3b0315fb801"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.181277 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "050df504-63b9-4453-be2b-f3b0315fb801" (UID: "050df504-63b9-4453-be2b-f3b0315fb801"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.184927 4632 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/050df504-63b9-4453-be2b-f3b0315fb801-httpd-run\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.185016 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.185040 4632 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.185050 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5bl8\" (UniqueName: \"kubernetes.io/projected/050df504-63b9-4453-be2b-f3b0315fb801-kube-api-access-z5bl8\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.185060 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.185069 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050df504-63b9-4453-be2b-f3b0315fb801-logs\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.219526 4632 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.256159 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "050df504-63b9-4453-be2b-f3b0315fb801" (UID: "050df504-63b9-4453-be2b-f3b0315fb801"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.270342 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.288490 4632 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.288516 4632 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.304359 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-config-data" (OuterVolumeSpecName: "config-data") pod "050df504-63b9-4453-be2b-f3b0315fb801" (UID: "050df504-63b9-4453-be2b-f3b0315fb801"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.391571 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050df504-63b9-4453-be2b-f3b0315fb801-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.594604 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.596008 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"050df504-63b9-4453-be2b-f3b0315fb801","Type":"ContainerDied","Data":"639dbfcf9c85b2d6df276ce37ddb572204028d4a54aa36f7c4d3026c9ff6abfc"}
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.596111 4632 scope.go:117] "RemoveContainer" containerID="a06e9823c7700968605c221a9839cf4f237fe6a7eee8836d69bade62686f4372"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.750522 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.763671 4632 scope.go:117] "RemoveContainer" containerID="4286cd55d064d024725ded90d153143e568de28aeedc6a6060f69501102dd4cb"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.780838 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.798125 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 13 10:28:24 crc kubenswrapper[4632]: E0313 10:28:24.798481 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050df504-63b9-4453-be2b-f3b0315fb801" containerName="glance-log"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.798493 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="050df504-63b9-4453-be2b-f3b0315fb801" containerName="glance-log"
Mar 13 10:28:24 crc kubenswrapper[4632]: E0313 10:28:24.798506 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050df504-63b9-4453-be2b-f3b0315fb801" containerName="glance-httpd"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.798513 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="050df504-63b9-4453-be2b-f3b0315fb801" containerName="glance-httpd"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.798710 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="050df504-63b9-4453-be2b-f3b0315fb801" containerName="glance-httpd"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.798726 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="050df504-63b9-4453-be2b-f3b0315fb801" containerName="glance-log"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.799650 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.804308 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.804926 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.804968 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.911959 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97cf3e4a-cbe1-441c-8652-281a30fcf432-logs\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.912303 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97cf3e4a-cbe1-441c-8652-281a30fcf432-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.912365 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97cf3e4a-cbe1-441c-8652-281a30fcf432-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.912418 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97cf3e4a-cbe1-441c-8652-281a30fcf432-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.912465 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khmr9\" (UniqueName: \"kubernetes.io/projected/97cf3e4a-cbe1-441c-8652-281a30fcf432-kube-api-access-khmr9\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.912521 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.912673 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97cf3e4a-cbe1-441c-8652-281a30fcf432-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:24 crc kubenswrapper[4632]: I0313 10:28:24.912729 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97cf3e4a-cbe1-441c-8652-281a30fcf432-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.018502 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97cf3e4a-cbe1-441c-8652-281a30fcf432-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.018575 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khmr9\" (UniqueName: \"kubernetes.io/projected/97cf3e4a-cbe1-441c-8652-281a30fcf432-kube-api-access-khmr9\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.018616 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.018710 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97cf3e4a-cbe1-441c-8652-281a30fcf432-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.018749 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97cf3e4a-cbe1-441c-8652-281a30fcf432-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.018873 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97cf3e4a-cbe1-441c-8652-281a30fcf432-logs\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.018903 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97cf3e4a-cbe1-441c-8652-281a30fcf432-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.018925 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97cf3e4a-cbe1-441c-8652-281a30fcf432-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.024215 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.045873 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97cf3e4a-cbe1-441c-8652-281a30fcf432-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.072641 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97cf3e4a-cbe1-441c-8652-281a30fcf432-logs\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.094573 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97cf3e4a-cbe1-441c-8652-281a30fcf432-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.096379 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97cf3e4a-cbe1-441c-8652-281a30fcf432-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.096685 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khmr9\" (UniqueName: \"kubernetes.io/projected/97cf3e4a-cbe1-441c-8652-281a30fcf432-kube-api-access-khmr9\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.098868 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97cf3e4a-cbe1-441c-8652-281a30fcf432-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.101515 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97cf3e4a-cbe1-441c-8652-281a30fcf432-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.131979 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.140674 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"97cf3e4a-cbe1-441c-8652-281a30fcf432\") " pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.394787 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.437870 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.612417 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a2394af9-fd85-4291-8d57-c2bff02eccce","Type":"ContainerStarted","Data":"0cf5c13e47b817041ef3b078440c25d045c2d95bbad47e2762b765047a98b062"}
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.858376 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-689764498d-rg7vt"
Mar 13 10:28:25 crc kubenswrapper[4632]: I0313 10:28:25.858497 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-689764498d-rg7vt"
Mar 13 10:28:26 crc kubenswrapper[4632]: I0313 10:28:26.082317 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050df504-63b9-4453-be2b-f3b0315fb801" path="/var/lib/kubelet/pods/050df504-63b9-4453-be2b-f3b0315fb801/volumes"
Mar 13 10:28:26 crc kubenswrapper[4632]: I0313 10:28:26.241207 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 13 10:28:26 crc kubenswrapper[4632]: I0313 10:28:26.624990 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97cf3e4a-cbe1-441c-8652-281a30fcf432","Type":"ContainerStarted","Data":"61a795a507a3ddb80280eaf24f52ffcec864d321ef4b538cb1098ee5adec2393"}
Mar 13 10:28:27 crc kubenswrapper[4632]: I0313 10:28:27.170561 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 13 10:28:27 crc kubenswrapper[4632]: I0313 10:28:27.170893 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerName="ceilometer-central-agent" containerID="cri-o://e1cfa6629f759d5883022ca95d30f701e046142894117805d6775261138f329b" gracePeriod=30
Mar 13 10:28:27 crc kubenswrapper[4632]: I0313 10:28:27.171080 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerName="proxy-httpd" containerID="cri-o://3519acd0a5d01d518140b71f8cb487e29259fada712cb5fe0b79aa9039c08494" gracePeriod=30
Mar 13 10:28:27 crc kubenswrapper[4632]: I0313 10:28:27.171141 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerName="sg-core" containerID="cri-o://23c0b66b1c919daecf5753fdf594be220c0bfa625f5256dff7fbdd421ccaaa54" gracePeriod=30
Mar 13 10:28:27 crc kubenswrapper[4632]: I0313 10:28:27.171185 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerName="ceilometer-notification-agent" containerID="cri-o://fd5b97322f3966f77a91e26539a007fdcb71ceff62ef4db0eeb167b87655caa1" gracePeriod=30
Mar 13 10:28:27 crc kubenswrapper[4632]: I0313 10:28:27.698431 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a2394af9-fd85-4291-8d57-c2bff02eccce","Type":"ContainerStarted","Data":"00a40bf94c6c666706b8bac7ae5ff70cf3bc92133876a5b19b0393024843c7de"}
Mar 13 10:28:27 crc kubenswrapper[4632]: I0313 10:28:27.712077 4632 generic.go:334] "Generic (PLEG): container finished" podID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerID="3519acd0a5d01d518140b71f8cb487e29259fada712cb5fe0b79aa9039c08494" exitCode=0
Mar 13 10:28:27 crc kubenswrapper[4632]: I0313 10:28:27.712125 4632 generic.go:334] "Generic (PLEG): container finished" podID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerID="23c0b66b1c919daecf5753fdf594be220c0bfa625f5256dff7fbdd421ccaaa54" exitCode=2
Mar 13 10:28:27 crc kubenswrapper[4632]: I0313 10:28:27.712164 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe2db4a-a5d6-4aa3-805b-5144c66afca8","Type":"ContainerDied","Data":"3519acd0a5d01d518140b71f8cb487e29259fada712cb5fe0b79aa9039c08494"}
Mar 13 10:28:27 crc kubenswrapper[4632]: I0313 10:28:27.712203 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe2db4a-a5d6-4aa3-805b-5144c66afca8","Type":"ContainerDied","Data":"23c0b66b1c919daecf5753fdf594be220c0bfa625f5256dff7fbdd421ccaaa54"}
Mar 13 10:28:27 crc kubenswrapper[4632]: I0313 10:28:27.734200 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97cf3e4a-cbe1-441c-8652-281a30fcf432","Type":"ContainerStarted","Data":"c9b9c1eda565b34af6716a79a4e020a175fe5e6d574afe1ce2632c16c21b446c"}
Mar 13 10:28:28 crc kubenswrapper[4632]: I0313 10:28:28.744631 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a2394af9-fd85-4291-8d57-c2bff02eccce","Type":"ContainerStarted","Data":"8d71651c55d74774fc4c858767799012d9f0bd393e8530074f6103fe0f351e36"}
Mar 13 10:28:28 crc kubenswrapper[4632]: I0313 10:28:28.748656 4632 generic.go:334] "Generic (PLEG): container finished" podID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerID="fd5b97322f3966f77a91e26539a007fdcb71ceff62ef4db0eeb167b87655caa1" exitCode=0
Mar 13 10:28:28 crc kubenswrapper[4632]: I0313 10:28:28.748737 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe2db4a-a5d6-4aa3-805b-5144c66afca8","Type":"ContainerDied","Data":"fd5b97322f3966f77a91e26539a007fdcb71ceff62ef4db0eeb167b87655caa1"}
Mar 13 10:28:28 crc kubenswrapper[4632]: I0313 10:28:28.752354 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97cf3e4a-cbe1-441c-8652-281a30fcf432","Type":"ContainerStarted","Data":"b8ac69eb67c3fe143261ad7193c00cd6a79532e49d72f84ee4793d18275f9cb5"}
Mar 13 10:28:28 crc kubenswrapper[4632]: I0313 10:28:28.801773 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.801756916 podStartE2EDuration="5.801756916s" podCreationTimestamp="2026-03-13 10:28:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:28:28.781917364 +0000 UTC m=+1482.804447487" watchObservedRunningTime="2026-03-13 10:28:28.801756916 +0000 UTC m=+1482.824287049"
Mar 13 10:28:28 crc kubenswrapper[4632]: I0313 10:28:28.806159 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.806143894 podStartE2EDuration="4.806143894s" podCreationTimestamp="2026-03-13 10:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:28:28.797803581 +0000 UTC m=+1482.820333714" watchObservedRunningTime="2026-03-13 10:28:28.806143894 +0000 UTC m=+1482.828674027"
Mar 13 10:28:31 crc kubenswrapper[4632]: I0313 10:28:31.785535 4632 generic.go:334] "Generic (PLEG): container finished" podID="5d10747e-ba77-4986-9d4b-636fcbf823ab" containerID="35f6f30aa35f7a79445d6acba6d7d99ce02bc8679e546b9d8ecccf0df51e3ce6" exitCode=137
Mar 13 10:28:31 crc kubenswrapper[4632]: I0313 10:28:31.785872 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d856c56c-cmd2q" event={"ID":"5d10747e-ba77-4986-9d4b-636fcbf823ab","Type":"ContainerDied","Data":"35f6f30aa35f7a79445d6acba6d7d99ce02bc8679e546b9d8ecccf0df51e3ce6"}
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.144794 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-d856c56c-cmd2q"
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.299790 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prc4f\" (UniqueName: \"kubernetes.io/projected/5d10747e-ba77-4986-9d4b-636fcbf823ab-kube-api-access-prc4f\") pod \"5d10747e-ba77-4986-9d4b-636fcbf823ab\" (UID: \"5d10747e-ba77-4986-9d4b-636fcbf823ab\") "
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.300059 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-config-data\") pod \"5d10747e-ba77-4986-9d4b-636fcbf823ab\" (UID: \"5d10747e-ba77-4986-9d4b-636fcbf823ab\") "
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.300126 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-config-data-custom\") pod \"5d10747e-ba77-4986-9d4b-636fcbf823ab\" (UID: \"5d10747e-ba77-4986-9d4b-636fcbf823ab\") "
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.300181 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-combined-ca-bundle\") pod \"5d10747e-ba77-4986-9d4b-636fcbf823ab\" (UID: \"5d10747e-ba77-4986-9d4b-636fcbf823ab\") "
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.312361 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d10747e-ba77-4986-9d4b-636fcbf823ab-kube-api-access-prc4f" (OuterVolumeSpecName: "kube-api-access-prc4f") pod "5d10747e-ba77-4986-9d4b-636fcbf823ab" (UID: "5d10747e-ba77-4986-9d4b-636fcbf823ab"). InnerVolumeSpecName "kube-api-access-prc4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.317813 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5d10747e-ba77-4986-9d4b-636fcbf823ab" (UID: "5d10747e-ba77-4986-9d4b-636fcbf823ab"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.397902 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d10747e-ba77-4986-9d4b-636fcbf823ab" (UID: "5d10747e-ba77-4986-9d4b-636fcbf823ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.414643 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prc4f\" (UniqueName: \"kubernetes.io/projected/5d10747e-ba77-4986-9d4b-636fcbf823ab-kube-api-access-prc4f\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.415015 4632 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.415028 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.416805 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b547848c4-bn5vs"
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.457344 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-config-data" (OuterVolumeSpecName: "config-data") pod "5d10747e-ba77-4986-9d4b-636fcbf823ab" (UID: "5d10747e-ba77-4986-9d4b-636fcbf823ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.517091 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-config-data\") pod \"07914020-653d-4509-9f60-22726224c7c6\" (UID: \"07914020-653d-4509-9f60-22726224c7c6\") "
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.517244 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn98r\" (UniqueName: \"kubernetes.io/projected/07914020-653d-4509-9f60-22726224c7c6-kube-api-access-nn98r\") pod \"07914020-653d-4509-9f60-22726224c7c6\" (UID: \"07914020-653d-4509-9f60-22726224c7c6\") "
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.517324 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-combined-ca-bundle\") pod \"07914020-653d-4509-9f60-22726224c7c6\" (UID: \"07914020-653d-4509-9f60-22726224c7c6\") "
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.517385 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-config-data-custom\") pod \"07914020-653d-4509-9f60-22726224c7c6\" (UID: \"07914020-653d-4509-9f60-22726224c7c6\") "
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.517973 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d10747e-ba77-4986-9d4b-636fcbf823ab-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.523922 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07914020-653d-4509-9f60-22726224c7c6-kube-api-access-nn98r" (OuterVolumeSpecName: "kube-api-access-nn98r") pod "07914020-653d-4509-9f60-22726224c7c6" (UID: "07914020-653d-4509-9f60-22726224c7c6"). InnerVolumeSpecName "kube-api-access-nn98r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.524406 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "07914020-653d-4509-9f60-22726224c7c6" (UID: "07914020-653d-4509-9f60-22726224c7c6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.555254 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07914020-653d-4509-9f60-22726224c7c6" (UID: "07914020-653d-4509-9f60-22726224c7c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.587387 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-config-data" (OuterVolumeSpecName: "config-data") pod "07914020-653d-4509-9f60-22726224c7c6" (UID: "07914020-653d-4509-9f60-22726224c7c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.619741 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.619798 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn98r\" (UniqueName: \"kubernetes.io/projected/07914020-653d-4509-9f60-22726224c7c6-kube-api-access-nn98r\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.619813 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.619826 4632 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07914020-653d-4509-9f60-22726224c7c6-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.804932 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-d856c56c-cmd2q"
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.805735 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d856c56c-cmd2q" event={"ID":"5d10747e-ba77-4986-9d4b-636fcbf823ab","Type":"ContainerDied","Data":"c4b118bba3eb9eaa2f3d30625225786b624eac290ce33f3a700f116e125abbc7"}
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.805836 4632 scope.go:117] "RemoveContainer" containerID="35f6f30aa35f7a79445d6acba6d7d99ce02bc8679e546b9d8ecccf0df51e3ce6"
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.808549 4632 generic.go:334] "Generic (PLEG): container finished" podID="07914020-653d-4509-9f60-22726224c7c6" containerID="b0d25fd2c9604e3f96cafaee37ffd660ff8a4f27903a0e0bf9e82ba66554c126" exitCode=137
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.808621 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b547848c4-bn5vs" event={"ID":"07914020-653d-4509-9f60-22726224c7c6","Type":"ContainerDied","Data":"b0d25fd2c9604e3f96cafaee37ffd660ff8a4f27903a0e0bf9e82ba66554c126"}
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.808667 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b547848c4-bn5vs" event={"ID":"07914020-653d-4509-9f60-22726224c7c6","Type":"ContainerDied","Data":"bb01c2352414aa3e5bdfcb4abaaae4c47a152945a1d74d64f5cf1228335558e9"}
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.808682 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b547848c4-bn5vs"
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.861342 4632 scope.go:117] "RemoveContainer" containerID="b0d25fd2c9604e3f96cafaee37ffd660ff8a4f27903a0e0bf9e82ba66554c126"
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.880283 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-d856c56c-cmd2q"]
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.899265 4632 scope.go:117] "RemoveContainer" containerID="b0d25fd2c9604e3f96cafaee37ffd660ff8a4f27903a0e0bf9e82ba66554c126"
Mar 13 10:28:32 crc kubenswrapper[4632]: E0313 10:28:32.900250 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0d25fd2c9604e3f96cafaee37ffd660ff8a4f27903a0e0bf9e82ba66554c126\": container with ID starting with b0d25fd2c9604e3f96cafaee37ffd660ff8a4f27903a0e0bf9e82ba66554c126 not found: ID does not exist" containerID="b0d25fd2c9604e3f96cafaee37ffd660ff8a4f27903a0e0bf9e82ba66554c126"
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.900293 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0d25fd2c9604e3f96cafaee37ffd660ff8a4f27903a0e0bf9e82ba66554c126"} err="failed to get container status \"b0d25fd2c9604e3f96cafaee37ffd660ff8a4f27903a0e0bf9e82ba66554c126\": rpc error: code = NotFound desc = could not find container \"b0d25fd2c9604e3f96cafaee37ffd660ff8a4f27903a0e0bf9e82ba66554c126\": container with ID starting with b0d25fd2c9604e3f96cafaee37ffd660ff8a4f27903a0e0bf9e82ba66554c126 not found: ID does not exist"
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.900674 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-d856c56c-cmd2q"]
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.915330 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-b547848c4-bn5vs"]
Mar 13 10:28:32 crc kubenswrapper[4632]: I0313 10:28:32.931798 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-b547848c4-bn5vs"]
Mar 13 10:28:34 crc kubenswrapper[4632]: I0313 10:28:34.057299 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07914020-653d-4509-9f60-22726224c7c6" path="/var/lib/kubelet/pods/07914020-653d-4509-9f60-22726224c7c6/volumes"
Mar 13 10:28:34 crc kubenswrapper[4632]: I0313 10:28:34.058386 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d10747e-ba77-4986-9d4b-636fcbf823ab" path="/var/lib/kubelet/pods/5d10747e-ba77-4986-9d4b-636fcbf823ab/volumes"
Mar 13 10:28:34 crc kubenswrapper[4632]: I0313 10:28:34.271921 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 13 10:28:34 crc kubenswrapper[4632]: I0313 10:28:34.272672 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 13 10:28:34 crc kubenswrapper[4632]: I0313 10:28:34.310429 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Mar 13 10:28:34 crc kubenswrapper[4632]: I0313 10:28:34.328959 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Mar 13 10:28:34 crc kubenswrapper[4632]: I0313 10:28:34.837593 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 13 10:28:34 crc kubenswrapper[4632]: I0313 10:28:34.837639 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.396759 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.397409 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7bdb5f7878-ng2k2"
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.401184 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"2dbb3ede37abc9f5b483ae48b13ac3ed8913ac4529c34c39494b1541e21ce00b"} pod="openstack/horizon-7bdb5f7878-ng2k2" containerMessage="Container horizon failed startup probe, will be restarted"
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.401337 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" containerID="cri-o://2dbb3ede37abc9f5b483ae48b13ac3ed8913ac4529c34c39494b1541e21ce00b" gracePeriod=30
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.439840 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.439891 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.471164 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.487077 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.506594 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.594788 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-scripts\") pod \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") "
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.594878 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-sg-core-conf-yaml\") pod \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") "
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.594917 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe2db4a-a5d6-4aa3-805b-5144c66afca8-log-httpd\") pod \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") "
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.594951 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe2db4a-a5d6-4aa3-805b-5144c66afca8-run-httpd\") pod \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") "
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.595184 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-config-data\") pod \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") "
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.595226 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8nnb\" (UniqueName: \"kubernetes.io/projected/efe2db4a-a5d6-4aa3-805b-5144c66afca8-kube-api-access-b8nnb\") pod \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") "
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.595280 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-combined-ca-bundle\") pod \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\" (UID: \"efe2db4a-a5d6-4aa3-805b-5144c66afca8\") "
Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.596720 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efe2db4a-a5d6-4aa3-805b-5144c66afca8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "efe2db4a-a5d6-4aa3-805b-5144c66afca8" (UID: "efe2db4a-a5d6-4aa3-805b-5144c66afca8"). InnerVolumeSpecName "run-httpd".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.597660 4632 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe2db4a-a5d6-4aa3-805b-5144c66afca8-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.597880 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efe2db4a-a5d6-4aa3-805b-5144c66afca8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "efe2db4a-a5d6-4aa3-805b-5144c66afca8" (UID: "efe2db4a-a5d6-4aa3-805b-5144c66afca8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.625104 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-scripts" (OuterVolumeSpecName: "scripts") pod "efe2db4a-a5d6-4aa3-805b-5144c66afca8" (UID: "efe2db4a-a5d6-4aa3-805b-5144c66afca8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.629805 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe2db4a-a5d6-4aa3-805b-5144c66afca8-kube-api-access-b8nnb" (OuterVolumeSpecName: "kube-api-access-b8nnb") pod "efe2db4a-a5d6-4aa3-805b-5144c66afca8" (UID: "efe2db4a-a5d6-4aa3-805b-5144c66afca8"). InnerVolumeSpecName "kube-api-access-b8nnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.668109 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "efe2db4a-a5d6-4aa3-805b-5144c66afca8" (UID: "efe2db4a-a5d6-4aa3-805b-5144c66afca8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.699192 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8nnb\" (UniqueName: \"kubernetes.io/projected/efe2db4a-a5d6-4aa3-805b-5144c66afca8-kube-api-access-b8nnb\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.699223 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.699233 4632 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.699241 4632 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe2db4a-a5d6-4aa3-805b-5144c66afca8-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.751233 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-config-data" (OuterVolumeSpecName: "config-data") pod "efe2db4a-a5d6-4aa3-805b-5144c66afca8" (UID: "efe2db4a-a5d6-4aa3-805b-5144c66afca8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.767896 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efe2db4a-a5d6-4aa3-805b-5144c66afca8" (UID: "efe2db4a-a5d6-4aa3-805b-5144c66afca8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.801381 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.801425 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe2db4a-a5d6-4aa3-805b-5144c66afca8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.847847 4632 generic.go:334] "Generic (PLEG): container finished" podID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerID="e1cfa6629f759d5883022ca95d30f701e046142894117805d6775261138f329b" exitCode=0 Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.849088 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.851250 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe2db4a-a5d6-4aa3-805b-5144c66afca8","Type":"ContainerDied","Data":"e1cfa6629f759d5883022ca95d30f701e046142894117805d6775261138f329b"} Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.851304 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe2db4a-a5d6-4aa3-805b-5144c66afca8","Type":"ContainerDied","Data":"14a7a36e446e3ed959c2f673ec8a5768ce0dbc971f3d7879cc430c0491594679"} Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.851327 4632 scope.go:117] "RemoveContainer" containerID="3519acd0a5d01d518140b71f8cb487e29259fada712cb5fe0b79aa9039c08494" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.852619 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.852665 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.858098 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-689764498d-rg7vt" podUID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.882718 4632 scope.go:117] "RemoveContainer" containerID="23c0b66b1c919daecf5753fdf594be220c0bfa625f5256dff7fbdd421ccaaa54" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.913410 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.923829 4632 scope.go:117] "RemoveContainer" containerID="fd5b97322f3966f77a91e26539a007fdcb71ceff62ef4db0eeb167b87655caa1" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.942433 4632 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.969564 4632 scope.go:117] "RemoveContainer" containerID="e1cfa6629f759d5883022ca95d30f701e046142894117805d6775261138f329b" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.971477 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:28:35 crc kubenswrapper[4632]: E0313 10:28:35.972786 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07914020-653d-4509-9f60-22726224c7c6" containerName="heat-api" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.972813 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="07914020-653d-4509-9f60-22726224c7c6" containerName="heat-api" Mar 13 10:28:35 crc kubenswrapper[4632]: E0313 10:28:35.972856 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d10747e-ba77-4986-9d4b-636fcbf823ab" containerName="heat-cfnapi" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.972868 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d10747e-ba77-4986-9d4b-636fcbf823ab" containerName="heat-cfnapi" Mar 13 10:28:35 crc kubenswrapper[4632]: E0313 10:28:35.972895 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerName="ceilometer-notification-agent" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.972904 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerName="ceilometer-notification-agent" Mar 13 10:28:35 crc kubenswrapper[4632]: E0313 10:28:35.972928 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerName="proxy-httpd" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.972939 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerName="proxy-httpd" Mar 13 10:28:35 crc kubenswrapper[4632]: E0313 10:28:35.972970 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerName="ceilometer-central-agent" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.972979 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerName="ceilometer-central-agent" Mar 13 10:28:35 crc kubenswrapper[4632]: E0313 10:28:35.973009 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerName="sg-core" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.973019 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerName="sg-core" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.973317 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerName="ceilometer-notification-agent" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.973341 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d10747e-ba77-4986-9d4b-636fcbf823ab" containerName="heat-cfnapi" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.973356 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerName="sg-core" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.978176 4632 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="07914020-653d-4509-9f60-22726224c7c6" containerName="heat-api" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.978255 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerName="proxy-httpd" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.978271 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" containerName="ceilometer-central-agent" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.981012 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.988787 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 10:28:35 crc kubenswrapper[4632]: I0313 10:28:35.991581 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.009764 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.088832 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efe2db4a-a5d6-4aa3-805b-5144c66afca8" path="/var/lib/kubelet/pods/efe2db4a-a5d6-4aa3-805b-5144c66afca8/volumes" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.109855 4632 scope.go:117] "RemoveContainer" containerID="3519acd0a5d01d518140b71f8cb487e29259fada712cb5fe0b79aa9039c08494" Mar 13 10:28:36 crc kubenswrapper[4632]: E0313 10:28:36.111575 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3519acd0a5d01d518140b71f8cb487e29259fada712cb5fe0b79aa9039c08494\": container with ID starting with 3519acd0a5d01d518140b71f8cb487e29259fada712cb5fe0b79aa9039c08494 not found: ID does not exist" containerID="3519acd0a5d01d518140b71f8cb487e29259fada712cb5fe0b79aa9039c08494" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.111727 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3519acd0a5d01d518140b71f8cb487e29259fada712cb5fe0b79aa9039c08494"} err="failed to get container status \"3519acd0a5d01d518140b71f8cb487e29259fada712cb5fe0b79aa9039c08494\": rpc error: code = NotFound desc = could not find container \"3519acd0a5d01d518140b71f8cb487e29259fada712cb5fe0b79aa9039c08494\": container with ID starting with 3519acd0a5d01d518140b71f8cb487e29259fada712cb5fe0b79aa9039c08494 not found: ID does not exist" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.111771 4632 scope.go:117] "RemoveContainer" containerID="23c0b66b1c919daecf5753fdf594be220c0bfa625f5256dff7fbdd421ccaaa54" Mar 13 10:28:36 crc kubenswrapper[4632]: E0313 10:28:36.112408 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23c0b66b1c919daecf5753fdf594be220c0bfa625f5256dff7fbdd421ccaaa54\": container with ID starting with 23c0b66b1c919daecf5753fdf594be220c0bfa625f5256dff7fbdd421ccaaa54 not found: ID does not exist" containerID="23c0b66b1c919daecf5753fdf594be220c0bfa625f5256dff7fbdd421ccaaa54" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.112527 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23c0b66b1c919daecf5753fdf594be220c0bfa625f5256dff7fbdd421ccaaa54"} err="failed to get container status 
\"23c0b66b1c919daecf5753fdf594be220c0bfa625f5256dff7fbdd421ccaaa54\": rpc error: code = NotFound desc = could not find container \"23c0b66b1c919daecf5753fdf594be220c0bfa625f5256dff7fbdd421ccaaa54\": container with ID starting with 23c0b66b1c919daecf5753fdf594be220c0bfa625f5256dff7fbdd421ccaaa54 not found: ID does not exist" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.112651 4632 scope.go:117] "RemoveContainer" containerID="fd5b97322f3966f77a91e26539a007fdcb71ceff62ef4db0eeb167b87655caa1" Mar 13 10:28:36 crc kubenswrapper[4632]: E0313 10:28:36.113231 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd5b97322f3966f77a91e26539a007fdcb71ceff62ef4db0eeb167b87655caa1\": container with ID starting with fd5b97322f3966f77a91e26539a007fdcb71ceff62ef4db0eeb167b87655caa1 not found: ID does not exist" containerID="fd5b97322f3966f77a91e26539a007fdcb71ceff62ef4db0eeb167b87655caa1" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.113318 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd5b97322f3966f77a91e26539a007fdcb71ceff62ef4db0eeb167b87655caa1"} err="failed to get container status \"fd5b97322f3966f77a91e26539a007fdcb71ceff62ef4db0eeb167b87655caa1\": rpc error: code = NotFound desc = could not find container \"fd5b97322f3966f77a91e26539a007fdcb71ceff62ef4db0eeb167b87655caa1\": container with ID starting with fd5b97322f3966f77a91e26539a007fdcb71ceff62ef4db0eeb167b87655caa1 not found: ID does not exist" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.113384 4632 scope.go:117] "RemoveContainer" containerID="e1cfa6629f759d5883022ca95d30f701e046142894117805d6775261138f329b" Mar 13 10:28:36 crc kubenswrapper[4632]: E0313 10:28:36.113741 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1cfa6629f759d5883022ca95d30f701e046142894117805d6775261138f329b\": container with ID starting with e1cfa6629f759d5883022ca95d30f701e046142894117805d6775261138f329b not found: ID does not exist" containerID="e1cfa6629f759d5883022ca95d30f701e046142894117805d6775261138f329b" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.113839 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1cfa6629f759d5883022ca95d30f701e046142894117805d6775261138f329b"} err="failed to get container status \"e1cfa6629f759d5883022ca95d30f701e046142894117805d6775261138f329b\": rpc error: code = NotFound desc = could not find container \"e1cfa6629f759d5883022ca95d30f701e046142894117805d6775261138f329b\": container with ID starting with e1cfa6629f759d5883022ca95d30f701e046142894117805d6775261138f329b not found: ID does not exist" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.122592 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.123185 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92d6a890-da6f-4a62-a73d-ad22f8b97586-log-httpd\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 
10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.123490 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnfcl\" (UniqueName: \"kubernetes.io/projected/92d6a890-da6f-4a62-a73d-ad22f8b97586-kube-api-access-dnfcl\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.123609 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92d6a890-da6f-4a62-a73d-ad22f8b97586-run-httpd\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.123652 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-config-data\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.123689 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.123716 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-scripts\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.224953 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92d6a890-da6f-4a62-a73d-ad22f8b97586-log-httpd\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.225044 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnfcl\" (UniqueName: \"kubernetes.io/projected/92d6a890-da6f-4a62-a73d-ad22f8b97586-kube-api-access-dnfcl\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.225086 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92d6a890-da6f-4a62-a73d-ad22f8b97586-run-httpd\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.225112 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-config-data\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.225133 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.225155 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-scripts\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.225179 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.226806 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92d6a890-da6f-4a62-a73d-ad22f8b97586-run-httpd\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.227252 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92d6a890-da6f-4a62-a73d-ad22f8b97586-log-httpd\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.238380 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-scripts\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.238840 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-config-data\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.248910 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.252434 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.259046 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnfcl\" (UniqueName: \"kubernetes.io/projected/92d6a890-da6f-4a62-a73d-ad22f8b97586-kube-api-access-dnfcl\") pod \"ceilometer-0\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.408580 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.872319 4632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:28:36 crc kubenswrapper[4632]: I0313 10:28:36.873279 4632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:28:37 crc kubenswrapper[4632]: I0313 10:28:37.089712 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:28:37 crc kubenswrapper[4632]: W0313 10:28:37.096782 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92d6a890_da6f_4a62_a73d_ad22f8b97586.slice/crio-61eb61c712456be9b4257c5e2ea6a70dfbfca01a50f8412f9ea8b2cdb5c8b498 WatchSource:0}: Error finding container 61eb61c712456be9b4257c5e2ea6a70dfbfca01a50f8412f9ea8b2cdb5c8b498: Status 404 returned error can't find the container with id 61eb61c712456be9b4257c5e2ea6a70dfbfca01a50f8412f9ea8b2cdb5c8b498 Mar 13 10:28:37 crc kubenswrapper[4632]: I0313 10:28:37.887167 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92d6a890-da6f-4a62-a73d-ad22f8b97586","Type":"ContainerStarted","Data":"13beea25c7ec581a71ff8aed4dcb89b5326c0045c02a48578bb1e384a8c92d16"} Mar 13 10:28:37 crc kubenswrapper[4632]: I0313 10:28:37.887554 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92d6a890-da6f-4a62-a73d-ad22f8b97586","Type":"ContainerStarted","Data":"61eb61c712456be9b4257c5e2ea6a70dfbfca01a50f8412f9ea8b2cdb5c8b498"} Mar 13 10:28:38 crc kubenswrapper[4632]: I0313 10:28:38.903347 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92d6a890-da6f-4a62-a73d-ad22f8b97586","Type":"ContainerStarted","Data":"a78858d52f4edb9f1b215cb0b5d9d5d059b8c3bfd31b64cf5e0deaf6ab27d4b4"} Mar 13 10:28:39 crc kubenswrapper[4632]: I0313 10:28:39.553845 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 13 10:28:39 crc kubenswrapper[4632]: I0313 10:28:39.553956 4632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:28:39 crc kubenswrapper[4632]: I0313 10:28:39.807234 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 13 10:28:39 crc kubenswrapper[4632]: I0313 10:28:39.807661 4632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:28:39 crc kubenswrapper[4632]: I0313 10:28:39.914499 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92d6a890-da6f-4a62-a73d-ad22f8b97586","Type":"ContainerStarted","Data":"c2c74a9428ab3dfaa995d259e75dccffb44018988a076ef192a947b75ff6a7f1"} Mar 13 10:28:39 crc kubenswrapper[4632]: I0313 10:28:39.916855 4632 generic.go:334] "Generic (PLEG): container finished" podID="5de81924-9bfc-484e-8276-0216f0bbf72c" containerID="afb05bb00debb2ea4a81d169362ff2bd38d824053184e249dbe02cc1cb10e945" exitCode=0 Mar 13 10:28:39 crc kubenswrapper[4632]: I0313 10:28:39.916898 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5mlm2" event={"ID":"5de81924-9bfc-484e-8276-0216f0bbf72c","Type":"ContainerDied","Data":"afb05bb00debb2ea4a81d169362ff2bd38d824053184e249dbe02cc1cb10e945"} Mar 13 10:28:40 crc kubenswrapper[4632]: I0313 10:28:40.025834 4632 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 13 10:28:40 crc kubenswrapper[4632]: I0313 10:28:40.790023 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 13 10:28:41 crc kubenswrapper[4632]: I0313 10:28:41.535454 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5mlm2" Mar 13 10:28:41 crc kubenswrapper[4632]: I0313 10:28:41.660488 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-config-data\") pod \"5de81924-9bfc-484e-8276-0216f0bbf72c\" (UID: \"5de81924-9bfc-484e-8276-0216f0bbf72c\") " Mar 13 10:28:41 crc kubenswrapper[4632]: I0313 10:28:41.661026 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-scripts\") pod \"5de81924-9bfc-484e-8276-0216f0bbf72c\" (UID: \"5de81924-9bfc-484e-8276-0216f0bbf72c\") " Mar 13 10:28:41 crc kubenswrapper[4632]: I0313 10:28:41.661103 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg9bb\" (UniqueName: \"kubernetes.io/projected/5de81924-9bfc-484e-8276-0216f0bbf72c-kube-api-access-bg9bb\") pod \"5de81924-9bfc-484e-8276-0216f0bbf72c\" (UID: \"5de81924-9bfc-484e-8276-0216f0bbf72c\") " Mar 13 10:28:41 crc kubenswrapper[4632]: I0313 10:28:41.661225 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-combined-ca-bundle\") pod \"5de81924-9bfc-484e-8276-0216f0bbf72c\" (UID: \"5de81924-9bfc-484e-8276-0216f0bbf72c\") " Mar 13 10:28:41 crc kubenswrapper[4632]: I0313 10:28:41.676718 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-scripts" (OuterVolumeSpecName: "scripts") pod "5de81924-9bfc-484e-8276-0216f0bbf72c" (UID: "5de81924-9bfc-484e-8276-0216f0bbf72c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:28:41 crc kubenswrapper[4632]: I0313 10:28:41.684273 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de81924-9bfc-484e-8276-0216f0bbf72c-kube-api-access-bg9bb" (OuterVolumeSpecName: "kube-api-access-bg9bb") pod "5de81924-9bfc-484e-8276-0216f0bbf72c" (UID: "5de81924-9bfc-484e-8276-0216f0bbf72c"). InnerVolumeSpecName "kube-api-access-bg9bb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:28:41 crc kubenswrapper[4632]: I0313 10:28:41.765495 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:41 crc kubenswrapper[4632]: I0313 10:28:41.765537 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg9bb\" (UniqueName: \"kubernetes.io/projected/5de81924-9bfc-484e-8276-0216f0bbf72c-kube-api-access-bg9bb\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:41 crc kubenswrapper[4632]: I0313 10:28:41.786238 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5de81924-9bfc-484e-8276-0216f0bbf72c" (UID: "5de81924-9bfc-484e-8276-0216f0bbf72c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:28:41 crc kubenswrapper[4632]: I0313 10:28:41.797148 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-config-data" (OuterVolumeSpecName: "config-data") pod "5de81924-9bfc-484e-8276-0216f0bbf72c" (UID: "5de81924-9bfc-484e-8276-0216f0bbf72c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:28:41 crc kubenswrapper[4632]: I0313 10:28:41.867451 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:41 crc kubenswrapper[4632]: I0313 10:28:41.867501 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5de81924-9bfc-484e-8276-0216f0bbf72c-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:28:41 crc kubenswrapper[4632]: I0313 10:28:41.943869 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5mlm2" event={"ID":"5de81924-9bfc-484e-8276-0216f0bbf72c","Type":"ContainerDied","Data":"951453764e4945cebf39ce493b8227004659c89412dc1b5f0146d76b115b3607"} Mar 13 10:28:41 crc kubenswrapper[4632]: I0313 10:28:41.943914 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="951453764e4945cebf39ce493b8227004659c89412dc1b5f0146d76b115b3607" Mar 13 10:28:41 crc kubenswrapper[4632]: I0313 10:28:41.944004 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5mlm2" Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.126043 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 13 10:28:42 crc kubenswrapper[4632]: E0313 10:28:42.126527 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de81924-9bfc-484e-8276-0216f0bbf72c" containerName="nova-cell0-conductor-db-sync" Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.126555 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de81924-9bfc-484e-8276-0216f0bbf72c" containerName="nova-cell0-conductor-db-sync" Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.126784 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="5de81924-9bfc-484e-8276-0216f0bbf72c" containerName="nova-cell0-conductor-db-sync" Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.127583 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.133698 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-h4qk2" Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.134420 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.154788 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.191215 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"89c5451e-248e-46eb-ac20-f52c3e3bcdc4\") " pod="openstack/nova-cell0-conductor-0" Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.191360 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"89c5451e-248e-46eb-ac20-f52c3e3bcdc4\") " pod="openstack/nova-cell0-conductor-0" Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.191409 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjsd4\" (UniqueName: \"kubernetes.io/projected/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-kube-api-access-pjsd4\") pod \"nova-cell0-conductor-0\" (UID: \"89c5451e-248e-46eb-ac20-f52c3e3bcdc4\") " pod="openstack/nova-cell0-conductor-0" Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.295131 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjsd4\" (UniqueName: \"kubernetes.io/projected/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-kube-api-access-pjsd4\") pod \"nova-cell0-conductor-0\" (UID: \"89c5451e-248e-46eb-ac20-f52c3e3bcdc4\") " pod="openstack/nova-cell0-conductor-0" Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.296234 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"89c5451e-248e-46eb-ac20-f52c3e3bcdc4\") " pod="openstack/nova-cell0-conductor-0" Mar 13 10:28:42 crc 
kubenswrapper[4632]: I0313 10:28:42.296375 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"89c5451e-248e-46eb-ac20-f52c3e3bcdc4\") " pod="openstack/nova-cell0-conductor-0" Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.304236 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"89c5451e-248e-46eb-ac20-f52c3e3bcdc4\") " pod="openstack/nova-cell0-conductor-0" Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.304738 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"89c5451e-248e-46eb-ac20-f52c3e3bcdc4\") " pod="openstack/nova-cell0-conductor-0" Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.318563 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjsd4\" (UniqueName: \"kubernetes.io/projected/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-kube-api-access-pjsd4\") pod \"nova-cell0-conductor-0\" (UID: \"89c5451e-248e-46eb-ac20-f52c3e3bcdc4\") " pod="openstack/nova-cell0-conductor-0" Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.463411 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.960782 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92d6a890-da6f-4a62-a73d-ad22f8b97586","Type":"ContainerStarted","Data":"07674d77dbcfb4d04e610536847653ba6a156f4e167fb3e30be00823bd80251e"} Mar 13 10:28:42 crc kubenswrapper[4632]: I0313 10:28:42.961416 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 13 10:28:43 crc kubenswrapper[4632]: I0313 10:28:43.134304 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.5756441839999997 podStartE2EDuration="8.134280978s" podCreationTimestamp="2026-03-13 10:28:35 +0000 UTC" firstStartedPulling="2026-03-13 10:28:37.105162837 +0000 UTC m=+1491.127692970" lastFinishedPulling="2026-03-13 10:28:41.663799631 +0000 UTC m=+1495.686329764" observedRunningTime="2026-03-13 10:28:42.988585934 +0000 UTC m=+1497.011116097" watchObservedRunningTime="2026-03-13 10:28:43.134280978 +0000 UTC m=+1497.156811141" Mar 13 10:28:43 crc kubenswrapper[4632]: I0313 10:28:43.135014 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 13 10:28:43 crc kubenswrapper[4632]: W0313 10:28:43.135852 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89c5451e_248e_46eb_ac20_f52c3e3bcdc4.slice/crio-fea8b62da5fff833a90864e9fa4a28877f40e3642c9c75596310ee934707e980 WatchSource:0}: Error finding container fea8b62da5fff833a90864e9fa4a28877f40e3642c9c75596310ee934707e980: Status 404 returned error can't find the container with id fea8b62da5fff833a90864e9fa4a28877f40e3642c9c75596310ee934707e980 Mar 13 10:28:43 crc kubenswrapper[4632]: I0313 10:28:43.971715 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-0" event={"ID":"89c5451e-248e-46eb-ac20-f52c3e3bcdc4","Type":"ContainerStarted","Data":"b6bf072e344c147c11f620ba388aa0850202403d9ea7d55387c6d06560823aa8"} Mar 13 10:28:43 crc kubenswrapper[4632]: I0313 10:28:43.972349 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"89c5451e-248e-46eb-ac20-f52c3e3bcdc4","Type":"ContainerStarted","Data":"fea8b62da5fff833a90864e9fa4a28877f40e3642c9c75596310ee934707e980"} Mar 13 10:28:43 crc kubenswrapper[4632]: I0313 10:28:43.972369 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 13 10:28:45 crc kubenswrapper[4632]: I0313 10:28:45.857960 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-689764498d-rg7vt" podUID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 13 10:28:46 crc kubenswrapper[4632]: I0313 10:28:46.912561 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=4.912536629 podStartE2EDuration="4.912536629s" podCreationTimestamp="2026-03-13 10:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:28:43.999388511 +0000 UTC m=+1498.021918654" watchObservedRunningTime="2026-03-13 10:28:46.912536629 +0000 UTC m=+1500.935066762" Mar 13 10:28:46 crc kubenswrapper[4632]: I0313 10:28:46.914850 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n4z22"] Mar 13 10:28:46 crc kubenswrapper[4632]: I0313 10:28:46.916829 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:28:46 crc kubenswrapper[4632]: I0313 10:28:46.928646 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n4z22"] Mar 13 10:28:46 crc kubenswrapper[4632]: I0313 10:28:46.997612 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpwh9\" (UniqueName: \"kubernetes.io/projected/d0cabd29-ef3e-4808-8c92-3b032483789e-kube-api-access-xpwh9\") pod \"redhat-operators-n4z22\" (UID: \"d0cabd29-ef3e-4808-8c92-3b032483789e\") " pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:28:46 crc kubenswrapper[4632]: I0313 10:28:46.997784 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0cabd29-ef3e-4808-8c92-3b032483789e-utilities\") pod \"redhat-operators-n4z22\" (UID: \"d0cabd29-ef3e-4808-8c92-3b032483789e\") " pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:28:46 crc kubenswrapper[4632]: I0313 10:28:46.997853 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0cabd29-ef3e-4808-8c92-3b032483789e-catalog-content\") pod \"redhat-operators-n4z22\" (UID: \"d0cabd29-ef3e-4808-8c92-3b032483789e\") " pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:28:47 crc kubenswrapper[4632]: I0313 10:28:47.100080 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpwh9\" (UniqueName: \"kubernetes.io/projected/d0cabd29-ef3e-4808-8c92-3b032483789e-kube-api-access-xpwh9\") pod \"redhat-operators-n4z22\" (UID: \"d0cabd29-ef3e-4808-8c92-3b032483789e\") " pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:28:47 crc kubenswrapper[4632]: I0313 10:28:47.100167 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0cabd29-ef3e-4808-8c92-3b032483789e-utilities\") pod \"redhat-operators-n4z22\" (UID: \"d0cabd29-ef3e-4808-8c92-3b032483789e\") " pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:28:47 crc kubenswrapper[4632]: I0313 10:28:47.100256 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0cabd29-ef3e-4808-8c92-3b032483789e-catalog-content\") pod \"redhat-operators-n4z22\" (UID: \"d0cabd29-ef3e-4808-8c92-3b032483789e\") " pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:28:47 crc kubenswrapper[4632]: I0313 10:28:47.101642 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0cabd29-ef3e-4808-8c92-3b032483789e-catalog-content\") pod \"redhat-operators-n4z22\" (UID: \"d0cabd29-ef3e-4808-8c92-3b032483789e\") " pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:28:47 crc kubenswrapper[4632]: I0313 10:28:47.101682 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0cabd29-ef3e-4808-8c92-3b032483789e-utilities\") pod \"redhat-operators-n4z22\" (UID: \"d0cabd29-ef3e-4808-8c92-3b032483789e\") " pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:28:47 crc kubenswrapper[4632]: I0313 10:28:47.102806 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 
13 10:28:47 crc kubenswrapper[4632]: I0313 10:28:47.103148 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerName="ceilometer-central-agent" containerID="cri-o://13beea25c7ec581a71ff8aed4dcb89b5326c0045c02a48578bb1e384a8c92d16" gracePeriod=30 Mar 13 10:28:47 crc kubenswrapper[4632]: I0313 10:28:47.103292 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerName="proxy-httpd" containerID="cri-o://07674d77dbcfb4d04e610536847653ba6a156f4e167fb3e30be00823bd80251e" gracePeriod=30 Mar 13 10:28:47 crc kubenswrapper[4632]: I0313 10:28:47.103342 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerName="sg-core" containerID="cri-o://c2c74a9428ab3dfaa995d259e75dccffb44018988a076ef192a947b75ff6a7f1" gracePeriod=30 Mar 13 10:28:47 crc kubenswrapper[4632]: I0313 10:28:47.103394 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerName="ceilometer-notification-agent" containerID="cri-o://a78858d52f4edb9f1b215cb0b5d9d5d059b8c3bfd31b64cf5e0deaf6ab27d4b4" gracePeriod=30 Mar 13 10:28:47 crc kubenswrapper[4632]: I0313 10:28:47.134921 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpwh9\" (UniqueName: \"kubernetes.io/projected/d0cabd29-ef3e-4808-8c92-3b032483789e-kube-api-access-xpwh9\") pod \"redhat-operators-n4z22\" (UID: \"d0cabd29-ef3e-4808-8c92-3b032483789e\") " pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:28:47 crc kubenswrapper[4632]: I0313 10:28:47.247407 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:28:47 crc kubenswrapper[4632]: W0313 10:28:47.824446 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0cabd29_ef3e_4808_8c92_3b032483789e.slice/crio-dd0fe42db5b99209dcd168810b0996ceb728a9055a395258ce5d2c5e8afe18b9 WatchSource:0}: Error finding container dd0fe42db5b99209dcd168810b0996ceb728a9055a395258ce5d2c5e8afe18b9: Status 404 returned error can't find the container with id dd0fe42db5b99209dcd168810b0996ceb728a9055a395258ce5d2c5e8afe18b9 Mar 13 10:28:47 crc kubenswrapper[4632]: I0313 10:28:47.825704 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n4z22"] Mar 13 10:28:48 crc kubenswrapper[4632]: I0313 10:28:48.092665 4632 generic.go:334] "Generic (PLEG): container finished" podID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerID="07674d77dbcfb4d04e610536847653ba6a156f4e167fb3e30be00823bd80251e" exitCode=0 Mar 13 10:28:48 crc kubenswrapper[4632]: I0313 10:28:48.095656 4632 generic.go:334] "Generic (PLEG): container finished" podID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerID="c2c74a9428ab3dfaa995d259e75dccffb44018988a076ef192a947b75ff6a7f1" exitCode=2 Mar 13 10:28:48 crc kubenswrapper[4632]: I0313 10:28:48.095787 4632 generic.go:334] "Generic (PLEG): container finished" podID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerID="a78858d52f4edb9f1b215cb0b5d9d5d059b8c3bfd31b64cf5e0deaf6ab27d4b4" exitCode=0 Mar 13 10:28:48 crc kubenswrapper[4632]: I0313 10:28:48.093671 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92d6a890-da6f-4a62-a73d-ad22f8b97586","Type":"ContainerDied","Data":"07674d77dbcfb4d04e610536847653ba6a156f4e167fb3e30be00823bd80251e"} Mar 13 10:28:48 crc kubenswrapper[4632]: I0313 10:28:48.096314 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92d6a890-da6f-4a62-a73d-ad22f8b97586","Type":"ContainerDied","Data":"c2c74a9428ab3dfaa995d259e75dccffb44018988a076ef192a947b75ff6a7f1"} Mar 13 10:28:48 crc kubenswrapper[4632]: I0313 10:28:48.096421 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92d6a890-da6f-4a62-a73d-ad22f8b97586","Type":"ContainerDied","Data":"a78858d52f4edb9f1b215cb0b5d9d5d059b8c3bfd31b64cf5e0deaf6ab27d4b4"} Mar 13 10:28:48 crc kubenswrapper[4632]: I0313 10:28:48.112339 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4z22" event={"ID":"d0cabd29-ef3e-4808-8c92-3b032483789e","Type":"ContainerStarted","Data":"dd0fe42db5b99209dcd168810b0996ceb728a9055a395258ce5d2c5e8afe18b9"} Mar 13 10:28:49 crc kubenswrapper[4632]: I0313 10:28:49.127089 4632 generic.go:334] "Generic (PLEG): container finished" podID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerID="201e16e880cd2d90fce52e678a22a53fb8633269541a12fab257e648ab87ec2d" exitCode=0 Mar 13 10:28:49 crc kubenswrapper[4632]: I0313 10:28:49.127225 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4z22" event={"ID":"d0cabd29-ef3e-4808-8c92-3b032483789e","Type":"ContainerDied","Data":"201e16e880cd2d90fce52e678a22a53fb8633269541a12fab257e648ab87ec2d"} Mar 13 10:28:50 crc kubenswrapper[4632]: I0313 10:28:50.144621 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4z22" 
event={"ID":"d0cabd29-ef3e-4808-8c92-3b032483789e","Type":"ContainerStarted","Data":"71596437e6a2d325baac09e68c289bacf1de16594d0329b6a19a5f5d92099872"} Mar 13 10:28:52 crc kubenswrapper[4632]: I0313 10:28:52.506910 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.303146 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-gwj5n"] Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.304431 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gwj5n" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.313572 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.314272 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.327402 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gwj5n"] Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.391156 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-scripts\") pod \"nova-cell0-cell-mapping-gwj5n\" (UID: \"bcce9343-52a3-4e6d-98fd-8e66390020ac\") " pod="openstack/nova-cell0-cell-mapping-gwj5n" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.391280 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxd74\" (UniqueName: \"kubernetes.io/projected/bcce9343-52a3-4e6d-98fd-8e66390020ac-kube-api-access-sxd74\") pod \"nova-cell0-cell-mapping-gwj5n\" (UID: \"bcce9343-52a3-4e6d-98fd-8e66390020ac\") " pod="openstack/nova-cell0-cell-mapping-gwj5n" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.391325 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gwj5n\" (UID: \"bcce9343-52a3-4e6d-98fd-8e66390020ac\") " pod="openstack/nova-cell0-cell-mapping-gwj5n" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.391358 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-config-data\") pod \"nova-cell0-cell-mapping-gwj5n\" (UID: \"bcce9343-52a3-4e6d-98fd-8e66390020ac\") " pod="openstack/nova-cell0-cell-mapping-gwj5n" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.495767 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxd74\" (UniqueName: \"kubernetes.io/projected/bcce9343-52a3-4e6d-98fd-8e66390020ac-kube-api-access-sxd74\") pod \"nova-cell0-cell-mapping-gwj5n\" (UID: \"bcce9343-52a3-4e6d-98fd-8e66390020ac\") " pod="openstack/nova-cell0-cell-mapping-gwj5n" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.495849 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gwj5n\" (UID: 
\"bcce9343-52a3-4e6d-98fd-8e66390020ac\") " pod="openstack/nova-cell0-cell-mapping-gwj5n" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.495908 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-config-data\") pod \"nova-cell0-cell-mapping-gwj5n\" (UID: \"bcce9343-52a3-4e6d-98fd-8e66390020ac\") " pod="openstack/nova-cell0-cell-mapping-gwj5n" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.496056 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-scripts\") pod \"nova-cell0-cell-mapping-gwj5n\" (UID: \"bcce9343-52a3-4e6d-98fd-8e66390020ac\") " pod="openstack/nova-cell0-cell-mapping-gwj5n" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.510894 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.511210 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gwj5n\" (UID: \"bcce9343-52a3-4e6d-98fd-8e66390020ac\") " pod="openstack/nova-cell0-cell-mapping-gwj5n" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.514829 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.531454 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.541624 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-scripts\") pod \"nova-cell0-cell-mapping-gwj5n\" (UID: \"bcce9343-52a3-4e6d-98fd-8e66390020ac\") " pod="openstack/nova-cell0-cell-mapping-gwj5n" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.562979 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.564990 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-config-data\") pod \"nova-cell0-cell-mapping-gwj5n\" (UID: \"bcce9343-52a3-4e6d-98fd-8e66390020ac\") " pod="openstack/nova-cell0-cell-mapping-gwj5n" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.599167 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9h2v\" (UniqueName: \"kubernetes.io/projected/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-kube-api-access-g9h2v\") pod \"nova-api-0\" (UID: \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\") " pod="openstack/nova-api-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.599234 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-logs\") pod \"nova-api-0\" (UID: \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\") " pod="openstack/nova-api-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.599460 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\") " pod="openstack/nova-api-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.599491 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-config-data\") pod \"nova-api-0\" (UID: \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\") " pod="openstack/nova-api-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.604082 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxd74\" (UniqueName: \"kubernetes.io/projected/bcce9343-52a3-4e6d-98fd-8e66390020ac-kube-api-access-sxd74\") pod \"nova-cell0-cell-mapping-gwj5n\" (UID: \"bcce9343-52a3-4e6d-98fd-8e66390020ac\") " pod="openstack/nova-cell0-cell-mapping-gwj5n" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.634321 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gwj5n" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.666341 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.676698 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.705681 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9h2v\" (UniqueName: \"kubernetes.io/projected/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-kube-api-access-g9h2v\") pod \"nova-api-0\" (UID: \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\") " pod="openstack/nova-api-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.705748 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-logs\") pod \"nova-api-0\" (UID: \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\") " pod="openstack/nova-api-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.705825 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b0823-7d97-48cf-9b38-188a3f4ecde3-config-data\") pod \"nova-metadata-0\" (UID: \"229b0823-7d97-48cf-9b38-188a3f4ecde3\") " pod="openstack/nova-metadata-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.705858 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/229b0823-7d97-48cf-9b38-188a3f4ecde3-logs\") pod \"nova-metadata-0\" (UID: \"229b0823-7d97-48cf-9b38-188a3f4ecde3\") " pod="openstack/nova-metadata-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.705896 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7cq6\" (UniqueName: \"kubernetes.io/projected/229b0823-7d97-48cf-9b38-188a3f4ecde3-kube-api-access-c7cq6\") pod \"nova-metadata-0\" (UID: \"229b0823-7d97-48cf-9b38-188a3f4ecde3\") " pod="openstack/nova-metadata-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.705989 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229b0823-7d97-48cf-9b38-188a3f4ecde3-combined-ca-bundle\") 
pod \"nova-metadata-0\" (UID: \"229b0823-7d97-48cf-9b38-188a3f4ecde3\") " pod="openstack/nova-metadata-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.706074 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\") " pod="openstack/nova-api-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.706104 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-config-data\") pod \"nova-api-0\" (UID: \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\") " pod="openstack/nova-api-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.712251 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-logs\") pod \"nova-api-0\" (UID: \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\") " pod="openstack/nova-api-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.722095 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.742062 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-config-data\") pod \"nova-api-0\" (UID: \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\") " pod="openstack/nova-api-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.743490 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\") " pod="openstack/nova-api-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.789391 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.810165 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229b0823-7d97-48cf-9b38-188a3f4ecde3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"229b0823-7d97-48cf-9b38-188a3f4ecde3\") " pod="openstack/nova-metadata-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.810653 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b0823-7d97-48cf-9b38-188a3f4ecde3-config-data\") pod \"nova-metadata-0\" (UID: \"229b0823-7d97-48cf-9b38-188a3f4ecde3\") " pod="openstack/nova-metadata-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.811536 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/229b0823-7d97-48cf-9b38-188a3f4ecde3-logs\") pod \"nova-metadata-0\" (UID: \"229b0823-7d97-48cf-9b38-188a3f4ecde3\") " pod="openstack/nova-metadata-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.811703 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7cq6\" (UniqueName: \"kubernetes.io/projected/229b0823-7d97-48cf-9b38-188a3f4ecde3-kube-api-access-c7cq6\") pod \"nova-metadata-0\" (UID: \"229b0823-7d97-48cf-9b38-188a3f4ecde3\") " 
pod="openstack/nova-metadata-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.813473 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/229b0823-7d97-48cf-9b38-188a3f4ecde3-logs\") pod \"nova-metadata-0\" (UID: \"229b0823-7d97-48cf-9b38-188a3f4ecde3\") " pod="openstack/nova-metadata-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.830160 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b0823-7d97-48cf-9b38-188a3f4ecde3-config-data\") pod \"nova-metadata-0\" (UID: \"229b0823-7d97-48cf-9b38-188a3f4ecde3\") " pod="openstack/nova-metadata-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.854314 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229b0823-7d97-48cf-9b38-188a3f4ecde3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"229b0823-7d97-48cf-9b38-188a3f4ecde3\") " pod="openstack/nova-metadata-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.893700 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9h2v\" (UniqueName: \"kubernetes.io/projected/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-kube-api-access-g9h2v\") pod \"nova-api-0\" (UID: \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\") " pod="openstack/nova-api-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.904881 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7cq6\" (UniqueName: \"kubernetes.io/projected/229b0823-7d97-48cf-9b38-188a3f4ecde3-kube-api-access-c7cq6\") pod \"nova-metadata-0\" (UID: \"229b0823-7d97-48cf-9b38-188a3f4ecde3\") " pod="openstack/nova-metadata-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.914740 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.942105 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.943360 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.958651 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.958876 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 13 10:28:53 crc kubenswrapper[4632]: I0313 10:28:53.977064 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.104032 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3548645f-4c72-4c75-b1bd-95116d47f6e2-config-data\") pod \"nova-scheduler-0\" (UID: \"3548645f-4c72-4c75-b1bd-95116d47f6e2\") " pod="openstack/nova-scheduler-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.104469 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tntfg\" (UniqueName: \"kubernetes.io/projected/3548645f-4c72-4c75-b1bd-95116d47f6e2-kube-api-access-tntfg\") pod \"nova-scheduler-0\" (UID: \"3548645f-4c72-4c75-b1bd-95116d47f6e2\") " pod="openstack/nova-scheduler-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.104570 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3548645f-4c72-4c75-b1bd-95116d47f6e2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3548645f-4c72-4c75-b1bd-95116d47f6e2\") " pod="openstack/nova-scheduler-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.212843 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tntfg\" (UniqueName: \"kubernetes.io/projected/3548645f-4c72-4c75-b1bd-95116d47f6e2-kube-api-access-tntfg\") pod \"nova-scheduler-0\" (UID: \"3548645f-4c72-4c75-b1bd-95116d47f6e2\") " pod="openstack/nova-scheduler-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.213279 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3548645f-4c72-4c75-b1bd-95116d47f6e2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3548645f-4c72-4c75-b1bd-95116d47f6e2\") " pod="openstack/nova-scheduler-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.213464 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3548645f-4c72-4c75-b1bd-95116d47f6e2-config-data\") pod \"nova-scheduler-0\" (UID: \"3548645f-4c72-4c75-b1bd-95116d47f6e2\") " pod="openstack/nova-scheduler-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.259053 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3548645f-4c72-4c75-b1bd-95116d47f6e2-config-data\") pod \"nova-scheduler-0\" (UID: \"3548645f-4c72-4c75-b1bd-95116d47f6e2\") " pod="openstack/nova-scheduler-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.285477 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tntfg\" (UniqueName: \"kubernetes.io/projected/3548645f-4c72-4c75-b1bd-95116d47f6e2-kube-api-access-tntfg\") pod \"nova-scheduler-0\" (UID: \"3548645f-4c72-4c75-b1bd-95116d47f6e2\") " pod="openstack/nova-scheduler-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.285992 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3548645f-4c72-4c75-b1bd-95116d47f6e2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3548645f-4c72-4c75-b1bd-95116d47f6e2\") " pod="openstack/nova-scheduler-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.350171 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.420323 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.422859 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75788dd97c-r8qnr"] Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.424079 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.424117 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75788dd97c-r8qnr"] Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.424196 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.426471 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.434020 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.529369 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-dns-svc\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.531180 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-ovsdbserver-nb\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.531318 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-config\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.531451 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-dns-swift-storage-0\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.531603 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac666e99-860b-4f76-8b34-0ac5d3f67e9e\") " 
pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.532330 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlgdp\" (UniqueName: \"kubernetes.io/projected/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-kube-api-access-hlgdp\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac666e99-860b-4f76-8b34-0ac5d3f67e9e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.532455 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac666e99-860b-4f76-8b34-0ac5d3f67e9e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.532799 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-ovsdbserver-sb\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.532986 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc7jm\" (UniqueName: \"kubernetes.io/projected/e0f17959-fde8-4cf1-b255-db5fc3325b70-kube-api-access-pc7jm\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.640095 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-ovsdbserver-sb\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.640162 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc7jm\" (UniqueName: \"kubernetes.io/projected/e0f17959-fde8-4cf1-b255-db5fc3325b70-kube-api-access-pc7jm\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.640210 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-dns-svc\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.640259 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-ovsdbserver-nb\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.640299 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-config\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " 
pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.640353 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-dns-swift-storage-0\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.640410 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac666e99-860b-4f76-8b34-0ac5d3f67e9e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.640440 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlgdp\" (UniqueName: \"kubernetes.io/projected/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-kube-api-access-hlgdp\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac666e99-860b-4f76-8b34-0ac5d3f67e9e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.640500 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac666e99-860b-4f76-8b34-0ac5d3f67e9e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.652433 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-ovsdbserver-sb\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.668589 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-dns-svc\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.669728 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-ovsdbserver-nb\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.670658 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-config\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.689682 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-dns-swift-storage-0\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.720451 4632 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac666e99-860b-4f76-8b34-0ac5d3f67e9e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.747671 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac666e99-860b-4f76-8b34-0ac5d3f67e9e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.756783 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlgdp\" (UniqueName: \"kubernetes.io/projected/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-kube-api-access-hlgdp\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac666e99-860b-4f76-8b34-0ac5d3f67e9e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.773636 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc7jm\" (UniqueName: \"kubernetes.io/projected/e0f17959-fde8-4cf1-b255-db5fc3325b70-kube-api-access-pc7jm\") pod \"dnsmasq-dns-75788dd97c-r8qnr\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.907404 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:54 crc kubenswrapper[4632]: I0313 10:28:54.915993 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:28:55 crc kubenswrapper[4632]: I0313 10:28:55.199916 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gwj5n"] Mar 13 10:28:55 crc kubenswrapper[4632]: I0313 10:28:55.235047 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 10:28:55 crc kubenswrapper[4632]: I0313 10:28:55.252425 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gwj5n" event={"ID":"bcce9343-52a3-4e6d-98fd-8e66390020ac","Type":"ContainerStarted","Data":"f930aa1f0069e1fe78556089c981d58cc3cf6a82579a76e645212ffad42b673e"} Mar 13 10:28:55 crc kubenswrapper[4632]: I0313 10:28:55.274097 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 10:28:55 crc kubenswrapper[4632]: I0313 10:28:55.583272 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 10:28:56 crc kubenswrapper[4632]: I0313 10:28:56.189042 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75788dd97c-r8qnr"] Mar 13 10:28:56 crc kubenswrapper[4632]: I0313 10:28:56.239740 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 10:28:56 crc kubenswrapper[4632]: I0313 10:28:56.309773 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" event={"ID":"e0f17959-fde8-4cf1-b255-db5fc3325b70","Type":"ContainerStarted","Data":"a2b2fdcf6ef7efc2eca17a814eb5b4394c29b09fe6419666d04ee4759d7660a8"} Mar 13 10:28:56 crc kubenswrapper[4632]: I0313 10:28:56.348305 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"ac666e99-860b-4f76-8b34-0ac5d3f67e9e","Type":"ContainerStarted","Data":"21ad142ba62995b1f6903534cfcf5ce85e20fd10841d57b3365ce522bedc21e5"} Mar 13 10:28:56 crc kubenswrapper[4632]: I0313 10:28:56.374297 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3548645f-4c72-4c75-b1bd-95116d47f6e2","Type":"ContainerStarted","Data":"2e867316abee7ad3f39e6bce10fac89a2148bd2f0cb280bc0cfe2baf59687b11"} Mar 13 10:28:56 crc kubenswrapper[4632]: I0313 10:28:56.390261 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"229b0823-7d97-48cf-9b38-188a3f4ecde3","Type":"ContainerStarted","Data":"e09e0609bf87522d60a55be578e8526df6a87fdb2167dcc1dc7feca3fdedd742"} Mar 13 10:28:56 crc kubenswrapper[4632]: I0313 10:28:56.404813 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998","Type":"ContainerStarted","Data":"34daac34beb9c4f24aabab024257734fad48bd622c9a19af69564a5c1af316f2"} Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.388443 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9n7gj"] Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.390036 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9n7gj" Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.394208 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.394468 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.405882 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9n7gj"] Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.407966 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cpkl\" (UniqueName: \"kubernetes.io/projected/cf19672e-3284-49bc-a460-f2e629881d9b-kube-api-access-2cpkl\") pod \"nova-cell1-conductor-db-sync-9n7gj\" (UID: \"cf19672e-3284-49bc-a460-f2e629881d9b\") " pod="openstack/nova-cell1-conductor-db-sync-9n7gj" Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.408114 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9n7gj\" (UID: \"cf19672e-3284-49bc-a460-f2e629881d9b\") " pod="openstack/nova-cell1-conductor-db-sync-9n7gj" Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.408138 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-config-data\") pod \"nova-cell1-conductor-db-sync-9n7gj\" (UID: \"cf19672e-3284-49bc-a460-f2e629881d9b\") " pod="openstack/nova-cell1-conductor-db-sync-9n7gj" Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.408169 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-scripts\") pod \"nova-cell1-conductor-db-sync-9n7gj\" (UID: \"cf19672e-3284-49bc-a460-f2e629881d9b\") " 
pod="openstack/nova-cell1-conductor-db-sync-9n7gj" Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.483643 4632 generic.go:334] "Generic (PLEG): container finished" podID="e0f17959-fde8-4cf1-b255-db5fc3325b70" containerID="c6848744dc1fd449bb0df7b7ca2c04941331806f97abf20c11372e120fb30d31" exitCode=0 Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.483722 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" event={"ID":"e0f17959-fde8-4cf1-b255-db5fc3325b70","Type":"ContainerDied","Data":"c6848744dc1fd449bb0df7b7ca2c04941331806f97abf20c11372e120fb30d31"} Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.515888 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cpkl\" (UniqueName: \"kubernetes.io/projected/cf19672e-3284-49bc-a460-f2e629881d9b-kube-api-access-2cpkl\") pod \"nova-cell1-conductor-db-sync-9n7gj\" (UID: \"cf19672e-3284-49bc-a460-f2e629881d9b\") " pod="openstack/nova-cell1-conductor-db-sync-9n7gj" Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.516290 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9n7gj\" (UID: \"cf19672e-3284-49bc-a460-f2e629881d9b\") " pod="openstack/nova-cell1-conductor-db-sync-9n7gj" Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.516325 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-config-data\") pod \"nova-cell1-conductor-db-sync-9n7gj\" (UID: \"cf19672e-3284-49bc-a460-f2e629881d9b\") " pod="openstack/nova-cell1-conductor-db-sync-9n7gj" Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.516384 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-scripts\") pod \"nova-cell1-conductor-db-sync-9n7gj\" (UID: \"cf19672e-3284-49bc-a460-f2e629881d9b\") " pod="openstack/nova-cell1-conductor-db-sync-9n7gj" Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.518609 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gwj5n" event={"ID":"bcce9343-52a3-4e6d-98fd-8e66390020ac","Type":"ContainerStarted","Data":"379356ecac878a5f4776d015be267e8c7eec62c977ce924abd53ff44455ce8e4"} Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.552421 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-gwj5n" podStartSLOduration=4.552392671 podStartE2EDuration="4.552392671s" podCreationTimestamp="2026-03-13 10:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:28:57.536418082 +0000 UTC m=+1511.558948225" watchObservedRunningTime="2026-03-13 10:28:57.552392671 +0000 UTC m=+1511.574922804" Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.566202 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9n7gj\" (UID: \"cf19672e-3284-49bc-a460-f2e629881d9b\") " pod="openstack/nova-cell1-conductor-db-sync-9n7gj" Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 
10:28:57.569254 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cpkl\" (UniqueName: \"kubernetes.io/projected/cf19672e-3284-49bc-a460-f2e629881d9b-kube-api-access-2cpkl\") pod \"nova-cell1-conductor-db-sync-9n7gj\" (UID: \"cf19672e-3284-49bc-a460-f2e629881d9b\") " pod="openstack/nova-cell1-conductor-db-sync-9n7gj" Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.580687 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-scripts\") pod \"nova-cell1-conductor-db-sync-9n7gj\" (UID: \"cf19672e-3284-49bc-a460-f2e629881d9b\") " pod="openstack/nova-cell1-conductor-db-sync-9n7gj" Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.581064 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-config-data\") pod \"nova-cell1-conductor-db-sync-9n7gj\" (UID: \"cf19672e-3284-49bc-a460-f2e629881d9b\") " pod="openstack/nova-cell1-conductor-db-sync-9n7gj" Mar 13 10:28:57 crc kubenswrapper[4632]: I0313 10:28:57.767770 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9n7gj" Mar 13 10:28:58 crc kubenswrapper[4632]: I0313 10:28:58.565966 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" event={"ID":"e0f17959-fde8-4cf1-b255-db5fc3325b70","Type":"ContainerStarted","Data":"f957b291649cd64b5f0c12f7a4a8a32abd88e0067f00c5ae80a3e106aedde5a8"} Mar 13 10:28:58 crc kubenswrapper[4632]: I0313 10:28:58.566537 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:28:58 crc kubenswrapper[4632]: I0313 10:28:58.594223 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" podStartSLOduration=5.594198242 podStartE2EDuration="5.594198242s" podCreationTimestamp="2026-03-13 10:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:28:58.586365671 +0000 UTC m=+1512.608895834" watchObservedRunningTime="2026-03-13 10:28:58.594198242 +0000 UTC m=+1512.616728385" Mar 13 10:28:58 crc kubenswrapper[4632]: I0313 10:28:58.733256 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9n7gj"] Mar 13 10:28:59 crc kubenswrapper[4632]: I0313 10:28:59.534169 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 10:28:59 crc kubenswrapper[4632]: I0313 10:28:59.548530 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 10:28:59 crc kubenswrapper[4632]: I0313 10:28:59.589442 4632 generic.go:334] "Generic (PLEG): container finished" podID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerID="71596437e6a2d325baac09e68c289bacf1de16594d0329b6a19a5f5d92099872" exitCode=0 Mar 13 10:28:59 crc kubenswrapper[4632]: I0313 10:28:59.589524 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4z22" event={"ID":"d0cabd29-ef3e-4808-8c92-3b032483789e","Type":"ContainerDied","Data":"71596437e6a2d325baac09e68c289bacf1de16594d0329b6a19a5f5d92099872"} Mar 13 10:28:59 crc kubenswrapper[4632]: I0313 10:28:59.597426 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-9n7gj" event={"ID":"cf19672e-3284-49bc-a460-f2e629881d9b","Type":"ContainerStarted","Data":"a8f98d9cfd7da7677c0fe463edd081d6aa2858ecb1027917673862b2700f1545"} Mar 13 10:28:59 crc kubenswrapper[4632]: I0313 10:28:59.597477 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9n7gj" event={"ID":"cf19672e-3284-49bc-a460-f2e629881d9b","Type":"ContainerStarted","Data":"2552baf18c3652ea7e85b1cf98a826d4538b2dbd01aa4210322514f06f9b9c99"} Mar 13 10:28:59 crc kubenswrapper[4632]: I0313 10:28:59.639748 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9n7gj" podStartSLOduration=2.6397271719999997 podStartE2EDuration="2.639727172s" podCreationTimestamp="2026-03-13 10:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:28:59.636856482 +0000 UTC m=+1513.659386615" watchObservedRunningTime="2026-03-13 10:28:59.639727172 +0000 UTC m=+1513.662257315" Mar 13 10:29:00 crc kubenswrapper[4632]: I0313 10:29:00.866150 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-689764498d-rg7vt" podUID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:29:00 crc kubenswrapper[4632]: I0313 10:29:00.866596 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:29:00 crc kubenswrapper[4632]: I0313 10:29:00.868009 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"26a7aae686bb479cfcbc8b01e8e10e3fd467e5236d6ffb2ed638373687267401"} pod="openstack/horizon-689764498d-rg7vt" containerMessage="Container horizon failed startup probe, will be restarted" Mar 13 10:29:00 crc kubenswrapper[4632]: I0313 10:29:00.868052 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-689764498d-rg7vt" podUID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerName="horizon" containerID="cri-o://26a7aae686bb479cfcbc8b01e8e10e3fd467e5236d6ffb2ed638373687267401" gracePeriod=30 Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.671019 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"229b0823-7d97-48cf-9b38-188a3f4ecde3","Type":"ContainerStarted","Data":"c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5"} Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.671809 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"229b0823-7d97-48cf-9b38-188a3f4ecde3","Type":"ContainerStarted","Data":"9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2"} Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.671338 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="229b0823-7d97-48cf-9b38-188a3f4ecde3" containerName="nova-metadata-metadata" containerID="cri-o://c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5" gracePeriod=30 Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.671256 4632 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="229b0823-7d97-48cf-9b38-188a3f4ecde3" containerName="nova-metadata-log" containerID="cri-o://9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2" gracePeriod=30 Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.678171 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998","Type":"ContainerStarted","Data":"29733198b90e2c4e70c2780c9fc89c7f6fe69e2a7c8898353f2770088741eed7"} Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.678219 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998","Type":"ContainerStarted","Data":"f1168e3921eed21e9e9f3fc46e51a896484996f89685c7e47f03c083dd2451a8"} Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.682161 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ac666e99-860b-4f76-8b34-0ac5d3f67e9e","Type":"ContainerStarted","Data":"a0afd63ece3b9f8f07f13d5cb18fecbc9a68899e19cf0a855e21117da83b326a"} Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.682281 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ac666e99-860b-4f76-8b34-0ac5d3f67e9e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a0afd63ece3b9f8f07f13d5cb18fecbc9a68899e19cf0a855e21117da83b326a" gracePeriod=30 Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.688327 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3548645f-4c72-4c75-b1bd-95116d47f6e2","Type":"ContainerStarted","Data":"cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713"} Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.691296 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4z22" event={"ID":"d0cabd29-ef3e-4808-8c92-3b032483789e","Type":"ContainerStarted","Data":"d90ad363a11a672d65b86841626f5e66c7cb30326c75375f126c473ca51ca722"} Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.722900 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.445747854 podStartE2EDuration="10.72284758s" podCreationTimestamp="2026-03-13 10:28:53 +0000 UTC" firstStartedPulling="2026-03-13 10:28:55.258236408 +0000 UTC m=+1509.280766541" lastFinishedPulling="2026-03-13 10:29:02.535336134 +0000 UTC m=+1516.557866267" observedRunningTime="2026-03-13 10:29:03.70110244 +0000 UTC m=+1517.723632593" watchObservedRunningTime="2026-03-13 10:29:03.72284758 +0000 UTC m=+1517.745377713" Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.738340 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n4z22" podStartSLOduration=4.326789488 podStartE2EDuration="17.738316436s" podCreationTimestamp="2026-03-13 10:28:46 +0000 UTC" firstStartedPulling="2026-03-13 10:28:49.129264399 +0000 UTC m=+1503.151794532" lastFinishedPulling="2026-03-13 10:29:02.540791347 +0000 UTC m=+1516.563321480" observedRunningTime="2026-03-13 10:29:03.726510578 +0000 UTC m=+1517.749040721" watchObservedRunningTime="2026-03-13 10:29:03.738316436 +0000 UTC m=+1517.760846569" Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.760246 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
podStartSLOduration=3.494929709 podStartE2EDuration="10.760225008s" podCreationTimestamp="2026-03-13 10:28:53 +0000 UTC" firstStartedPulling="2026-03-13 10:28:55.275494578 +0000 UTC m=+1509.298024721" lastFinishedPulling="2026-03-13 10:29:02.540789897 +0000 UTC m=+1516.563320020" observedRunningTime="2026-03-13 10:29:03.752923681 +0000 UTC m=+1517.775453814" watchObservedRunningTime="2026-03-13 10:29:03.760225008 +0000 UTC m=+1517.782755141" Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.778810 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.888070252 podStartE2EDuration="10.778791681s" podCreationTimestamp="2026-03-13 10:28:53 +0000 UTC" firstStartedPulling="2026-03-13 10:28:55.649347151 +0000 UTC m=+1509.671877284" lastFinishedPulling="2026-03-13 10:29:02.54006858 +0000 UTC m=+1516.562598713" observedRunningTime="2026-03-13 10:29:03.778354349 +0000 UTC m=+1517.800884482" watchObservedRunningTime="2026-03-13 10:29:03.778791681 +0000 UTC m=+1517.801321814" Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.844490 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.575620436 podStartE2EDuration="10.844464627s" podCreationTimestamp="2026-03-13 10:28:53 +0000 UTC" firstStartedPulling="2026-03-13 10:28:56.246872106 +0000 UTC m=+1510.269402239" lastFinishedPulling="2026-03-13 10:29:02.515716297 +0000 UTC m=+1516.538246430" observedRunningTime="2026-03-13 10:29:03.826333737 +0000 UTC m=+1517.848863870" watchObservedRunningTime="2026-03-13 10:29:03.844464627 +0000 UTC m=+1517.866994760" Mar 13 10:29:03 crc kubenswrapper[4632]: W0313 10:29:03.836536 4632 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0f17959_fde8_4cf1_b255_db5fc3325b70.slice/crio-conmon-c6848744dc1fd449bb0df7b7ca2c04941331806f97abf20c11372e120fb30d31.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0f17959_fde8_4cf1_b255_db5fc3325b70.slice/crio-conmon-c6848744dc1fd449bb0df7b7ca2c04941331806f97abf20c11372e120fb30d31.scope: no such file or directory Mar 13 10:29:03 crc kubenswrapper[4632]: W0313 10:29:03.850412 4632 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0f17959_fde8_4cf1_b255_db5fc3325b70.slice/crio-c6848744dc1fd449bb0df7b7ca2c04941331806f97abf20c11372e120fb30d31.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0f17959_fde8_4cf1_b255_db5fc3325b70.slice/crio-c6848744dc1fd449bb0df7b7ca2c04941331806f97abf20c11372e120fb30d31.scope: no such file or directory Mar 13 10:29:03 crc kubenswrapper[4632]: W0313 10:29:03.851368 4632 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod229b0823_7d97_48cf_9b38_188a3f4ecde3.slice/crio-conmon-9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod229b0823_7d97_48cf_9b38_188a3f4ecde3.slice/crio-conmon-9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2.scope: no such file or directory Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.915796 4632 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.915839 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.917747 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.208:8774/\": dial tcp 10.217.0.208:8774: connect: connection refused" Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.917755 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.208:8774/\": dial tcp 10.217.0.208:8774: connect: connection refused" Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.961857 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 13 10:29:03 crc kubenswrapper[4632]: I0313 10:29:03.961907 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 13 10:29:04 crc kubenswrapper[4632]: I0313 10:29:04.010584 4632 scope.go:117] "RemoveContainer" containerID="0e23e3344de45eadba8d2e2f7dead6b7591126ab6ec56a759524e9fc0c54694e" Mar 13 10:29:04 crc kubenswrapper[4632]: I0313 10:29:04.351392 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 13 10:29:04 crc kubenswrapper[4632]: I0313 10:29:04.351454 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 13 10:29:04 crc kubenswrapper[4632]: I0313 10:29:04.393363 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 10:29:04 crc kubenswrapper[4632]: I0313 10:29:04.465178 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 10:29:04 crc kubenswrapper[4632]: I0313 10:29:04.513621 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 13 10:29:04 crc kubenswrapper[4632]: I0313 10:29:04.560356 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 13 10:29:04 crc kubenswrapper[4632]: I0313 10:29:04.560721 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="89c5451e-248e-46eb-ac20-f52c3e3bcdc4" containerName="nova-cell0-conductor-conductor" containerID="cri-o://b6bf072e344c147c11f620ba388aa0850202403d9ea7d55387c6d06560823aa8" gracePeriod=30 Mar 13 10:29:04 crc kubenswrapper[4632]: I0313 10:29:04.745771 4632 generic.go:334] "Generic (PLEG): container finished" podID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerID="13beea25c7ec581a71ff8aed4dcb89b5326c0045c02a48578bb1e384a8c92d16" exitCode=0 Mar 13 10:29:04 crc kubenswrapper[4632]: I0313 10:29:04.746142 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92d6a890-da6f-4a62-a73d-ad22f8b97586","Type":"ContainerDied","Data":"13beea25c7ec581a71ff8aed4dcb89b5326c0045c02a48578bb1e384a8c92d16"} Mar 13 10:29:04 crc kubenswrapper[4632]: I0313 10:29:04.754909 4632 generic.go:334] "Generic (PLEG): container finished" podID="229b0823-7d97-48cf-9b38-188a3f4ecde3" 
containerID="9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2" exitCode=143 Mar 13 10:29:04 crc kubenswrapper[4632]: I0313 10:29:04.755043 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"229b0823-7d97-48cf-9b38-188a3f4ecde3","Type":"ContainerDied","Data":"9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2"} Mar 13 10:29:04 crc kubenswrapper[4632]: I0313 10:29:04.817008 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 13 10:29:04 crc kubenswrapper[4632]: I0313 10:29:04.914121 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:29:04 crc kubenswrapper[4632]: I0313 10:29:04.919031 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.114675 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7888df55c7-mw5p4"] Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.114976 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" podUID="904f04cd-8110-4637-8bb4-67c4b83e189b" containerName="dnsmasq-dns" containerID="cri-o://4cc9fd73a35e44ae17915d74f83df931e877bf9d4b7384d1b90a6239d1a72628" gracePeriod=10 Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.347755 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.463810 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92d6a890-da6f-4a62-a73d-ad22f8b97586-log-httpd\") pod \"92d6a890-da6f-4a62-a73d-ad22f8b97586\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.464175 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-config-data\") pod \"92d6a890-da6f-4a62-a73d-ad22f8b97586\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.464373 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-combined-ca-bundle\") pod \"92d6a890-da6f-4a62-a73d-ad22f8b97586\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.464547 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92d6a890-da6f-4a62-a73d-ad22f8b97586-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "92d6a890-da6f-4a62-a73d-ad22f8b97586" (UID: "92d6a890-da6f-4a62-a73d-ad22f8b97586"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.465235 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-sg-core-conf-yaml\") pod \"92d6a890-da6f-4a62-a73d-ad22f8b97586\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.465775 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-scripts\") pod \"92d6a890-da6f-4a62-a73d-ad22f8b97586\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.466000 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnfcl\" (UniqueName: \"kubernetes.io/projected/92d6a890-da6f-4a62-a73d-ad22f8b97586-kube-api-access-dnfcl\") pod \"92d6a890-da6f-4a62-a73d-ad22f8b97586\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.466179 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92d6a890-da6f-4a62-a73d-ad22f8b97586-run-httpd\") pod \"92d6a890-da6f-4a62-a73d-ad22f8b97586\" (UID: \"92d6a890-da6f-4a62-a73d-ad22f8b97586\") " Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.466849 4632 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92d6a890-da6f-4a62-a73d-ad22f8b97586-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.467340 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92d6a890-da6f-4a62-a73d-ad22f8b97586-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "92d6a890-da6f-4a62-a73d-ad22f8b97586" (UID: "92d6a890-da6f-4a62-a73d-ad22f8b97586"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.477281 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d6a890-da6f-4a62-a73d-ad22f8b97586-kube-api-access-dnfcl" (OuterVolumeSpecName: "kube-api-access-dnfcl") pod "92d6a890-da6f-4a62-a73d-ad22f8b97586" (UID: "92d6a890-da6f-4a62-a73d-ad22f8b97586"). InnerVolumeSpecName "kube-api-access-dnfcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.505177 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-scripts" (OuterVolumeSpecName: "scripts") pod "92d6a890-da6f-4a62-a73d-ad22f8b97586" (UID: "92d6a890-da6f-4a62-a73d-ad22f8b97586"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.524171 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "92d6a890-da6f-4a62-a73d-ad22f8b97586" (UID: "92d6a890-da6f-4a62-a73d-ad22f8b97586"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.570444 4632 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.570492 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.570503 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnfcl\" (UniqueName: \"kubernetes.io/projected/92d6a890-da6f-4a62-a73d-ad22f8b97586-kube-api-access-dnfcl\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.570514 4632 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92d6a890-da6f-4a62-a73d-ad22f8b97586-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.716422 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92d6a890-da6f-4a62-a73d-ad22f8b97586" (UID: "92d6a890-da6f-4a62-a73d-ad22f8b97586"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.780711 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.798289 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.802158 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92d6a890-da6f-4a62-a73d-ad22f8b97586","Type":"ContainerDied","Data":"61eb61c712456be9b4257c5e2ea6a70dfbfca01a50f8412f9ea8b2cdb5c8b498"} Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.802351 4632 scope.go:117] "RemoveContainer" containerID="07674d77dbcfb4d04e610536847653ba6a156f4e167fb3e30be00823bd80251e" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.824184 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" event={"ID":"904f04cd-8110-4637-8bb4-67c4b83e189b","Type":"ContainerDied","Data":"4cc9fd73a35e44ae17915d74f83df931e877bf9d4b7384d1b90a6239d1a72628"} Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.820239 4632 generic.go:334] "Generic (PLEG): container finished" podID="904f04cd-8110-4637-8bb4-67c4b83e189b" containerID="4cc9fd73a35e44ae17915d74f83df931e877bf9d4b7384d1b90a6239d1a72628" exitCode=0 Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.843169 4632 generic.go:334] "Generic (PLEG): container finished" podID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerID="2dbb3ede37abc9f5b483ae48b13ac3ed8913ac4529c34c39494b1541e21ce00b" exitCode=137 Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.843731 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" containerName="nova-api-log" containerID="cri-o://f1168e3921eed21e9e9f3fc46e51a896484996f89685c7e47f03c083dd2451a8" gracePeriod=30 Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.844337 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3548645f-4c72-4c75-b1bd-95116d47f6e2" containerName="nova-scheduler-scheduler" containerID="cri-o://cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" gracePeriod=30 Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.845279 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" containerName="nova-api-api" containerID="cri-o://29733198b90e2c4e70c2780c9fc89c7f6fe69e2a7c8898353f2770088741eed7" gracePeriod=30 Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.845387 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb5f7878-ng2k2" event={"ID":"3e4afb91-ce26-4325-89c9-2542da2ec48a","Type":"ContainerDied","Data":"2dbb3ede37abc9f5b483ae48b13ac3ed8913ac4529c34c39494b1541e21ce00b"} Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.878766 4632 scope.go:117] "RemoveContainer" containerID="c2c74a9428ab3dfaa995d259e75dccffb44018988a076ef192a947b75ff6a7f1" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.924993 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.942099 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-config-data" (OuterVolumeSpecName: "config-data") pod "92d6a890-da6f-4a62-a73d-ad22f8b97586" (UID: "92d6a890-da6f-4a62-a73d-ad22f8b97586"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.948350 4632 scope.go:117] "RemoveContainer" containerID="a78858d52f4edb9f1b215cb0b5d9d5d059b8c3bfd31b64cf5e0deaf6ab27d4b4" Mar 13 10:29:05 crc kubenswrapper[4632]: I0313 10:29:05.987876 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d6a890-da6f-4a62-a73d-ad22f8b97586-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.096886 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6mzz\" (UniqueName: \"kubernetes.io/projected/904f04cd-8110-4637-8bb4-67c4b83e189b-kube-api-access-k6mzz\") pod \"904f04cd-8110-4637-8bb4-67c4b83e189b\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.163314 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-ovsdbserver-sb\") pod \"904f04cd-8110-4637-8bb4-67c4b83e189b\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.163473 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-config\") pod \"904f04cd-8110-4637-8bb4-67c4b83e189b\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.325616 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/904f04cd-8110-4637-8bb4-67c4b83e189b-kube-api-access-k6mzz" (OuterVolumeSpecName: "kube-api-access-k6mzz") pod "904f04cd-8110-4637-8bb4-67c4b83e189b" (UID: "904f04cd-8110-4637-8bb4-67c4b83e189b"). InnerVolumeSpecName "kube-api-access-k6mzz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.331772 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-dns-svc\") pod \"904f04cd-8110-4637-8bb4-67c4b83e189b\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.332045 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-ovsdbserver-nb\") pod \"904f04cd-8110-4637-8bb4-67c4b83e189b\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.332155 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-dns-swift-storage-0\") pod \"904f04cd-8110-4637-8bb4-67c4b83e189b\" (UID: \"904f04cd-8110-4637-8bb4-67c4b83e189b\") " Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.333163 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6mzz\" (UniqueName: \"kubernetes.io/projected/904f04cd-8110-4637-8bb4-67c4b83e189b-kube-api-access-k6mzz\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.337154 4632 scope.go:117] "RemoveContainer" containerID="13beea25c7ec581a71ff8aed4dcb89b5326c0045c02a48578bb1e384a8c92d16" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.455114 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-config" (OuterVolumeSpecName: "config") pod "904f04cd-8110-4637-8bb4-67c4b83e189b" (UID: "904f04cd-8110-4637-8bb4-67c4b83e189b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.505087 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "904f04cd-8110-4637-8bb4-67c4b83e189b" (UID: "904f04cd-8110-4637-8bb4-67c4b83e189b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.538901 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.538965 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.558155 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "904f04cd-8110-4637-8bb4-67c4b83e189b" (UID: "904f04cd-8110-4637-8bb4-67c4b83e189b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.563381 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "904f04cd-8110-4637-8bb4-67c4b83e189b" (UID: "904f04cd-8110-4637-8bb4-67c4b83e189b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.573514 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "904f04cd-8110-4637-8bb4-67c4b83e189b" (UID: "904f04cd-8110-4637-8bb4-67c4b83e189b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.640689 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.640728 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.640738 4632 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/904f04cd-8110-4637-8bb4-67c4b83e189b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.686209 4632 scope.go:117] "RemoveContainer" containerID="c9dfdd84c36e6ac95b45a488b62e176636bdecfbe3a88d3f5d2058d92ebbacdd" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.696445 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.710803 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.724216 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:29:06 crc kubenswrapper[4632]: E0313 10:29:06.729834 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerName="proxy-httpd" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.729931 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerName="proxy-httpd" Mar 13 10:29:06 crc kubenswrapper[4632]: E0313 10:29:06.730024 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerName="ceilometer-notification-agent" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.730103 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerName="ceilometer-notification-agent" Mar 13 10:29:06 crc kubenswrapper[4632]: E0313 10:29:06.730167 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904f04cd-8110-4637-8bb4-67c4b83e189b" containerName="init" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.730221 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="904f04cd-8110-4637-8bb4-67c4b83e189b" 
containerName="init" Mar 13 10:29:06 crc kubenswrapper[4632]: E0313 10:29:06.730278 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904f04cd-8110-4637-8bb4-67c4b83e189b" containerName="dnsmasq-dns" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.730328 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="904f04cd-8110-4637-8bb4-67c4b83e189b" containerName="dnsmasq-dns" Mar 13 10:29:06 crc kubenswrapper[4632]: E0313 10:29:06.730390 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerName="sg-core" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.730446 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerName="sg-core" Mar 13 10:29:06 crc kubenswrapper[4632]: E0313 10:29:06.730532 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerName="ceilometer-central-agent" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.730593 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerName="ceilometer-central-agent" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.730820 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerName="ceilometer-central-agent" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.730889 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="904f04cd-8110-4637-8bb4-67c4b83e189b" containerName="dnsmasq-dns" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.730984 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerName="proxy-httpd" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.731045 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerName="sg-core" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.731180 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" containerName="ceilometer-notification-agent" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.732898 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.737502 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.737853 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.771788 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.847677 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-config-data\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.848537 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-log-httpd\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.848743 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-run-httpd\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.848901 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.849198 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.849377 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-scripts\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.849495 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtzmr\" (UniqueName: \"kubernetes.io/projected/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-kube-api-access-vtzmr\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.857476 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" event={"ID":"904f04cd-8110-4637-8bb4-67c4b83e189b","Type":"ContainerDied","Data":"b305d4370882ddeb316b7136e1b6a31fb9b050f68adc94baa9487a0176e85bb7"} Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.857771 4632 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7888df55c7-mw5p4" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.874666 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb5f7878-ng2k2" event={"ID":"3e4afb91-ce26-4325-89c9-2542da2ec48a","Type":"ContainerStarted","Data":"0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30"} Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.882188 4632 generic.go:334] "Generic (PLEG): container finished" podID="1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" containerID="29733198b90e2c4e70c2780c9fc89c7f6fe69e2a7c8898353f2770088741eed7" exitCode=0 Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.882232 4632 generic.go:334] "Generic (PLEG): container finished" podID="1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" containerID="f1168e3921eed21e9e9f3fc46e51a896484996f89685c7e47f03c083dd2451a8" exitCode=143 Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.882279 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998","Type":"ContainerDied","Data":"29733198b90e2c4e70c2780c9fc89c7f6fe69e2a7c8898353f2770088741eed7"} Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.882305 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998","Type":"ContainerDied","Data":"f1168e3921eed21e9e9f3fc46e51a896484996f89685c7e47f03c083dd2451a8"} Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.892394 4632 generic.go:334] "Generic (PLEG): container finished" podID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerID="26a7aae686bb479cfcbc8b01e8e10e3fd467e5236d6ffb2ed638373687267401" exitCode=0 Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.892451 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-689764498d-rg7vt" event={"ID":"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c","Type":"ContainerDied","Data":"26a7aae686bb479cfcbc8b01e8e10e3fd467e5236d6ffb2ed638373687267401"} Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.896731 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7888df55c7-mw5p4"] Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.910517 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7888df55c7-mw5p4"] Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.952544 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-config-data\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.952790 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-log-httpd\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.952938 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-run-httpd\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.953142 4632 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.953272 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.953375 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-scripts\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.953625 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtzmr\" (UniqueName: \"kubernetes.io/projected/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-kube-api-access-vtzmr\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.955331 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-log-httpd\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.956649 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-run-httpd\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.958389 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.959142 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-config-data\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.962704 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-scripts\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.963521 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:06 crc kubenswrapper[4632]: I0313 10:29:06.980438 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vtzmr\" (UniqueName: \"kubernetes.io/projected/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-kube-api-access-vtzmr\") pod \"ceilometer-0\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " pod="openstack/ceilometer-0" Mar 13 10:29:07 crc kubenswrapper[4632]: I0313 10:29:07.097079 4632 scope.go:117] "RemoveContainer" containerID="4cc9fd73a35e44ae17915d74f83df931e877bf9d4b7384d1b90a6239d1a72628" Mar 13 10:29:07 crc kubenswrapper[4632]: I0313 10:29:07.100375 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:29:07 crc kubenswrapper[4632]: I0313 10:29:07.150197 4632 scope.go:117] "RemoveContainer" containerID="10ef0805fc14af19dcea5ad4d4426bd1471fa5008be0ab704ad9b901662ea060" Mar 13 10:29:07 crc kubenswrapper[4632]: I0313 10:29:07.230984 4632 scope.go:117] "RemoveContainer" containerID="433c9aa5a02161c4bc7228b52cc460020479cbbb899bc6549755a59b8ad796f4" Mar 13 10:29:07 crc kubenswrapper[4632]: I0313 10:29:07.249240 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:29:07 crc kubenswrapper[4632]: I0313 10:29:07.254070 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:29:07 crc kubenswrapper[4632]: E0313 10:29:07.470575 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6bf072e344c147c11f620ba388aa0850202403d9ea7d55387c6d06560823aa8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 13 10:29:07 crc kubenswrapper[4632]: E0313 10:29:07.474772 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6bf072e344c147c11f620ba388aa0850202403d9ea7d55387c6d06560823aa8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 13 10:29:07 crc kubenswrapper[4632]: E0313 10:29:07.483768 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6bf072e344c147c11f620ba388aa0850202403d9ea7d55387c6d06560823aa8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 13 10:29:07 crc kubenswrapper[4632]: E0313 10:29:07.483835 4632 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="89c5451e-248e-46eb-ac20-f52c3e3bcdc4" containerName="nova-cell0-conductor-conductor" Mar 13 10:29:07 crc kubenswrapper[4632]: I0313 10:29:07.877445 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.026160 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-689764498d-rg7vt" event={"ID":"5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c","Type":"ContainerStarted","Data":"468b0c833599c14f4c7d5ed1aa0e813466a55e5432fa312ef3fc463200e9d1b1"} Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.090733 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="904f04cd-8110-4637-8bb4-67c4b83e189b" 
path="/var/lib/kubelet/pods/904f04cd-8110-4637-8bb4-67c4b83e189b/volumes" Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.095606 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d6a890-da6f-4a62-a73d-ad22f8b97586" path="/var/lib/kubelet/pods/92d6a890-da6f-4a62-a73d-ad22f8b97586/volumes" Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.279300 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.400871 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-config-data\") pod \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\" (UID: \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\") " Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.401003 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-logs\") pod \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\" (UID: \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\") " Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.401033 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-combined-ca-bundle\") pod \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\" (UID: \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\") " Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.401083 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9h2v\" (UniqueName: \"kubernetes.io/projected/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-kube-api-access-g9h2v\") pod \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\" (UID: \"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998\") " Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.401404 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-logs" (OuterVolumeSpecName: "logs") pod "1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" (UID: "1b3f0cc6-72ae-4738-baee-ce7bc9ef2998"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.401499 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-logs\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.414209 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-kube-api-access-g9h2v" (OuterVolumeSpecName: "kube-api-access-g9h2v") pod "1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" (UID: "1b3f0cc6-72ae-4738-baee-ce7bc9ef2998"). InnerVolumeSpecName "kube-api-access-g9h2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.447544 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n4z22" podUID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerName="registry-server" probeResult="failure" output=< Mar 13 10:29:08 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:29:08 crc kubenswrapper[4632]: > Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.485142 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" (UID: "1b3f0cc6-72ae-4738-baee-ce7bc9ef2998"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.503405 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.503434 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9h2v\" (UniqueName: \"kubernetes.io/projected/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-kube-api-access-g9h2v\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.503506 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-config-data" (OuterVolumeSpecName: "config-data") pod "1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" (UID: "1b3f0cc6-72ae-4738-baee-ce7bc9ef2998"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:29:08 crc kubenswrapper[4632]: I0313 10:29:08.671230 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.078522 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aae6e23c-4223-4a5c-8074-0cdfb5d99e78","Type":"ContainerStarted","Data":"fb2da7115022d9a5776bf008164ce9b7e7fedf403dd765c92f96767951f31f8f"} Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.079718 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aae6e23c-4223-4a5c-8074-0cdfb5d99e78","Type":"ContainerStarted","Data":"ac0a78784a347100c3f7eb503566ca534d51cff4225e41d7e119e9bf5dd8e6ea"} Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.096528 4632 generic.go:334] "Generic (PLEG): container finished" podID="89c5451e-248e-46eb-ac20-f52c3e3bcdc4" containerID="b6bf072e344c147c11f620ba388aa0850202403d9ea7d55387c6d06560823aa8" exitCode=0 Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.096831 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"89c5451e-248e-46eb-ac20-f52c3e3bcdc4","Type":"ContainerDied","Data":"b6bf072e344c147c11f620ba388aa0850202403d9ea7d55387c6d06560823aa8"} Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.143328 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.143601 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b3f0cc6-72ae-4738-baee-ce7bc9ef2998","Type":"ContainerDied","Data":"34daac34beb9c4f24aabab024257734fad48bd622c9a19af69564a5c1af316f2"} Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.143696 4632 scope.go:117] "RemoveContainer" containerID="29733198b90e2c4e70c2780c9fc89c7f6fe69e2a7c8898353f2770088741eed7" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.207771 4632 scope.go:117] "RemoveContainer" containerID="f1168e3921eed21e9e9f3fc46e51a896484996f89685c7e47f03c083dd2451a8" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.252991 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.283615 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.302921 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 13 10:29:09 crc kubenswrapper[4632]: E0313 10:29:09.303337 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" containerName="nova-api-log" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.303356 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" containerName="nova-api-log" Mar 13 10:29:09 crc kubenswrapper[4632]: E0313 10:29:09.303366 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" containerName="nova-api-api" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.303372 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" containerName="nova-api-api" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.303597 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" containerName="nova-api-log" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.303636 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" containerName="nova-api-api" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.304664 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.307248 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.326252 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 10:29:09 crc kubenswrapper[4632]: E0313 10:29:09.357842 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 13 10:29:09 crc kubenswrapper[4632]: E0313 10:29:09.359965 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 13 10:29:09 crc kubenswrapper[4632]: E0313 10:29:09.363505 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 13 10:29:09 crc kubenswrapper[4632]: E0313 10:29:09.363566 4632 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3548645f-4c72-4c75-b1bd-95116d47f6e2" containerName="nova-scheduler-scheduler" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.490984 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-config-data\") pod \"nova-api-0\" (UID: \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\") " pod="openstack/nova-api-0" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.491118 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-logs\") pod \"nova-api-0\" (UID: \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\") " pod="openstack/nova-api-0" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.491206 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l5th\" (UniqueName: \"kubernetes.io/projected/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-kube-api-access-8l5th\") pod \"nova-api-0\" (UID: \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\") " pod="openstack/nova-api-0" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.491254 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\") " pod="openstack/nova-api-0" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.593268 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\") " pod="openstack/nova-api-0" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.593381 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-config-data\") pod \"nova-api-0\" (UID: \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\") " pod="openstack/nova-api-0" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.593486 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-logs\") pod \"nova-api-0\" (UID: \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\") " pod="openstack/nova-api-0" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.593570 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l5th\" (UniqueName: \"kubernetes.io/projected/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-kube-api-access-8l5th\") pod \"nova-api-0\" (UID: \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\") " pod="openstack/nova-api-0" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.600126 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-logs\") pod \"nova-api-0\" (UID: \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\") " pod="openstack/nova-api-0" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.607846 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-config-data\") pod \"nova-api-0\" (UID: \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\") " pod="openstack/nova-api-0" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.616216 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\") " pod="openstack/nova-api-0" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.650741 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l5th\" (UniqueName: \"kubernetes.io/projected/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-kube-api-access-8l5th\") pod \"nova-api-0\" (UID: \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\") " pod="openstack/nova-api-0" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.695503 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.799659 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.910121 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjsd4\" (UniqueName: \"kubernetes.io/projected/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-kube-api-access-pjsd4\") pod \"89c5451e-248e-46eb-ac20-f52c3e3bcdc4\" (UID: \"89c5451e-248e-46eb-ac20-f52c3e3bcdc4\") " Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.910454 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-combined-ca-bundle\") pod \"89c5451e-248e-46eb-ac20-f52c3e3bcdc4\" (UID: \"89c5451e-248e-46eb-ac20-f52c3e3bcdc4\") " Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.910558 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-config-data\") pod \"89c5451e-248e-46eb-ac20-f52c3e3bcdc4\" (UID: \"89c5451e-248e-46eb-ac20-f52c3e3bcdc4\") " Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.926093 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-kube-api-access-pjsd4" (OuterVolumeSpecName: "kube-api-access-pjsd4") pod "89c5451e-248e-46eb-ac20-f52c3e3bcdc4" (UID: "89c5451e-248e-46eb-ac20-f52c3e3bcdc4"). InnerVolumeSpecName "kube-api-access-pjsd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:29:09 crc kubenswrapper[4632]: I0313 10:29:09.966216 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89c5451e-248e-46eb-ac20-f52c3e3bcdc4" (UID: "89c5451e-248e-46eb-ac20-f52c3e3bcdc4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.014456 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjsd4\" (UniqueName: \"kubernetes.io/projected/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-kube-api-access-pjsd4\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.014725 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.111097 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-config-data" (OuterVolumeSpecName: "config-data") pod "89c5451e-248e-46eb-ac20-f52c3e3bcdc4" (UID: "89c5451e-248e-46eb-ac20-f52c3e3bcdc4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.113384 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b3f0cc6-72ae-4738-baee-ce7bc9ef2998" path="/var/lib/kubelet/pods/1b3f0cc6-72ae-4738-baee-ce7bc9ef2998/volumes"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.120602 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89c5451e-248e-46eb-ac20-f52c3e3bcdc4-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.214398 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.215344 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"89c5451e-248e-46eb-ac20-f52c3e3bcdc4","Type":"ContainerDied","Data":"fea8b62da5fff833a90864e9fa4a28877f40e3642c9c75596310ee934707e980"}
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.215406 4632 scope.go:117] "RemoveContainer" containerID="b6bf072e344c147c11f620ba388aa0850202403d9ea7d55387c6d06560823aa8"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.239010 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.278469 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aae6e23c-4223-4a5c-8074-0cdfb5d99e78","Type":"ContainerStarted","Data":"b21cf0ac61b24fc322cab134fff86780fa6ec89891d057825ed89069f94de13b"}
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.307045 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.340862 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.427014 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 13 10:29:10 crc kubenswrapper[4632]: E0313 10:29:10.427517 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c5451e-248e-46eb-ac20-f52c3e3bcdc4" containerName="nova-cell0-conductor-conductor"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.427534 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c5451e-248e-46eb-ac20-f52c3e3bcdc4" containerName="nova-cell0-conductor-conductor"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.427791 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="89c5451e-248e-46eb-ac20-f52c3e3bcdc4" containerName="nova-cell0-conductor-conductor"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.436222 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.446743 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.449965 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.540801 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe53f0a-8bf3-4572-b5c8-01d5ed72c426-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dbe53f0a-8bf3-4572-b5c8-01d5ed72c426\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.541183 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe53f0a-8bf3-4572-b5c8-01d5ed72c426-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dbe53f0a-8bf3-4572-b5c8-01d5ed72c426\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.541236 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j5rv\" (UniqueName: \"kubernetes.io/projected/dbe53f0a-8bf3-4572-b5c8-01d5ed72c426-kube-api-access-5j5rv\") pod \"nova-cell0-conductor-0\" (UID: \"dbe53f0a-8bf3-4572-b5c8-01d5ed72c426\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.642682 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe53f0a-8bf3-4572-b5c8-01d5ed72c426-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dbe53f0a-8bf3-4572-b5c8-01d5ed72c426\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.643064 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j5rv\" (UniqueName: \"kubernetes.io/projected/dbe53f0a-8bf3-4572-b5c8-01d5ed72c426-kube-api-access-5j5rv\") pod \"nova-cell0-conductor-0\" (UID: \"dbe53f0a-8bf3-4572-b5c8-01d5ed72c426\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.643251 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe53f0a-8bf3-4572-b5c8-01d5ed72c426-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dbe53f0a-8bf3-4572-b5c8-01d5ed72c426\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.646835 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.654345 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe53f0a-8bf3-4572-b5c8-01d5ed72c426-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dbe53f0a-8bf3-4572-b5c8-01d5ed72c426\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.657474 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe53f0a-8bf3-4572-b5c8-01d5ed72c426-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dbe53f0a-8bf3-4572-b5c8-01d5ed72c426\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.670613 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j5rv\" (UniqueName: \"kubernetes.io/projected/dbe53f0a-8bf3-4572-b5c8-01d5ed72c426-kube-api-access-5j5rv\") pod \"nova-cell0-conductor-0\" (UID: \"dbe53f0a-8bf3-4572-b5c8-01d5ed72c426\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 10:29:10 crc kubenswrapper[4632]: I0313 10:29:10.793639 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 13 10:29:11 crc kubenswrapper[4632]: I0313 10:29:11.297119 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71d36cc3-a7e3-47e5-b98f-599dc669ccc5","Type":"ContainerStarted","Data":"4df2156f6fe32fab45f05d256a8ec2adb23f786a2989c939b92b996a496f122f"}
Mar 13 10:29:11 crc kubenswrapper[4632]: I0313 10:29:11.297450 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71d36cc3-a7e3-47e5-b98f-599dc669ccc5","Type":"ContainerStarted","Data":"85c564a65a9f43d06ad1647efab572b9e533b3cb0feadf64be1f6226a656d6e9"}
Mar 13 10:29:11 crc kubenswrapper[4632]: I0313 10:29:11.309101 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aae6e23c-4223-4a5c-8074-0cdfb5d99e78","Type":"ContainerStarted","Data":"585bc9dbde7bcb94e64b2121baa3ef51bee81b44562ad73e38572b5035710787"}
Mar 13 10:29:11 crc kubenswrapper[4632]: I0313 10:29:11.446953 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 13 10:29:12 crc kubenswrapper[4632]: I0313 10:29:12.055103 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89c5451e-248e-46eb-ac20-f52c3e3bcdc4" path="/var/lib/kubelet/pods/89c5451e-248e-46eb-ac20-f52c3e3bcdc4/volumes"
Mar 13 10:29:12 crc kubenswrapper[4632]: I0313 10:29:12.320984 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dbe53f0a-8bf3-4572-b5c8-01d5ed72c426","Type":"ContainerStarted","Data":"37c2692b2411684aa86eeacfa77a4627a5e26055bee50129ef79a6b5d713b14b"}
Mar 13 10:29:12 crc kubenswrapper[4632]: I0313 10:29:12.321033 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dbe53f0a-8bf3-4572-b5c8-01d5ed72c426","Type":"ContainerStarted","Data":"db803b63c80478ad379bb99fe0594d8b3d74a21a8ad14d3a30fc918f8f1ba6e1"}
Mar 13 10:29:12 crc kubenswrapper[4632]: I0313 10:29:12.322923 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Mar 13 10:29:12 crc kubenswrapper[4632]: I0313 10:29:12.323669 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71d36cc3-a7e3-47e5-b98f-599dc669ccc5","Type":"ContainerStarted","Data":"e604b5ae6ce92dde6f33a140a99a7c7d5949aebd7f4821ef087f38b50a0e872b"}
Mar 13 10:29:12 crc kubenswrapper[4632]: I0313 10:29:12.341648 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.34161136 podStartE2EDuration="2.34161136s" podCreationTimestamp="2026-03-13 10:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:29:12.338865823 +0000 UTC m=+1526.361395966" watchObservedRunningTime="2026-03-13 10:29:12.34161136 +0000 UTC m=+1526.364141493"
Mar 13 10:29:12 crc kubenswrapper[4632]: I0313 10:29:12.360621 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.360598262 podStartE2EDuration="3.360598262s" podCreationTimestamp="2026-03-13 10:29:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:29:12.358495111 +0000 UTC m=+1526.381025244" watchObservedRunningTime="2026-03-13 10:29:12.360598262 +0000 UTC m=+1526.383128385"
Mar 13 10:29:13 crc kubenswrapper[4632]: I0313 10:29:13.336446 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aae6e23c-4223-4a5c-8074-0cdfb5d99e78","Type":"ContainerStarted","Data":"604c723ae68fd3213f4723bdf3524877331344576a1c363f42185040543011d1"}
Mar 13 10:29:13 crc kubenswrapper[4632]: I0313 10:29:13.337454 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="ceilometer-central-agent" containerID="cri-o://fb2da7115022d9a5776bf008164ce9b7e7fedf403dd765c92f96767951f31f8f" gracePeriod=30
Mar 13 10:29:13 crc kubenswrapper[4632]: I0313 10:29:13.337480 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 13 10:29:13 crc kubenswrapper[4632]: I0313 10:29:13.337479 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="proxy-httpd" containerID="cri-o://604c723ae68fd3213f4723bdf3524877331344576a1c363f42185040543011d1" gracePeriod=30
Mar 13 10:29:13 crc kubenswrapper[4632]: I0313 10:29:13.337495 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="sg-core" containerID="cri-o://585bc9dbde7bcb94e64b2121baa3ef51bee81b44562ad73e38572b5035710787" gracePeriod=30
Mar 13 10:29:13 crc kubenswrapper[4632]: I0313 10:29:13.337521 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="ceilometer-notification-agent" containerID="cri-o://b21cf0ac61b24fc322cab134fff86780fa6ec89891d057825ed89069f94de13b" gracePeriod=30
Mar 13 10:29:13 crc kubenswrapper[4632]: I0313 10:29:13.393908 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.929082165 podStartE2EDuration="7.393884886s" podCreationTimestamp="2026-03-13 10:29:06 +0000 UTC" firstStartedPulling="2026-03-13 10:29:07.972752583 +0000 UTC m=+1521.995282726" lastFinishedPulling="2026-03-13 10:29:12.437555314 +0000 UTC m=+1526.460085447" observedRunningTime="2026-03-13 10:29:13.376074042 +0000 UTC m=+1527.398604175" watchObservedRunningTime="2026-03-13 10:29:13.393884886 +0000 UTC m=+1527.416415019"
Mar 13 10:29:14 crc kubenswrapper[4632]: I0313 10:29:14.349887 4632 generic.go:334] "Generic (PLEG): container finished" podID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerID="604c723ae68fd3213f4723bdf3524877331344576a1c363f42185040543011d1" exitCode=0
Mar 13 10:29:14 crc kubenswrapper[4632]: I0313 10:29:14.350208 4632 generic.go:334] "Generic (PLEG): container finished" podID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerID="585bc9dbde7bcb94e64b2121baa3ef51bee81b44562ad73e38572b5035710787" exitCode=2
Mar 13 10:29:14 crc kubenswrapper[4632]: I0313 10:29:14.349932 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aae6e23c-4223-4a5c-8074-0cdfb5d99e78","Type":"ContainerDied","Data":"604c723ae68fd3213f4723bdf3524877331344576a1c363f42185040543011d1"}
Mar 13 10:29:14 crc kubenswrapper[4632]: I0313 10:29:14.350257 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aae6e23c-4223-4a5c-8074-0cdfb5d99e78","Type":"ContainerDied","Data":"585bc9dbde7bcb94e64b2121baa3ef51bee81b44562ad73e38572b5035710787"}
Mar 13 10:29:14 crc kubenswrapper[4632]: E0313 10:29:14.353407 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 10:29:14 crc kubenswrapper[4632]: E0313 10:29:14.354776 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 10:29:14 crc kubenswrapper[4632]: E0313 10:29:14.356467 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 10:29:14 crc kubenswrapper[4632]: E0313 10:29:14.356505 4632 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3548645f-4c72-4c75-b1bd-95116d47f6e2" containerName="nova-scheduler-scheduler"
Mar 13 10:29:15 crc kubenswrapper[4632]: I0313 10:29:15.394377 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7bdb5f7878-ng2k2"
Mar 13 10:29:15 crc kubenswrapper[4632]: I0313 10:29:15.394813 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bdb5f7878-ng2k2"
Mar 13 10:29:16 crc kubenswrapper[4632]: I0313 10:29:15.857046 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-689764498d-rg7vt"
Mar 13 10:29:16 crc kubenswrapper[4632]: I0313 10:29:15.858031 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-689764498d-rg7vt"
Mar 13 10:29:16 crc kubenswrapper[4632]: I0313 10:29:16.393873 4632 generic.go:334] "Generic (PLEG): container finished" podID="bcce9343-52a3-4e6d-98fd-8e66390020ac" containerID="379356ecac878a5f4776d015be267e8c7eec62c977ce924abd53ff44455ce8e4" exitCode=0
Mar 13 10:29:16 crc kubenswrapper[4632]: I0313 10:29:16.394002 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gwj5n" event={"ID":"bcce9343-52a3-4e6d-98fd-8e66390020ac","Type":"ContainerDied","Data":"379356ecac878a5f4776d015be267e8c7eec62c977ce924abd53ff44455ce8e4"}
Mar 13 10:29:17 crc kubenswrapper[4632]: I0313 10:29:17.411111 4632 generic.go:334] "Generic (PLEG): container finished" podID="cf19672e-3284-49bc-a460-f2e629881d9b" containerID="a8f98d9cfd7da7677c0fe463edd081d6aa2858ecb1027917673862b2700f1545" exitCode=0
Mar 13 10:29:17 crc kubenswrapper[4632]: I0313 10:29:17.411366 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9n7gj" event={"ID":"cf19672e-3284-49bc-a460-f2e629881d9b","Type":"ContainerDied","Data":"a8f98d9cfd7da7677c0fe463edd081d6aa2858ecb1027917673862b2700f1545"}
Mar 13 10:29:17 crc kubenswrapper[4632]: I0313 10:29:17.850579 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gwj5n"
Mar 13 10:29:17 crc kubenswrapper[4632]: I0313 10:29:17.967913 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-config-data\") pod \"bcce9343-52a3-4e6d-98fd-8e66390020ac\" (UID: \"bcce9343-52a3-4e6d-98fd-8e66390020ac\") "
Mar 13 10:29:17 crc kubenswrapper[4632]: I0313 10:29:17.968100 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-combined-ca-bundle\") pod \"bcce9343-52a3-4e6d-98fd-8e66390020ac\" (UID: \"bcce9343-52a3-4e6d-98fd-8e66390020ac\") "
Mar 13 10:29:17 crc kubenswrapper[4632]: I0313 10:29:17.968287 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxd74\" (UniqueName: \"kubernetes.io/projected/bcce9343-52a3-4e6d-98fd-8e66390020ac-kube-api-access-sxd74\") pod \"bcce9343-52a3-4e6d-98fd-8e66390020ac\" (UID: \"bcce9343-52a3-4e6d-98fd-8e66390020ac\") "
Mar 13 10:29:17 crc kubenswrapper[4632]: I0313 10:29:17.968367 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-scripts\") pod \"bcce9343-52a3-4e6d-98fd-8e66390020ac\" (UID: \"bcce9343-52a3-4e6d-98fd-8e66390020ac\") "
Mar 13 10:29:17 crc kubenswrapper[4632]: I0313 10:29:17.974707 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcce9343-52a3-4e6d-98fd-8e66390020ac-kube-api-access-sxd74" (OuterVolumeSpecName: "kube-api-access-sxd74") pod "bcce9343-52a3-4e6d-98fd-8e66390020ac" (UID: "bcce9343-52a3-4e6d-98fd-8e66390020ac"). InnerVolumeSpecName "kube-api-access-sxd74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:29:17 crc kubenswrapper[4632]: I0313 10:29:17.977783 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-scripts" (OuterVolumeSpecName: "scripts") pod "bcce9343-52a3-4e6d-98fd-8e66390020ac" (UID: "bcce9343-52a3-4e6d-98fd-8e66390020ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:29:18 crc kubenswrapper[4632]: I0313 10:29:18.007744 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-config-data" (OuterVolumeSpecName: "config-data") pod "bcce9343-52a3-4e6d-98fd-8e66390020ac" (UID: "bcce9343-52a3-4e6d-98fd-8e66390020ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:29:18 crc kubenswrapper[4632]: I0313 10:29:18.016106 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcce9343-52a3-4e6d-98fd-8e66390020ac" (UID: "bcce9343-52a3-4e6d-98fd-8e66390020ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:29:18 crc kubenswrapper[4632]: I0313 10:29:18.070185 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:18 crc kubenswrapper[4632]: I0313 10:29:18.070217 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxd74\" (UniqueName: \"kubernetes.io/projected/bcce9343-52a3-4e6d-98fd-8e66390020ac-kube-api-access-sxd74\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:18 crc kubenswrapper[4632]: I0313 10:29:18.070228 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:18 crc kubenswrapper[4632]: I0313 10:29:18.070237 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcce9343-52a3-4e6d-98fd-8e66390020ac-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:18 crc kubenswrapper[4632]: I0313 10:29:18.298036 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n4z22" podUID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerName="registry-server" probeResult="failure" output=<
Mar 13 10:29:18 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 10:29:18 crc kubenswrapper[4632]: >
Mar 13 10:29:18 crc kubenswrapper[4632]: I0313 10:29:18.423624 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gwj5n"
Mar 13 10:29:18 crc kubenswrapper[4632]: I0313 10:29:18.424832 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gwj5n" event={"ID":"bcce9343-52a3-4e6d-98fd-8e66390020ac","Type":"ContainerDied","Data":"f930aa1f0069e1fe78556089c981d58cc3cf6a82579a76e645212ffad42b673e"}
Mar 13 10:29:18 crc kubenswrapper[4632]: I0313 10:29:18.424857 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f930aa1f0069e1fe78556089c981d58cc3cf6a82579a76e645212ffad42b673e"
Mar 13 10:29:18 crc kubenswrapper[4632]: I0313 10:29:18.883148 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9n7gj"
Mar 13 10:29:18 crc kubenswrapper[4632]: I0313 10:29:18.990545 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cpkl\" (UniqueName: \"kubernetes.io/projected/cf19672e-3284-49bc-a460-f2e629881d9b-kube-api-access-2cpkl\") pod \"cf19672e-3284-49bc-a460-f2e629881d9b\" (UID: \"cf19672e-3284-49bc-a460-f2e629881d9b\") "
Mar 13 10:29:18 crc kubenswrapper[4632]: I0313 10:29:18.991064 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-scripts\") pod \"cf19672e-3284-49bc-a460-f2e629881d9b\" (UID: \"cf19672e-3284-49bc-a460-f2e629881d9b\") "
Mar 13 10:29:18 crc kubenswrapper[4632]: I0313 10:29:18.991159 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-combined-ca-bundle\") pod \"cf19672e-3284-49bc-a460-f2e629881d9b\" (UID: \"cf19672e-3284-49bc-a460-f2e629881d9b\") "
Mar 13 10:29:18 crc kubenswrapper[4632]: I0313 10:29:18.991268 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-config-data\") pod \"cf19672e-3284-49bc-a460-f2e629881d9b\" (UID: \"cf19672e-3284-49bc-a460-f2e629881d9b\") "
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.006414 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf19672e-3284-49bc-a460-f2e629881d9b-kube-api-access-2cpkl" (OuterVolumeSpecName: "kube-api-access-2cpkl") pod "cf19672e-3284-49bc-a460-f2e629881d9b" (UID: "cf19672e-3284-49bc-a460-f2e629881d9b"). InnerVolumeSpecName "kube-api-access-2cpkl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.015261 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-scripts" (OuterVolumeSpecName: "scripts") pod "cf19672e-3284-49bc-a460-f2e629881d9b" (UID: "cf19672e-3284-49bc-a460-f2e629881d9b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.034894 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf19672e-3284-49bc-a460-f2e629881d9b" (UID: "cf19672e-3284-49bc-a460-f2e629881d9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.035556 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-config-data" (OuterVolumeSpecName: "config-data") pod "cf19672e-3284-49bc-a460-f2e629881d9b" (UID: "cf19672e-3284-49bc-a460-f2e629881d9b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.095668 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cpkl\" (UniqueName: \"kubernetes.io/projected/cf19672e-3284-49bc-a460-f2e629881d9b-kube-api-access-2cpkl\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.095704 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.095716 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.095727 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf19672e-3284-49bc-a460-f2e629881d9b-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:19 crc kubenswrapper[4632]: E0313 10:29:19.355343 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 10:29:19 crc kubenswrapper[4632]: E0313 10:29:19.356711 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 10:29:19 crc kubenswrapper[4632]: E0313 10:29:19.357814 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 10:29:19 crc kubenswrapper[4632]: E0313 10:29:19.357851 4632 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3548645f-4c72-4c75-b1bd-95116d47f6e2" containerName="nova-scheduler-scheduler"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.435077 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9n7gj" event={"ID":"cf19672e-3284-49bc-a460-f2e629881d9b","Type":"ContainerDied","Data":"2552baf18c3652ea7e85b1cf98a826d4538b2dbd01aa4210322514f06f9b9c99"}
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.435119 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2552baf18c3652ea7e85b1cf98a826d4538b2dbd01aa4210322514f06f9b9c99"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.435152 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9n7gj"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.526259 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 13 10:29:19 crc kubenswrapper[4632]: E0313 10:29:19.527412 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcce9343-52a3-4e6d-98fd-8e66390020ac" containerName="nova-manage"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.527429 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcce9343-52a3-4e6d-98fd-8e66390020ac" containerName="nova-manage"
Mar 13 10:29:19 crc kubenswrapper[4632]: E0313 10:29:19.527446 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf19672e-3284-49bc-a460-f2e629881d9b" containerName="nova-cell1-conductor-db-sync"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.527453 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf19672e-3284-49bc-a460-f2e629881d9b" containerName="nova-cell1-conductor-db-sync"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.527628 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcce9343-52a3-4e6d-98fd-8e66390020ac" containerName="nova-manage"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.527666 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf19672e-3284-49bc-a460-f2e629881d9b" containerName="nova-cell1-conductor-db-sync"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.528270 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.531363 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.588303 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.619583 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febcbdc5-25a6-46f7-8c06-d6f45624a466-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"febcbdc5-25a6-46f7-8c06-d6f45624a466\") " pod="openstack/nova-cell1-conductor-0"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.619649 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febcbdc5-25a6-46f7-8c06-d6f45624a466-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"febcbdc5-25a6-46f7-8c06-d6f45624a466\") " pod="openstack/nova-cell1-conductor-0"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.619727 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wxm2\" (UniqueName: \"kubernetes.io/projected/febcbdc5-25a6-46f7-8c06-d6f45624a466-kube-api-access-4wxm2\") pod \"nova-cell1-conductor-0\" (UID: \"febcbdc5-25a6-46f7-8c06-d6f45624a466\") " pod="openstack/nova-cell1-conductor-0"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.697040 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.697449 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.722196 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febcbdc5-25a6-46f7-8c06-d6f45624a466-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"febcbdc5-25a6-46f7-8c06-d6f45624a466\") " pod="openstack/nova-cell1-conductor-0"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.722267 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febcbdc5-25a6-46f7-8c06-d6f45624a466-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"febcbdc5-25a6-46f7-8c06-d6f45624a466\") " pod="openstack/nova-cell1-conductor-0"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.722334 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wxm2\" (UniqueName: \"kubernetes.io/projected/febcbdc5-25a6-46f7-8c06-d6f45624a466-kube-api-access-4wxm2\") pod \"nova-cell1-conductor-0\" (UID: \"febcbdc5-25a6-46f7-8c06-d6f45624a466\") " pod="openstack/nova-cell1-conductor-0"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.737250 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febcbdc5-25a6-46f7-8c06-d6f45624a466-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"febcbdc5-25a6-46f7-8c06-d6f45624a466\") " pod="openstack/nova-cell1-conductor-0"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.741641 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febcbdc5-25a6-46f7-8c06-d6f45624a466-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"febcbdc5-25a6-46f7-8c06-d6f45624a466\") " pod="openstack/nova-cell1-conductor-0"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.747646 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wxm2\" (UniqueName: \"kubernetes.io/projected/febcbdc5-25a6-46f7-8c06-d6f45624a466-kube-api-access-4wxm2\") pod \"nova-cell1-conductor-0\" (UID: \"febcbdc5-25a6-46f7-8c06-d6f45624a466\") " pod="openstack/nova-cell1-conductor-0"
Mar 13 10:29:19 crc kubenswrapper[4632]: I0313 10:29:19.853215 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Mar 13 10:29:20 crc kubenswrapper[4632]: I0313 10:29:20.779284 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="71d36cc3-a7e3-47e5-b98f-599dc669ccc5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:29:20 crc kubenswrapper[4632]: I0313 10:29:20.779307 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="71d36cc3-a7e3-47e5-b98f-599dc669ccc5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:29:20 crc kubenswrapper[4632]: I0313 10:29:20.853843 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Mar 13 10:29:21 crc kubenswrapper[4632]: I0313 10:29:21.309446 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 13 10:29:21 crc kubenswrapper[4632]: I0313 10:29:21.489748 4632 generic.go:334] "Generic (PLEG): container finished" podID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerID="fb2da7115022d9a5776bf008164ce9b7e7fedf403dd765c92f96767951f31f8f" exitCode=0
Mar 13 10:29:21 crc kubenswrapper[4632]: I0313 10:29:21.490126 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aae6e23c-4223-4a5c-8074-0cdfb5d99e78","Type":"ContainerDied","Data":"fb2da7115022d9a5776bf008164ce9b7e7fedf403dd765c92f96767951f31f8f"}
Mar 13 10:29:21 crc kubenswrapper[4632]: I0313 10:29:21.496624 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"febcbdc5-25a6-46f7-8c06-d6f45624a466","Type":"ContainerStarted","Data":"7db6edb34f93eb0d1c2adcd3461408006f8a86257dba9d9ea35c1b93c50a9f25"}
Mar 13 10:29:21 crc kubenswrapper[4632]: I0313 10:29:21.603959 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 13 10:29:21 crc kubenswrapper[4632]: I0313 10:29:21.604191 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="71d36cc3-a7e3-47e5-b98f-599dc669ccc5" containerName="nova-api-log" containerID="cri-o://4df2156f6fe32fab45f05d256a8ec2adb23f786a2989c939b92b996a496f122f" gracePeriod=30
Mar 13 10:29:21 crc kubenswrapper[4632]: I0313 10:29:21.604712 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="71d36cc3-a7e3-47e5-b98f-599dc669ccc5" containerName="nova-api-api" containerID="cri-o://e604b5ae6ce92dde6f33a140a99a7c7d5949aebd7f4821ef087f38b50a0e872b" gracePeriod=30
Mar 13 10:29:22 crc kubenswrapper[4632]: I0313 10:29:22.512110 4632 generic.go:334] "Generic (PLEG): container finished" podID="71d36cc3-a7e3-47e5-b98f-599dc669ccc5" containerID="4df2156f6fe32fab45f05d256a8ec2adb23f786a2989c939b92b996a496f122f" exitCode=143
Mar 13 10:29:22 crc kubenswrapper[4632]: I0313 10:29:22.512169 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71d36cc3-a7e3-47e5-b98f-599dc669ccc5","Type":"ContainerDied","Data":"4df2156f6fe32fab45f05d256a8ec2adb23f786a2989c939b92b996a496f122f"}
Mar 13 10:29:22 crc kubenswrapper[4632]: I0313 10:29:22.515543 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"febcbdc5-25a6-46f7-8c06-d6f45624a466","Type":"ContainerStarted","Data":"dc567f2c9ac20bcf2a21914a8653c160f2de73b0622331bb81ae937c887e0ffb"}
Mar 13 10:29:22 crc kubenswrapper[4632]: I0313 10:29:22.516974 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Mar 13 10:29:22 crc kubenswrapper[4632]: I0313 10:29:22.545353 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.542492154 podStartE2EDuration="3.542492154s" podCreationTimestamp="2026-03-13 10:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:29:22.536373596 +0000 UTC m=+1536.558903739" watchObservedRunningTime="2026-03-13 10:29:22.542492154 +0000 UTC m=+1536.565022297"
Mar 13 10:29:24 crc kubenswrapper[4632]: E0313 10:29:24.352682 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 10:29:24 crc kubenswrapper[4632]: E0313 10:29:24.354698 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 10:29:24 crc kubenswrapper[4632]: E0313 10:29:24.366409 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 10:29:24 crc kubenswrapper[4632]: E0313 10:29:24.366486 4632 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3548645f-4c72-4c75-b1bd-95116d47f6e2" containerName="nova-scheduler-scheduler"
Mar 13 10:29:25 crc kubenswrapper[4632]: I0313 10:29:25.395670 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
Mar 13 10:29:25 crc kubenswrapper[4632]: I0313 10:29:25.864954 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-689764498d-rg7vt" podUID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused"
Mar 13 10:29:28 crc kubenswrapper[4632]: I0313 10:29:28.294485 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n4z22" podUID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerName="registry-server" probeResult="failure" output=<
Mar 13 10:29:28 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 10:29:28 crc kubenswrapper[4632]: >
Mar 13 10:29:29 crc kubenswrapper[4632]: E0313 10:29:29.355089 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 10:29:29 crc kubenswrapper[4632]: E0313 10:29:29.358331 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 10:29:29 crc kubenswrapper[4632]: E0313 10:29:29.372355 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 10:29:29 crc kubenswrapper[4632]: E0313 10:29:29.372488 4632 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3548645f-4c72-4c75-b1bd-95116d47f6e2" containerName="nova-scheduler-scheduler"
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.581488 4632 generic.go:334] "Generic (PLEG): container finished" podID="71d36cc3-a7e3-47e5-b98f-599dc669ccc5" containerID="e604b5ae6ce92dde6f33a140a99a7c7d5949aebd7f4821ef087f38b50a0e872b" exitCode=0
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.581553 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71d36cc3-a7e3-47e5-b98f-599dc669ccc5","Type":"ContainerDied","Data":"e604b5ae6ce92dde6f33a140a99a7c7d5949aebd7f4821ef087f38b50a0e872b"}
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.581585 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71d36cc3-a7e3-47e5-b98f-599dc669ccc5","Type":"ContainerDied","Data":"85c564a65a9f43d06ad1647efab572b9e533b3cb0feadf64be1f6226a656d6e9"}
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.581599 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85c564a65a9f43d06ad1647efab572b9e533b3cb0feadf64be1f6226a656d6e9"
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.651807 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.742337 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-config-data\") pod \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\" (UID: \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\") "
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.742462 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l5th\" (UniqueName: \"kubernetes.io/projected/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-kube-api-access-8l5th\") pod \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\" (UID: \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\") "
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.742511 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-logs\") pod \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\" (UID: \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\") "
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.742760 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-combined-ca-bundle\") pod \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\" (UID: \"71d36cc3-a7e3-47e5-b98f-599dc669ccc5\") "
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.743152 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-logs" (OuterVolumeSpecName: "logs") pod "71d36cc3-a7e3-47e5-b98f-599dc669ccc5" (UID: "71d36cc3-a7e3-47e5-b98f-599dc669ccc5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.764042 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-kube-api-access-8l5th" (OuterVolumeSpecName: "kube-api-access-8l5th") pod "71d36cc3-a7e3-47e5-b98f-599dc669ccc5" (UID: "71d36cc3-a7e3-47e5-b98f-599dc669ccc5"). InnerVolumeSpecName "kube-api-access-8l5th". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.773669 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71d36cc3-a7e3-47e5-b98f-599dc669ccc5" (UID: "71d36cc3-a7e3-47e5-b98f-599dc669ccc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.776186 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-config-data" (OuterVolumeSpecName: "config-data") pod "71d36cc3-a7e3-47e5-b98f-599dc669ccc5" (UID: "71d36cc3-a7e3-47e5-b98f-599dc669ccc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:29:29 crc kubenswrapper[4632]: E0313 10:29:29.782581 4632 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/364743f09c9046eb7ee9d937ed57e59f3a447250991c2f722abb845ea5ccd856/diff" to get inode usage: stat /var/lib/containers/storage/overlay/364743f09c9046eb7ee9d937ed57e59f3a447250991c2f722abb845ea5ccd856/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_dnsmasq-dns-7888df55c7-mw5p4_904f04cd-8110-4637-8bb4-67c4b83e189b/dnsmasq-dns/0.log" to get inode usage: stat /var/log/pods/openstack_dnsmasq-dns-7888df55c7-mw5p4_904f04cd-8110-4637-8bb4-67c4b83e189b/dnsmasq-dns/0.log: no such file or directory
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.844738 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l5th\" (UniqueName: \"kubernetes.io/projected/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-kube-api-access-8l5th\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.844806 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-logs\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.844824 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.844835 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71d36cc3-a7e3-47e5-b98f-599dc669ccc5-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:29 crc kubenswrapper[4632]: I0313 10:29:29.885604 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.592874 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.615163 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.632659 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.646106 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 13 10:29:30 crc kubenswrapper[4632]: E0313 10:29:30.646742 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71d36cc3-a7e3-47e5-b98f-599dc669ccc5" containerName="nova-api-api"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.646814 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="71d36cc3-a7e3-47e5-b98f-599dc669ccc5" containerName="nova-api-api"
Mar 13 10:29:30 crc kubenswrapper[4632]: E0313 10:29:30.646881 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71d36cc3-a7e3-47e5-b98f-599dc669ccc5" containerName="nova-api-log"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.646929 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="71d36cc3-a7e3-47e5-b98f-599dc669ccc5" containerName="nova-api-log"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.647216 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="71d36cc3-a7e3-47e5-b98f-599dc669ccc5" containerName="nova-api-api"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.647284 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="71d36cc3-a7e3-47e5-b98f-599dc669ccc5" containerName="nova-api-log"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.648656 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.652302 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.673194 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.762006 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de71c6bf-377b-44e8-a5fb-e654b259404f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"de71c6bf-377b-44e8-a5fb-e654b259404f\") " pod="openstack/nova-api-0"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.762065 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de71c6bf-377b-44e8-a5fb-e654b259404f-logs\") pod \"nova-api-0\" (UID: \"de71c6bf-377b-44e8-a5fb-e654b259404f\") " pod="openstack/nova-api-0"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.762160 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjszk\" (UniqueName: \"kubernetes.io/projected/de71c6bf-377b-44e8-a5fb-e654b259404f-kube-api-access-cjszk\") pod \"nova-api-0\" (UID: \"de71c6bf-377b-44e8-a5fb-e654b259404f\") " pod="openstack/nova-api-0"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.762236 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de71c6bf-377b-44e8-a5fb-e654b259404f-config-data\") pod \"nova-api-0\" (UID: \"de71c6bf-377b-44e8-a5fb-e654b259404f\") " pod="openstack/nova-api-0"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.863745 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de71c6bf-377b-44e8-a5fb-e654b259404f-config-data\") pod \"nova-api-0\" (UID: \"de71c6bf-377b-44e8-a5fb-e654b259404f\") " pod="openstack/nova-api-0"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.864216 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de71c6bf-377b-44e8-a5fb-e654b259404f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"de71c6bf-377b-44e8-a5fb-e654b259404f\") " pod="openstack/nova-api-0"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.864337 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de71c6bf-377b-44e8-a5fb-e654b259404f-logs\") pod \"nova-api-0\" (UID: \"de71c6bf-377b-44e8-a5fb-e654b259404f\") " pod="openstack/nova-api-0"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.864526 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjszk\" (UniqueName: \"kubernetes.io/projected/de71c6bf-377b-44e8-a5fb-e654b259404f-kube-api-access-cjszk\") pod \"nova-api-0\" (UID: \"de71c6bf-377b-44e8-a5fb-e654b259404f\") " pod="openstack/nova-api-0"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.864883 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de71c6bf-377b-44e8-a5fb-e654b259404f-logs\") pod \"nova-api-0\" (UID: \"de71c6bf-377b-44e8-a5fb-e654b259404f\") " pod="openstack/nova-api-0"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.871368 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de71c6bf-377b-44e8-a5fb-e654b259404f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"de71c6bf-377b-44e8-a5fb-e654b259404f\") " pod="openstack/nova-api-0"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.875215 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de71c6bf-377b-44e8-a5fb-e654b259404f-config-data\") pod \"nova-api-0\" (UID: \"de71c6bf-377b-44e8-a5fb-e654b259404f\") " pod="openstack/nova-api-0"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.880890 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjszk\" (UniqueName: \"kubernetes.io/projected/de71c6bf-377b-44e8-a5fb-e654b259404f-kube-api-access-cjszk\") pod \"nova-api-0\" (UID: \"de71c6bf-377b-44e8-a5fb-e654b259404f\") " pod="openstack/nova-api-0"
Mar 13 10:29:30 crc kubenswrapper[4632]: I0313 10:29:30.966850 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 13 10:29:31 crc kubenswrapper[4632]: I0313 10:29:31.491588 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 13 10:29:31 crc kubenswrapper[4632]: I0313 10:29:31.606416 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de71c6bf-377b-44e8-a5fb-e654b259404f","Type":"ContainerStarted","Data":"ee45ab9f33fdda93f7c890750739536f8547b9a2cf6264542af4cb74ce30fa4b"}
Mar 13 10:29:32 crc kubenswrapper[4632]: I0313 10:29:32.058222 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71d36cc3-a7e3-47e5-b98f-599dc669ccc5" path="/var/lib/kubelet/pods/71d36cc3-a7e3-47e5-b98f-599dc669ccc5/volumes"
Mar 13 10:29:32 crc kubenswrapper[4632]: I0313 10:29:32.615302 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de71c6bf-377b-44e8-a5fb-e654b259404f","Type":"ContainerStarted","Data":"4fa9439436746dc39d49be92d41774ffb73ecdcce50f69453fca69442efcc0cf"}
Mar 13 10:29:32 crc kubenswrapper[4632]: I0313 10:29:32.615347 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de71c6bf-377b-44e8-a5fb-e654b259404f","Type":"ContainerStarted","Data":"66957b9fbbc860bd8b0b4ba61ac2afb5edc2532051b2d051493e658595c97c89"}
Mar 13 10:29:32 crc kubenswrapper[4632]: I0313 10:29:32.636414 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.636393396 podStartE2EDuration="2.636393396s" podCreationTimestamp="2026-03-13 10:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:29:32.630915172 +0000 UTC m=+1546.653445305" watchObservedRunningTime="2026-03-13 10:29:32.636393396 +0000 UTC m=+1546.658923529"
Mar 13 10:29:33 crc kubenswrapper[4632]: W0313 10:29:33.747489 4632 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod229b0823_7d97_48cf_9b38_188a3f4ecde3.slice/crio-9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod229b0823_7d97_48cf_9b38_188a3f4ecde3.slice/crio-9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2.scope: no such file or directory
Mar 13 10:29:34 crc kubenswrapper[4632]: E0313 10:29:34.023485 4632 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0cabd29_ef3e_4808_8c92_3b032483789e.slice/crio-conmon-71596437e6a2d325baac09e68c289bacf1de16594d0329b6a19a5f5d92099872.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0cabd29_ef3e_4808_8c92_3b032483789e.slice/crio-71596437e6a2d325baac09e68c289bacf1de16594d0329b6a19a5f5d92099872.scope\": RecentStats: unable to find data in memory cache]"
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.345526 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.352234 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 13 10:29:34 crc kubenswrapper[4632]: E0313 10:29:34.357065 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 10:29:34 crc kubenswrapper[4632]: E0313 10:29:34.359382 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 10:29:34 crc kubenswrapper[4632]: E0313 10:29:34.367263 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 10:29:34 crc kubenswrapper[4632]: E0313 10:29:34.367337 4632 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3548645f-4c72-4c75-b1bd-95116d47f6e2" containerName="nova-scheduler-scheduler"
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.478153 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7cq6\" (UniqueName: \"kubernetes.io/projected/229b0823-7d97-48cf-9b38-188a3f4ecde3-kube-api-access-c7cq6\") pod \"229b0823-7d97-48cf-9b38-188a3f4ecde3\" (UID: \"229b0823-7d97-48cf-9b38-188a3f4ecde3\") "
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.478269 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/229b0823-7d97-48cf-9b38-188a3f4ecde3-logs\") pod \"229b0823-7d97-48cf-9b38-188a3f4ecde3\" (UID: \"229b0823-7d97-48cf-9b38-188a3f4ecde3\") "
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.478416 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlgdp\" (UniqueName: \"kubernetes.io/projected/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-kube-api-access-hlgdp\") pod \"ac666e99-860b-4f76-8b34-0ac5d3f67e9e\" (UID: \"ac666e99-860b-4f76-8b34-0ac5d3f67e9e\") "
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.478462 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-combined-ca-bundle\") pod \"ac666e99-860b-4f76-8b34-0ac5d3f67e9e\" (UID: \"ac666e99-860b-4f76-8b34-0ac5d3f67e9e\") "
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.478652 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229b0823-7d97-48cf-9b38-188a3f4ecde3-combined-ca-bundle\") pod \"229b0823-7d97-48cf-9b38-188a3f4ecde3\" (UID: \"229b0823-7d97-48cf-9b38-188a3f4ecde3\") "
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.478697 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-config-data\") pod \"ac666e99-860b-4f76-8b34-0ac5d3f67e9e\" (UID: \"ac666e99-860b-4f76-8b34-0ac5d3f67e9e\") "
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.478743 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b0823-7d97-48cf-9b38-188a3f4ecde3-config-data\") pod \"229b0823-7d97-48cf-9b38-188a3f4ecde3\" (UID: \"229b0823-7d97-48cf-9b38-188a3f4ecde3\") "
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.481051 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/229b0823-7d97-48cf-9b38-188a3f4ecde3-logs" (OuterVolumeSpecName: "logs") pod "229b0823-7d97-48cf-9b38-188a3f4ecde3" (UID: "229b0823-7d97-48cf-9b38-188a3f4ecde3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.481438 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/229b0823-7d97-48cf-9b38-188a3f4ecde3-logs\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.485028 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-kube-api-access-hlgdp" (OuterVolumeSpecName: "kube-api-access-hlgdp") pod "ac666e99-860b-4f76-8b34-0ac5d3f67e9e" (UID: "ac666e99-860b-4f76-8b34-0ac5d3f67e9e"). InnerVolumeSpecName "kube-api-access-hlgdp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.486764 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/229b0823-7d97-48cf-9b38-188a3f4ecde3-kube-api-access-c7cq6" (OuterVolumeSpecName: "kube-api-access-c7cq6") pod "229b0823-7d97-48cf-9b38-188a3f4ecde3" (UID: "229b0823-7d97-48cf-9b38-188a3f4ecde3"). InnerVolumeSpecName "kube-api-access-c7cq6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.520763 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/229b0823-7d97-48cf-9b38-188a3f4ecde3-config-data" (OuterVolumeSpecName: "config-data") pod "229b0823-7d97-48cf-9b38-188a3f4ecde3" (UID: "229b0823-7d97-48cf-9b38-188a3f4ecde3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.525057 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-config-data" (OuterVolumeSpecName: "config-data") pod "ac666e99-860b-4f76-8b34-0ac5d3f67e9e" (UID: "ac666e99-860b-4f76-8b34-0ac5d3f67e9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.527399 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/229b0823-7d97-48cf-9b38-188a3f4ecde3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "229b0823-7d97-48cf-9b38-188a3f4ecde3" (UID: "229b0823-7d97-48cf-9b38-188a3f4ecde3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.536895 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac666e99-860b-4f76-8b34-0ac5d3f67e9e" (UID: "ac666e99-860b-4f76-8b34-0ac5d3f67e9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.583106 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlgdp\" (UniqueName: \"kubernetes.io/projected/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-kube-api-access-hlgdp\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.583409 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.583467 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229b0823-7d97-48cf-9b38-188a3f4ecde3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.583517 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac666e99-860b-4f76-8b34-0ac5d3f67e9e-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.583583 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b0823-7d97-48cf-9b38-188a3f4ecde3-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.583638 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7cq6\" (UniqueName: \"kubernetes.io/projected/229b0823-7d97-48cf-9b38-188a3f4ecde3-kube-api-access-c7cq6\") on node \"crc\" DevicePath \"\""
Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.638986 4632 generic.go:334] "Generic (PLEG): container finished"
podID="229b0823-7d97-48cf-9b38-188a3f4ecde3" containerID="c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5" exitCode=137 Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.639062 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"229b0823-7d97-48cf-9b38-188a3f4ecde3","Type":"ContainerDied","Data":"c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5"} Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.639096 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"229b0823-7d97-48cf-9b38-188a3f4ecde3","Type":"ContainerDied","Data":"e09e0609bf87522d60a55be578e8526df6a87fdb2167dcc1dc7feca3fdedd742"} Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.639115 4632 scope.go:117] "RemoveContainer" containerID="c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.639262 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.646517 4632 generic.go:334] "Generic (PLEG): container finished" podID="ac666e99-860b-4f76-8b34-0ac5d3f67e9e" containerID="a0afd63ece3b9f8f07f13d5cb18fecbc9a68899e19cf0a855e21117da83b326a" exitCode=137 Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.646563 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ac666e99-860b-4f76-8b34-0ac5d3f67e9e","Type":"ContainerDied","Data":"a0afd63ece3b9f8f07f13d5cb18fecbc9a68899e19cf0a855e21117da83b326a"} Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.646587 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ac666e99-860b-4f76-8b34-0ac5d3f67e9e","Type":"ContainerDied","Data":"21ad142ba62995b1f6903534cfcf5ce85e20fd10841d57b3365ce522bedc21e5"} Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.646697 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.683008 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.685551 4632 scope.go:117] "RemoveContainer" containerID="9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.727184 4632 scope.go:117] "RemoveContainer" containerID="c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5" Mar 13 10:29:34 crc kubenswrapper[4632]: E0313 10:29:34.728961 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5\": container with ID starting with c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5 not found: ID does not exist" containerID="c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.729004 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5"} err="failed to get container status \"c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5\": rpc error: code = NotFound desc = could not find container \"c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5\": container with ID starting with c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5 not found: ID does not exist" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.729032 4632 scope.go:117] "RemoveContainer" containerID="9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.734153 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 10:29:34 crc kubenswrapper[4632]: E0313 10:29:34.737287 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2\": container with ID starting with 9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2 not found: ID does not exist" containerID="9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.737341 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2"} err="failed to get container status \"9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2\": rpc error: code = NotFound desc = could not find container \"9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2\": container with ID starting with 9461ca171120686afab14291efb3881e8b77ff7ec5bc19c4bdf3f4e55da83af2 not found: ID does not exist" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.737376 4632 scope.go:117] "RemoveContainer" containerID="a0afd63ece3b9f8f07f13d5cb18fecbc9a68899e19cf0a855e21117da83b326a" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.763147 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.780055 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 10:29:34 crc 
kubenswrapper[4632]: I0313 10:29:34.793087 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 13 10:29:34 crc kubenswrapper[4632]: E0313 10:29:34.793638 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="229b0823-7d97-48cf-9b38-188a3f4ecde3" containerName="nova-metadata-metadata" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.793664 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="229b0823-7d97-48cf-9b38-188a3f4ecde3" containerName="nova-metadata-metadata" Mar 13 10:29:34 crc kubenswrapper[4632]: E0313 10:29:34.793680 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="229b0823-7d97-48cf-9b38-188a3f4ecde3" containerName="nova-metadata-log" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.793689 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="229b0823-7d97-48cf-9b38-188a3f4ecde3" containerName="nova-metadata-log" Mar 13 10:29:34 crc kubenswrapper[4632]: E0313 10:29:34.793739 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac666e99-860b-4f76-8b34-0ac5d3f67e9e" containerName="nova-cell1-novncproxy-novncproxy" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.793751 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac666e99-860b-4f76-8b34-0ac5d3f67e9e" containerName="nova-cell1-novncproxy-novncproxy" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.794000 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="229b0823-7d97-48cf-9b38-188a3f4ecde3" containerName="nova-metadata-log" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.794026 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="229b0823-7d97-48cf-9b38-188a3f4ecde3" containerName="nova-metadata-metadata" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.794048 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac666e99-860b-4f76-8b34-0ac5d3f67e9e" containerName="nova-cell1-novncproxy-novncproxy" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.796579 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.802603 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.802983 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.807334 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.816403 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.822374 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.822708 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.822906 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.828186 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.866214 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.892192 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9zdc\" (UniqueName: \"kubernetes.io/projected/bf01307f-1529-4aa7-95fc-8af84b061970-kube-api-access-j9zdc\") pod \"nova-cell1-novncproxy-0\" (UID: \"bf01307f-1529-4aa7-95fc-8af84b061970\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.892294 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-logs\") pod \"nova-metadata-0\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " pod="openstack/nova-metadata-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.892360 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf01307f-1529-4aa7-95fc-8af84b061970-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bf01307f-1529-4aa7-95fc-8af84b061970\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.892469 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf01307f-1529-4aa7-95fc-8af84b061970-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bf01307f-1529-4aa7-95fc-8af84b061970\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.892535 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf01307f-1529-4aa7-95fc-8af84b061970-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bf01307f-1529-4aa7-95fc-8af84b061970\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.892566 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-config-data\") pod \"nova-metadata-0\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " pod="openstack/nova-metadata-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.892595 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdwzm\" (UniqueName: 
\"kubernetes.io/projected/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-kube-api-access-rdwzm\") pod \"nova-metadata-0\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " pod="openstack/nova-metadata-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.892635 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " pod="openstack/nova-metadata-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.892667 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf01307f-1529-4aa7-95fc-8af84b061970-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bf01307f-1529-4aa7-95fc-8af84b061970\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.892867 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " pod="openstack/nova-metadata-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.894411 4632 scope.go:117] "RemoveContainer" containerID="a0afd63ece3b9f8f07f13d5cb18fecbc9a68899e19cf0a855e21117da83b326a" Mar 13 10:29:34 crc kubenswrapper[4632]: E0313 10:29:34.894777 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0afd63ece3b9f8f07f13d5cb18fecbc9a68899e19cf0a855e21117da83b326a\": container with ID starting with a0afd63ece3b9f8f07f13d5cb18fecbc9a68899e19cf0a855e21117da83b326a not found: ID does not exist" containerID="a0afd63ece3b9f8f07f13d5cb18fecbc9a68899e19cf0a855e21117da83b326a" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.894809 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0afd63ece3b9f8f07f13d5cb18fecbc9a68899e19cf0a855e21117da83b326a"} err="failed to get container status \"a0afd63ece3b9f8f07f13d5cb18fecbc9a68899e19cf0a855e21117da83b326a\": rpc error: code = NotFound desc = could not find container \"a0afd63ece3b9f8f07f13d5cb18fecbc9a68899e19cf0a855e21117da83b326a\": container with ID starting with a0afd63ece3b9f8f07f13d5cb18fecbc9a68899e19cf0a855e21117da83b326a not found: ID does not exist" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.994973 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-logs\") pod \"nova-metadata-0\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " pod="openstack/nova-metadata-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.995234 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf01307f-1529-4aa7-95fc-8af84b061970-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bf01307f-1529-4aa7-95fc-8af84b061970\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.995366 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bf01307f-1529-4aa7-95fc-8af84b061970-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bf01307f-1529-4aa7-95fc-8af84b061970\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.995481 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf01307f-1529-4aa7-95fc-8af84b061970-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bf01307f-1529-4aa7-95fc-8af84b061970\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.995567 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-logs\") pod \"nova-metadata-0\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " pod="openstack/nova-metadata-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.995583 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-config-data\") pod \"nova-metadata-0\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " pod="openstack/nova-metadata-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.995737 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwzm\" (UniqueName: \"kubernetes.io/projected/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-kube-api-access-rdwzm\") pod \"nova-metadata-0\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " pod="openstack/nova-metadata-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.995846 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " pod="openstack/nova-metadata-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.996006 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf01307f-1529-4aa7-95fc-8af84b061970-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bf01307f-1529-4aa7-95fc-8af84b061970\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.996154 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " pod="openstack/nova-metadata-0" Mar 13 10:29:34 crc kubenswrapper[4632]: I0313 10:29:34.996290 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9zdc\" (UniqueName: \"kubernetes.io/projected/bf01307f-1529-4aa7-95fc-8af84b061970-kube-api-access-j9zdc\") pod \"nova-cell1-novncproxy-0\" (UID: \"bf01307f-1529-4aa7-95fc-8af84b061970\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:35 crc kubenswrapper[4632]: I0313 10:29:35.001717 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf01307f-1529-4aa7-95fc-8af84b061970-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bf01307f-1529-4aa7-95fc-8af84b061970\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:35 
crc kubenswrapper[4632]: I0313 10:29:35.001720 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf01307f-1529-4aa7-95fc-8af84b061970-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bf01307f-1529-4aa7-95fc-8af84b061970\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:35 crc kubenswrapper[4632]: I0313 10:29:35.014264 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " pod="openstack/nova-metadata-0" Mar 13 10:29:35 crc kubenswrapper[4632]: I0313 10:29:35.014870 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-config-data\") pod \"nova-metadata-0\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " pod="openstack/nova-metadata-0" Mar 13 10:29:35 crc kubenswrapper[4632]: I0313 10:29:35.018589 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " pod="openstack/nova-metadata-0" Mar 13 10:29:35 crc kubenswrapper[4632]: I0313 10:29:35.018698 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf01307f-1529-4aa7-95fc-8af84b061970-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bf01307f-1529-4aa7-95fc-8af84b061970\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:35 crc kubenswrapper[4632]: I0313 10:29:35.019098 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwzm\" (UniqueName: \"kubernetes.io/projected/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-kube-api-access-rdwzm\") pod \"nova-metadata-0\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " pod="openstack/nova-metadata-0" Mar 13 10:29:35 crc kubenswrapper[4632]: I0313 10:29:35.020637 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf01307f-1529-4aa7-95fc-8af84b061970-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bf01307f-1529-4aa7-95fc-8af84b061970\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:35 crc kubenswrapper[4632]: I0313 10:29:35.035630 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9zdc\" (UniqueName: \"kubernetes.io/projected/bf01307f-1529-4aa7-95fc-8af84b061970-kube-api-access-j9zdc\") pod \"nova-cell1-novncproxy-0\" (UID: \"bf01307f-1529-4aa7-95fc-8af84b061970\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:35 crc kubenswrapper[4632]: I0313 10:29:35.182765 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 10:29:35 crc kubenswrapper[4632]: I0313 10:29:35.198312 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:35 crc kubenswrapper[4632]: I0313 10:29:35.399380 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Mar 13 10:29:35 crc kubenswrapper[4632]: I0313 10:29:35.816794 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 10:29:35 crc kubenswrapper[4632]: I0313 10:29:35.833847 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 10:29:35 crc kubenswrapper[4632]: I0313 10:29:35.857220 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-689764498d-rg7vt" podUID="5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 13 10:29:35 crc kubenswrapper[4632]: W0313 10:29:35.918407 4632 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b3f0cc6_72ae_4738_baee_ce7bc9ef2998.slice/crio-f1168e3921eed21e9e9f3fc46e51a896484996f89685c7e47f03c083dd2451a8.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b3f0cc6_72ae_4738_baee_ce7bc9ef2998.slice/crio-f1168e3921eed21e9e9f3fc46e51a896484996f89685c7e47f03c083dd2451a8.scope: no such file or directory Mar 13 10:29:35 crc kubenswrapper[4632]: W0313 10:29:35.918467 4632 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod229b0823_7d97_48cf_9b38_188a3f4ecde3.slice/crio-conmon-c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod229b0823_7d97_48cf_9b38_188a3f4ecde3.slice/crio-conmon-c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5.scope: no such file or directory Mar 13 10:29:35 crc kubenswrapper[4632]: W0313 10:29:35.918487 4632 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod229b0823_7d97_48cf_9b38_188a3f4ecde3.slice/crio-c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod229b0823_7d97_48cf_9b38_188a3f4ecde3.slice/crio-c37b72b8d98bcfa61d62d487a54c1380919b990c2a3301633445f9ec65005bf5.scope: no such file or directory Mar 13 10:29:35 crc kubenswrapper[4632]: W0313 10:29:35.918505 4632 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b3f0cc6_72ae_4738_baee_ce7bc9ef2998.slice/crio-conmon-29733198b90e2c4e70c2780c9fc89c7f6fe69e2a7c8898353f2770088741eed7.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b3f0cc6_72ae_4738_baee_ce7bc9ef2998.slice/crio-conmon-29733198b90e2c4e70c2780c9fc89c7f6fe69e2a7c8898353f2770088741eed7.scope: no such file or directory Mar 
13 10:29:35 crc kubenswrapper[4632]: W0313 10:29:35.918521 4632 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b3f0cc6_72ae_4738_baee_ce7bc9ef2998.slice/crio-29733198b90e2c4e70c2780c9fc89c7f6fe69e2a7c8898353f2770088741eed7.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b3f0cc6_72ae_4738_baee_ce7bc9ef2998.slice/crio-29733198b90e2c4e70c2780c9fc89c7f6fe69e2a7c8898353f2770088741eed7.scope: no such file or directory Mar 13 10:29:35 crc kubenswrapper[4632]: W0313 10:29:35.935519 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac666e99_860b_4f76_8b34_0ac5d3f67e9e.slice/crio-a0afd63ece3b9f8f07f13d5cb18fecbc9a68899e19cf0a855e21117da83b326a.scope WatchSource:0}: Error finding container a0afd63ece3b9f8f07f13d5cb18fecbc9a68899e19cf0a855e21117da83b326a: Status 404 returned error can't find the container with id a0afd63ece3b9f8f07f13d5cb18fecbc9a68899e19cf0a855e21117da83b326a Mar 13 10:29:35 crc kubenswrapper[4632]: W0313 10:29:35.945008 4632 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71d36cc3_a7e3_47e5_b98f_599dc669ccc5.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71d36cc3_a7e3_47e5_b98f_599dc669ccc5.slice: no such file or directory Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.072457 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="229b0823-7d97-48cf-9b38-188a3f4ecde3" path="/var/lib/kubelet/pods/229b0823-7d97-48cf-9b38-188a3f4ecde3/volumes" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.073289 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac666e99-860b-4f76-8b34-0ac5d3f67e9e" path="/var/lib/kubelet/pods/ac666e99-860b-4f76-8b34-0ac5d3f67e9e/volumes" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.316700 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.441899 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tntfg\" (UniqueName: \"kubernetes.io/projected/3548645f-4c72-4c75-b1bd-95116d47f6e2-kube-api-access-tntfg\") pod \"3548645f-4c72-4c75-b1bd-95116d47f6e2\" (UID: \"3548645f-4c72-4c75-b1bd-95116d47f6e2\") " Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.441981 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3548645f-4c72-4c75-b1bd-95116d47f6e2-combined-ca-bundle\") pod \"3548645f-4c72-4c75-b1bd-95116d47f6e2\" (UID: \"3548645f-4c72-4c75-b1bd-95116d47f6e2\") " Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.442029 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3548645f-4c72-4c75-b1bd-95116d47f6e2-config-data\") pod \"3548645f-4c72-4c75-b1bd-95116d47f6e2\" (UID: \"3548645f-4c72-4c75-b1bd-95116d47f6e2\") " Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.451078 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3548645f-4c72-4c75-b1bd-95116d47f6e2-kube-api-access-tntfg" (OuterVolumeSpecName: "kube-api-access-tntfg") pod "3548645f-4c72-4c75-b1bd-95116d47f6e2" (UID: "3548645f-4c72-4c75-b1bd-95116d47f6e2"). InnerVolumeSpecName "kube-api-access-tntfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.484675 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3548645f-4c72-4c75-b1bd-95116d47f6e2-config-data" (OuterVolumeSpecName: "config-data") pod "3548645f-4c72-4c75-b1bd-95116d47f6e2" (UID: "3548645f-4c72-4c75-b1bd-95116d47f6e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.487103 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3548645f-4c72-4c75-b1bd-95116d47f6e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3548645f-4c72-4c75-b1bd-95116d47f6e2" (UID: "3548645f-4c72-4c75-b1bd-95116d47f6e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.544017 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tntfg\" (UniqueName: \"kubernetes.io/projected/3548645f-4c72-4c75-b1bd-95116d47f6e2-kube-api-access-tntfg\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.544062 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3548645f-4c72-4c75-b1bd-95116d47f6e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.544075 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3548645f-4c72-4c75-b1bd-95116d47f6e2-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.690407 4632 generic.go:334] "Generic (PLEG): container finished" podID="3548645f-4c72-4c75-b1bd-95116d47f6e2" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" exitCode=137 Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.690461 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3548645f-4c72-4c75-b1bd-95116d47f6e2","Type":"ContainerDied","Data":"cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713"} Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.690500 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3548645f-4c72-4c75-b1bd-95116d47f6e2","Type":"ContainerDied","Data":"2e867316abee7ad3f39e6bce10fac89a2148bd2f0cb280bc0cfe2baf59687b11"} Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.690516 4632 scope.go:117] "RemoveContainer" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.690443 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.694354 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82ae3a46-0133-43f5-942d-0b9a5b4d59f4","Type":"ContainerStarted","Data":"a8f2d5b445eff9d0451f4c86782cfcac63ef30c3164366e7abf62cc09495ddd8"} Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.694535 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82ae3a46-0133-43f5-942d-0b9a5b4d59f4","Type":"ContainerStarted","Data":"519f8117c2e26f4a0b3f5e7e157a107d38c388ac29acd5faf2f4b4ecf121e55f"} Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.694621 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82ae3a46-0133-43f5-942d-0b9a5b4d59f4","Type":"ContainerStarted","Data":"d71e5927086b73530c1dba7fd1700212b2fab56fed475076dc30c04ba970bcf7"} Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.702641 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bf01307f-1529-4aa7-95fc-8af84b061970","Type":"ContainerStarted","Data":"d38babf4e97712ec891a6f38a9738c7611fa766a76e6a294604ae52bc653604f"} Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.702869 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bf01307f-1529-4aa7-95fc-8af84b061970","Type":"ContainerStarted","Data":"38f25e4565386591341a7ca73af278dbdad469f10eb262b202099e00fa3206fb"} Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.727179 4632 scope.go:117] "RemoveContainer" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" Mar 13 10:29:36 crc kubenswrapper[4632]: E0313 10:29:36.727857 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713\": container with ID starting with cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713 not found: ID does not exist" containerID="cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.727902 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713"} err="failed to get container status \"cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713\": rpc error: code = NotFound desc = could not find container \"cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713\": container with ID starting with cd653c258a170cf57afe5c4a490b6a607d8ebe8fe6115c40d2e2f88f1ae5f713 not found: ID does not exist" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.740382 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.74036559 podStartE2EDuration="2.74036559s" podCreationTimestamp="2026-03-13 10:29:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:29:36.724281679 +0000 UTC m=+1550.746811822" watchObservedRunningTime="2026-03-13 10:29:36.74036559 +0000 UTC m=+1550.762895723" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.750253 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 10:29:36 crc 
kubenswrapper[4632]: I0313 10:29:36.772015 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.776668 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.776645732 podStartE2EDuration="2.776645732s" podCreationTimestamp="2026-03-13 10:29:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:29:36.757048906 +0000 UTC m=+1550.779579039" watchObservedRunningTime="2026-03-13 10:29:36.776645732 +0000 UTC m=+1550.799175865" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.822320 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 10:29:36 crc kubenswrapper[4632]: E0313 10:29:36.823775 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3548645f-4c72-4c75-b1bd-95116d47f6e2" containerName="nova-scheduler-scheduler" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.823796 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3548645f-4c72-4c75-b1bd-95116d47f6e2" containerName="nova-scheduler-scheduler" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.824053 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3548645f-4c72-4c75-b1bd-95116d47f6e2" containerName="nova-scheduler-scheduler" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.826618 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.836798 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.886731 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.953929 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d0c4f9f-780f-42d8-9eee-cb2201034218-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d0c4f9f-780f-42d8-9eee-cb2201034218\") " pod="openstack/nova-scheduler-0" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.954173 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d0c4f9f-780f-42d8-9eee-cb2201034218-config-data\") pod \"nova-scheduler-0\" (UID: \"4d0c4f9f-780f-42d8-9eee-cb2201034218\") " pod="openstack/nova-scheduler-0" Mar 13 10:29:36 crc kubenswrapper[4632]: I0313 10:29:36.954219 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dmpm\" (UniqueName: \"kubernetes.io/projected/4d0c4f9f-780f-42d8-9eee-cb2201034218-kube-api-access-2dmpm\") pod \"nova-scheduler-0\" (UID: \"4d0c4f9f-780f-42d8-9eee-cb2201034218\") " pod="openstack/nova-scheduler-0" Mar 13 10:29:37 crc kubenswrapper[4632]: I0313 10:29:37.055711 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dmpm\" (UniqueName: \"kubernetes.io/projected/4d0c4f9f-780f-42d8-9eee-cb2201034218-kube-api-access-2dmpm\") pod \"nova-scheduler-0\" (UID: \"4d0c4f9f-780f-42d8-9eee-cb2201034218\") " pod="openstack/nova-scheduler-0" Mar 13 10:29:37 crc kubenswrapper[4632]: 
I0313 10:29:37.055817 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d0c4f9f-780f-42d8-9eee-cb2201034218-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d0c4f9f-780f-42d8-9eee-cb2201034218\") " pod="openstack/nova-scheduler-0" Mar 13 10:29:37 crc kubenswrapper[4632]: I0313 10:29:37.056061 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d0c4f9f-780f-42d8-9eee-cb2201034218-config-data\") pod \"nova-scheduler-0\" (UID: \"4d0c4f9f-780f-42d8-9eee-cb2201034218\") " pod="openstack/nova-scheduler-0" Mar 13 10:29:37 crc kubenswrapper[4632]: I0313 10:29:37.060897 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d0c4f9f-780f-42d8-9eee-cb2201034218-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d0c4f9f-780f-42d8-9eee-cb2201034218\") " pod="openstack/nova-scheduler-0" Mar 13 10:29:37 crc kubenswrapper[4632]: I0313 10:29:37.060969 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d0c4f9f-780f-42d8-9eee-cb2201034218-config-data\") pod \"nova-scheduler-0\" (UID: \"4d0c4f9f-780f-42d8-9eee-cb2201034218\") " pod="openstack/nova-scheduler-0" Mar 13 10:29:37 crc kubenswrapper[4632]: I0313 10:29:37.087595 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dmpm\" (UniqueName: \"kubernetes.io/projected/4d0c4f9f-780f-42d8-9eee-cb2201034218-kube-api-access-2dmpm\") pod \"nova-scheduler-0\" (UID: \"4d0c4f9f-780f-42d8-9eee-cb2201034218\") " pod="openstack/nova-scheduler-0" Mar 13 10:29:37 crc kubenswrapper[4632]: I0313 10:29:37.103928 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.214:3000/\": dial tcp 10.217.0.214:3000: connect: connection refused" Mar 13 10:29:37 crc kubenswrapper[4632]: I0313 10:29:37.181164 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 10:29:37 crc kubenswrapper[4632]: I0313 10:29:37.893561 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 10:29:38 crc kubenswrapper[4632]: I0313 10:29:38.082821 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3548645f-4c72-4c75-b1bd-95116d47f6e2" path="/var/lib/kubelet/pods/3548645f-4c72-4c75-b1bd-95116d47f6e2/volumes" Mar 13 10:29:38 crc kubenswrapper[4632]: I0313 10:29:38.413498 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n4z22" podUID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerName="registry-server" probeResult="failure" output=< Mar 13 10:29:38 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:29:38 crc kubenswrapper[4632]: > Mar 13 10:29:38 crc kubenswrapper[4632]: I0313 10:29:38.727299 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d0c4f9f-780f-42d8-9eee-cb2201034218","Type":"ContainerStarted","Data":"9ea4520ca12a1649f0ffe1aaf48fb3759b0ff4ddc87166ec50af120c6fe09b58"} Mar 13 10:29:38 crc kubenswrapper[4632]: I0313 10:29:38.728260 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d0c4f9f-780f-42d8-9eee-cb2201034218","Type":"ContainerStarted","Data":"ccfc717e8149e75cbe225885927d06c595e9efeff8370ba3176af49fbdc5eb3d"} Mar 13 10:29:38 crc kubenswrapper[4632]: I0313 10:29:38.760109 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.760086338 podStartE2EDuration="2.760086338s" podCreationTimestamp="2026-03-13 10:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:29:38.752093583 +0000 UTC m=+1552.774623716" watchObservedRunningTime="2026-03-13 10:29:38.760086338 +0000 UTC m=+1552.782616471" Mar 13 10:29:40 crc kubenswrapper[4632]: I0313 10:29:40.183783 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 13 10:29:40 crc kubenswrapper[4632]: I0313 10:29:40.185108 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 13 10:29:40 crc kubenswrapper[4632]: I0313 10:29:40.198883 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:40 crc kubenswrapper[4632]: I0313 10:29:40.967640 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 10:29:40 crc kubenswrapper[4632]: I0313 10:29:40.968067 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 10:29:42 crc kubenswrapper[4632]: I0313 10:29:42.051177 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="de71c6bf-377b-44e8-a5fb-e654b259404f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 10:29:42 crc kubenswrapper[4632]: I0313 10:29:42.051220 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="de71c6bf-377b-44e8-a5fb-e654b259404f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Mar 13 10:29:42 crc kubenswrapper[4632]: I0313 10:29:42.181754 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 13 10:29:43 crc kubenswrapper[4632]: E0313 10:29:43.281446 4632 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d612d79249ad4601fcba1d21d312ec5600b48bd60b162893d856c65bd99ed9fb/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d612d79249ad4601fcba1d21d312ec5600b48bd60b162893d856c65bd99ed9fb/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_nova-cell0-conductor-0_89c5451e-248e-46eb-ac20-f52c3e3bcdc4/nova-cell0-conductor-conductor/0.log" to get inode usage: stat /var/log/pods/openstack_nova-cell0-conductor-0_89c5451e-248e-46eb-ac20-f52c3e3bcdc4/nova-cell0-conductor-conductor/0.log: no such file or directory Mar 13 10:29:43 crc kubenswrapper[4632]: I0313 10:29:43.831053 4632 generic.go:334] "Generic (PLEG): container finished" podID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerID="b21cf0ac61b24fc322cab134fff86780fa6ec89891d057825ed89069f94de13b" exitCode=137 Mar 13 10:29:43 crc kubenswrapper[4632]: I0313 10:29:43.831186 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aae6e23c-4223-4a5c-8074-0cdfb5d99e78","Type":"ContainerDied","Data":"b21cf0ac61b24fc322cab134fff86780fa6ec89891d057825ed89069f94de13b"} Mar 13 10:29:43 crc kubenswrapper[4632]: I0313 10:29:43.971963 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.011677 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtzmr\" (UniqueName: \"kubernetes.io/projected/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-kube-api-access-vtzmr\") pod \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.011736 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-config-data\") pod \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.011870 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-run-httpd\") pod \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.012051 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-combined-ca-bundle\") pod \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.012096 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-log-httpd\") pod \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.012135 4632 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-scripts\") pod \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.012161 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-sg-core-conf-yaml\") pod \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\" (UID: \"aae6e23c-4223-4a5c-8074-0cdfb5d99e78\") " Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.014222 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aae6e23c-4223-4a5c-8074-0cdfb5d99e78" (UID: "aae6e23c-4223-4a5c-8074-0cdfb5d99e78"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.016576 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aae6e23c-4223-4a5c-8074-0cdfb5d99e78" (UID: "aae6e23c-4223-4a5c-8074-0cdfb5d99e78"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.024809 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-kube-api-access-vtzmr" (OuterVolumeSpecName: "kube-api-access-vtzmr") pod "aae6e23c-4223-4a5c-8074-0cdfb5d99e78" (UID: "aae6e23c-4223-4a5c-8074-0cdfb5d99e78"). InnerVolumeSpecName "kube-api-access-vtzmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.033278 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-scripts" (OuterVolumeSpecName: "scripts") pod "aae6e23c-4223-4a5c-8074-0cdfb5d99e78" (UID: "aae6e23c-4223-4a5c-8074-0cdfb5d99e78"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.091843 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aae6e23c-4223-4a5c-8074-0cdfb5d99e78" (UID: "aae6e23c-4223-4a5c-8074-0cdfb5d99e78"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.114672 4632 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.114716 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.114731 4632 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.114748 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtzmr\" (UniqueName: \"kubernetes.io/projected/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-kube-api-access-vtzmr\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.114757 4632 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.130375 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aae6e23c-4223-4a5c-8074-0cdfb5d99e78" (UID: "aae6e23c-4223-4a5c-8074-0cdfb5d99e78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.186892 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-config-data" (OuterVolumeSpecName: "config-data") pod "aae6e23c-4223-4a5c-8074-0cdfb5d99e78" (UID: "aae6e23c-4223-4a5c-8074-0cdfb5d99e78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.217022 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.217068 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae6e23c-4223-4a5c-8074-0cdfb5d99e78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.843927 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aae6e23c-4223-4a5c-8074-0cdfb5d99e78","Type":"ContainerDied","Data":"ac0a78784a347100c3f7eb503566ca534d51cff4225e41d7e119e9bf5dd8e6ea"} Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.844189 4632 scope.go:117] "RemoveContainer" containerID="604c723ae68fd3213f4723bdf3524877331344576a1c363f42185040543011d1" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.844314 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.898729 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.912782 4632 scope.go:117] "RemoveContainer" containerID="585bc9dbde7bcb94e64b2121baa3ef51bee81b44562ad73e38572b5035710787" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.915785 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.927302 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:29:44 crc kubenswrapper[4632]: E0313 10:29:44.927988 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="sg-core" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.928105 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="sg-core" Mar 13 10:29:44 crc kubenswrapper[4632]: E0313 10:29:44.928198 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="ceilometer-notification-agent" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.928302 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="ceilometer-notification-agent" Mar 13 10:29:44 crc kubenswrapper[4632]: E0313 10:29:44.928389 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="ceilometer-central-agent" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.928470 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="ceilometer-central-agent" Mar 13 10:29:44 crc kubenswrapper[4632]: E0313 10:29:44.928549 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="proxy-httpd" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.928615 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="proxy-httpd" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.929392 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="ceilometer-notification-agent" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.929533 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="sg-core" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.929625 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="ceilometer-central-agent" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.929708 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" containerName="proxy-httpd" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.932502 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.937621 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.937768 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.950670 4632 scope.go:117] "RemoveContainer" containerID="b21cf0ac61b24fc322cab134fff86780fa6ec89891d057825ed89069f94de13b" Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.954316 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:29:44 crc kubenswrapper[4632]: I0313 10:29:44.980638 4632 scope.go:117] "RemoveContainer" containerID="fb2da7115022d9a5776bf008164ce9b7e7fedf403dd765c92f96767951f31f8f" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.035202 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.035274 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff318cc9-cbe7-4357-971a-26c26e8bd269-run-httpd\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.035314 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qm5d\" (UniqueName: \"kubernetes.io/projected/ff318cc9-cbe7-4357-971a-26c26e8bd269-kube-api-access-2qm5d\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.035353 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-scripts\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.035433 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.035504 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-config-data\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.035529 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff318cc9-cbe7-4357-971a-26c26e8bd269-log-httpd\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 
10:29:45.137278 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qm5d\" (UniqueName: \"kubernetes.io/projected/ff318cc9-cbe7-4357-971a-26c26e8bd269-kube-api-access-2qm5d\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.137349 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-scripts\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.137440 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.137538 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-config-data\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.137562 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff318cc9-cbe7-4357-971a-26c26e8bd269-log-httpd\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.137630 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.137660 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff318cc9-cbe7-4357-971a-26c26e8bd269-run-httpd\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.138195 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff318cc9-cbe7-4357-971a-26c26e8bd269-run-httpd\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.140735 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff318cc9-cbe7-4357-971a-26c26e8bd269-log-httpd\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.145325 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.146010 4632 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-scripts\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.146970 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.151442 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-config-data\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.163891 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qm5d\" (UniqueName: \"kubernetes.io/projected/ff318cc9-cbe7-4357-971a-26c26e8bd269-kube-api-access-2qm5d\") pod \"ceilometer-0\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.183779 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.184028 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.199171 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.234197 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.255233 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.788667 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:29:45 crc kubenswrapper[4632]: W0313 10:29:45.799688 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff318cc9_cbe7_4357_971a_26c26e8bd269.slice/crio-2f6d41c40b5de2ff95100617c3719cca8da01adbe4c5436cbe7b0e955e7ff656 WatchSource:0}: Error finding container 2f6d41c40b5de2ff95100617c3719cca8da01adbe4c5436cbe7b0e955e7ff656: Status 404 returned error can't find the container with id 2f6d41c40b5de2ff95100617c3719cca8da01adbe4c5436cbe7b0e955e7ff656 Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.841356 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.864710 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff318cc9-cbe7-4357-971a-26c26e8bd269","Type":"ContainerStarted","Data":"2f6d41c40b5de2ff95100617c3719cca8da01adbe4c5436cbe7b0e955e7ff656"} Mar 13 10:29:45 crc kubenswrapper[4632]: I0313 10:29:45.889748 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.119138 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aae6e23c-4223-4a5c-8074-0cdfb5d99e78" path="/var/lib/kubelet/pods/aae6e23c-4223-4a5c-8074-0cdfb5d99e78/volumes" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.241115 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-ngzsx"] Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.243039 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ngzsx" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.244142 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="82ae3a46-0133-43f5-942d-0b9a5b4d59f4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.244466 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="82ae3a46-0133-43f5-942d-0b9a5b4d59f4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.248340 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.248792 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.253339 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ngzsx"] Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.278753 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-config-data\") pod \"nova-cell1-cell-mapping-ngzsx\" (UID: \"601f3615-5015-486a-bbb5-04c683da6990\") " pod="openstack/nova-cell1-cell-mapping-ngzsx" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.278846 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4fj2\" (UniqueName: \"kubernetes.io/projected/601f3615-5015-486a-bbb5-04c683da6990-kube-api-access-q4fj2\") pod \"nova-cell1-cell-mapping-ngzsx\" (UID: \"601f3615-5015-486a-bbb5-04c683da6990\") " pod="openstack/nova-cell1-cell-mapping-ngzsx" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.278870 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-scripts\") pod \"nova-cell1-cell-mapping-ngzsx\" (UID: \"601f3615-5015-486a-bbb5-04c683da6990\") " pod="openstack/nova-cell1-cell-mapping-ngzsx" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.278888 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ngzsx\" (UID: \"601f3615-5015-486a-bbb5-04c683da6990\") " pod="openstack/nova-cell1-cell-mapping-ngzsx" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.382668 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-config-data\") pod \"nova-cell1-cell-mapping-ngzsx\" (UID: \"601f3615-5015-486a-bbb5-04c683da6990\") " pod="openstack/nova-cell1-cell-mapping-ngzsx" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.382890 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4fj2\" (UniqueName: 
\"kubernetes.io/projected/601f3615-5015-486a-bbb5-04c683da6990-kube-api-access-q4fj2\") pod \"nova-cell1-cell-mapping-ngzsx\" (UID: \"601f3615-5015-486a-bbb5-04c683da6990\") " pod="openstack/nova-cell1-cell-mapping-ngzsx" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.382925 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-scripts\") pod \"nova-cell1-cell-mapping-ngzsx\" (UID: \"601f3615-5015-486a-bbb5-04c683da6990\") " pod="openstack/nova-cell1-cell-mapping-ngzsx" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.383045 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ngzsx\" (UID: \"601f3615-5015-486a-bbb5-04c683da6990\") " pod="openstack/nova-cell1-cell-mapping-ngzsx" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.396255 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-scripts\") pod \"nova-cell1-cell-mapping-ngzsx\" (UID: \"601f3615-5015-486a-bbb5-04c683da6990\") " pod="openstack/nova-cell1-cell-mapping-ngzsx" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.399920 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ngzsx\" (UID: \"601f3615-5015-486a-bbb5-04c683da6990\") " pod="openstack/nova-cell1-cell-mapping-ngzsx" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.401267 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-config-data\") pod \"nova-cell1-cell-mapping-ngzsx\" (UID: \"601f3615-5015-486a-bbb5-04c683da6990\") " pod="openstack/nova-cell1-cell-mapping-ngzsx" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.412071 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4fj2\" (UniqueName: \"kubernetes.io/projected/601f3615-5015-486a-bbb5-04c683da6990-kube-api-access-q4fj2\") pod \"nova-cell1-cell-mapping-ngzsx\" (UID: \"601f3615-5015-486a-bbb5-04c683da6990\") " pod="openstack/nova-cell1-cell-mapping-ngzsx" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.642163 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ngzsx" Mar 13 10:29:46 crc kubenswrapper[4632]: I0313 10:29:46.895683 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff318cc9-cbe7-4357-971a-26c26e8bd269","Type":"ContainerStarted","Data":"e19df6f77d1da54d66347bac1ac445e20a6a2fb793251d7691739401089385b1"} Mar 13 10:29:47 crc kubenswrapper[4632]: I0313 10:29:47.151888 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ngzsx"] Mar 13 10:29:47 crc kubenswrapper[4632]: I0313 10:29:47.182128 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 13 10:29:47 crc kubenswrapper[4632]: I0313 10:29:47.219924 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 13 10:29:47 crc kubenswrapper[4632]: I0313 10:29:47.906398 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ngzsx" event={"ID":"601f3615-5015-486a-bbb5-04c683da6990","Type":"ContainerStarted","Data":"e181311595cfc3a50154df8d12fbc0793d907a3185d962d8a64fc357e0b6ee4f"} Mar 13 10:29:47 crc kubenswrapper[4632]: I0313 10:29:47.906788 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ngzsx" event={"ID":"601f3615-5015-486a-bbb5-04c683da6990","Type":"ContainerStarted","Data":"25ba33658addfd86f3973f0bbd3f7d31da0188bcd4762d3214f38a7e8af5297b"} Mar 13 10:29:47 crc kubenswrapper[4632]: I0313 10:29:47.912366 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff318cc9-cbe7-4357-971a-26c26e8bd269","Type":"ContainerStarted","Data":"ca42f2855aed10e6d2409a495733d8e631d4d8011dee434ee91ebcbee2777ca4"} Mar 13 10:29:47 crc kubenswrapper[4632]: I0313 10:29:47.912420 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff318cc9-cbe7-4357-971a-26c26e8bd269","Type":"ContainerStarted","Data":"cd9daf2ae40dbbc090305eb1d55ec458cd7302b2ff730d20e1f7272ead3f43c4"} Mar 13 10:29:47 crc kubenswrapper[4632]: I0313 10:29:47.930435 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-ngzsx" podStartSLOduration=1.930413961 podStartE2EDuration="1.930413961s" podCreationTimestamp="2026-03-13 10:29:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:29:47.925323789 +0000 UTC m=+1561.947853922" watchObservedRunningTime="2026-03-13 10:29:47.930413961 +0000 UTC m=+1561.952944094" Mar 13 10:29:47 crc kubenswrapper[4632]: I0313 10:29:47.942652 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 13 10:29:48 crc kubenswrapper[4632]: I0313 10:29:48.309840 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n4z22" podUID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerName="registry-server" probeResult="failure" output=< Mar 13 10:29:48 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:29:48 crc kubenswrapper[4632]: > Mar 13 10:29:49 crc kubenswrapper[4632]: I0313 10:29:49.933252 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ff318cc9-cbe7-4357-971a-26c26e8bd269","Type":"ContainerStarted","Data":"e81d93539be5ad8110d788d8abb89dd69059700a39392e03a27cde23af0ce544"} Mar 13 10:29:49 crc kubenswrapper[4632]: I0313 10:29:49.934683 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 13 10:29:49 crc kubenswrapper[4632]: I0313 10:29:49.970212 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.592511212 podStartE2EDuration="5.970189454s" podCreationTimestamp="2026-03-13 10:29:44 +0000 UTC" firstStartedPulling="2026-03-13 10:29:45.834730312 +0000 UTC m=+1559.857260445" lastFinishedPulling="2026-03-13 10:29:49.212408554 +0000 UTC m=+1563.234938687" observedRunningTime="2026-03-13 10:29:49.968278078 +0000 UTC m=+1563.990808231" watchObservedRunningTime="2026-03-13 10:29:49.970189454 +0000 UTC m=+1563.992719597" Mar 13 10:29:50 crc kubenswrapper[4632]: I0313 10:29:50.401184 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:29:50 crc kubenswrapper[4632]: I0313 10:29:50.401289 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:29:50 crc kubenswrapper[4632]: I0313 10:29:50.402593 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30"} pod="openstack/horizon-7bdb5f7878-ng2k2" containerMessage="Container horizon failed startup probe, will be restarted" Mar 13 10:29:50 crc kubenswrapper[4632]: I0313 10:29:50.402647 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" containerID="cri-o://0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30" gracePeriod=30 Mar 13 10:29:50 crc kubenswrapper[4632]: I0313 10:29:50.446395 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:29:50 crc kubenswrapper[4632]: I0313 10:29:50.973203 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 13 10:29:50 crc kubenswrapper[4632]: I0313 10:29:50.974158 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 13 10:29:50 crc kubenswrapper[4632]: I0313 10:29:50.977514 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 13 10:29:50 crc kubenswrapper[4632]: I0313 10:29:50.986873 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 13 10:29:51 crc kubenswrapper[4632]: I0313 10:29:51.952209 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 13 10:29:51 crc kubenswrapper[4632]: I0313 10:29:51.955738 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.184733 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-564797cccc-84dg2"] 
Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.186925 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.205140 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djmgg\" (UniqueName: \"kubernetes.io/projected/ac568760-fbe3-49ca-af4a-13f7780a1ad2-kube-api-access-djmgg\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.205206 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-ovsdbserver-nb\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.205237 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-config\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.205278 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-dns-swift-storage-0\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.205307 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-dns-svc\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.205342 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-ovsdbserver-sb\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.223290 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-564797cccc-84dg2"] Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.306745 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-config\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.306838 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-dns-swift-storage-0\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " 
pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.306886 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-dns-svc\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.306957 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-ovsdbserver-sb\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.307078 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djmgg\" (UniqueName: \"kubernetes.io/projected/ac568760-fbe3-49ca-af4a-13f7780a1ad2-kube-api-access-djmgg\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.307145 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-ovsdbserver-nb\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.308540 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-config\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.308787 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-dns-svc\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.309016 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-ovsdbserver-sb\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.309218 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-dns-swift-storage-0\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.310182 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-ovsdbserver-nb\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.341258 4632 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djmgg\" (UniqueName: \"kubernetes.io/projected/ac568760-fbe3-49ca-af4a-13f7780a1ad2-kube-api-access-djmgg\") pod \"dnsmasq-dns-564797cccc-84dg2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:52 crc kubenswrapper[4632]: I0313 10:29:52.539570 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:53 crc kubenswrapper[4632]: I0313 10:29:53.133310 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-564797cccc-84dg2"] Mar 13 10:29:53 crc kubenswrapper[4632]: I0313 10:29:53.974581 4632 generic.go:334] "Generic (PLEG): container finished" podID="ac568760-fbe3-49ca-af4a-13f7780a1ad2" containerID="7c9783dd40660c9e8665537c8ead9f633309987f7dedc616633d346075b3da86" exitCode=0 Mar 13 10:29:53 crc kubenswrapper[4632]: I0313 10:29:53.974640 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564797cccc-84dg2" event={"ID":"ac568760-fbe3-49ca-af4a-13f7780a1ad2","Type":"ContainerDied","Data":"7c9783dd40660c9e8665537c8ead9f633309987f7dedc616633d346075b3da86"} Mar 13 10:29:53 crc kubenswrapper[4632]: I0313 10:29:53.974973 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564797cccc-84dg2" event={"ID":"ac568760-fbe3-49ca-af4a-13f7780a1ad2","Type":"ContainerStarted","Data":"f114a94f0fb42ce2c1f69bef8fad045098717f536b5431da20286872b08fed02"} Mar 13 10:29:54 crc kubenswrapper[4632]: I0313 10:29:54.823018 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-689764498d-rg7vt" Mar 13 10:29:54 crc kubenswrapper[4632]: I0313 10:29:54.895480 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bdb5f7878-ng2k2"] Mar 13 10:29:55 crc kubenswrapper[4632]: I0313 10:29:55.010331 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564797cccc-84dg2" event={"ID":"ac568760-fbe3-49ca-af4a-13f7780a1ad2","Type":"ContainerStarted","Data":"3807149ca5beac08d142f3e5ffa3b80f5bf9a97b93a119f317229b5a8536c4a3"} Mar 13 10:29:55 crc kubenswrapper[4632]: I0313 10:29:55.011733 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:29:55 crc kubenswrapper[4632]: I0313 10:29:55.029586 4632 generic.go:334] "Generic (PLEG): container finished" podID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerID="0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30" exitCode=0 Mar 13 10:29:55 crc kubenswrapper[4632]: I0313 10:29:55.029632 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb5f7878-ng2k2" event={"ID":"3e4afb91-ce26-4325-89c9-2542da2ec48a","Type":"ContainerDied","Data":"0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30"} Mar 13 10:29:55 crc kubenswrapper[4632]: I0313 10:29:55.029659 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb5f7878-ng2k2" event={"ID":"3e4afb91-ce26-4325-89c9-2542da2ec48a","Type":"ContainerStarted","Data":"d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35"} Mar 13 10:29:55 crc kubenswrapper[4632]: I0313 10:29:55.029677 4632 scope.go:117] "RemoveContainer" containerID="2dbb3ede37abc9f5b483ae48b13ac3ed8913ac4529c34c39494b1541e21ce00b" Mar 13 10:29:55 crc kubenswrapper[4632]: I0313 10:29:55.029906 4632 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon-log" containerID="cri-o://0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589" gracePeriod=30 Mar 13 10:29:55 crc kubenswrapper[4632]: I0313 10:29:55.030077 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bdb5f7878-ng2k2" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" containerID="cri-o://d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35" gracePeriod=30 Mar 13 10:29:55 crc kubenswrapper[4632]: I0313 10:29:55.049350 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-564797cccc-84dg2" podStartSLOduration=3.049331597 podStartE2EDuration="3.049331597s" podCreationTimestamp="2026-03-13 10:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:29:55.029458338 +0000 UTC m=+1569.051988471" watchObservedRunningTime="2026-03-13 10:29:55.049331597 +0000 UTC m=+1569.071861730" Mar 13 10:29:55 crc kubenswrapper[4632]: I0313 10:29:55.192815 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 13 10:29:55 crc kubenswrapper[4632]: I0313 10:29:55.220456 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 10:29:55 crc kubenswrapper[4632]: I0313 10:29:55.220741 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="de71c6bf-377b-44e8-a5fb-e654b259404f" containerName="nova-api-log" containerID="cri-o://66957b9fbbc860bd8b0b4ba61ac2afb5edc2532051b2d051493e658595c97c89" gracePeriod=30 Mar 13 10:29:55 crc kubenswrapper[4632]: I0313 10:29:55.221297 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="de71c6bf-377b-44e8-a5fb-e654b259404f" containerName="nova-api-api" containerID="cri-o://4fa9439436746dc39d49be92d41774ffb73ecdcce50f69453fca69442efcc0cf" gracePeriod=30 Mar 13 10:29:55 crc kubenswrapper[4632]: I0313 10:29:55.223361 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 13 10:29:55 crc kubenswrapper[4632]: I0313 10:29:55.223626 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 13 10:29:55 crc kubenswrapper[4632]: I0313 10:29:55.394137 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:29:56 crc kubenswrapper[4632]: I0313 10:29:56.048342 4632 generic.go:334] "Generic (PLEG): container finished" podID="de71c6bf-377b-44e8-a5fb-e654b259404f" containerID="66957b9fbbc860bd8b0b4ba61ac2afb5edc2532051b2d051493e658595c97c89" exitCode=143 Mar 13 10:29:56 crc kubenswrapper[4632]: I0313 10:29:56.061151 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de71c6bf-377b-44e8-a5fb-e654b259404f","Type":"ContainerDied","Data":"66957b9fbbc860bd8b0b4ba61ac2afb5edc2532051b2d051493e658595c97c89"} Mar 13 10:29:56 crc kubenswrapper[4632]: I0313 10:29:56.146501 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 13 10:29:57 crc kubenswrapper[4632]: I0313 10:29:57.098139 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 
10:29:57 crc kubenswrapper[4632]: I0313 10:29:57.098731 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerName="ceilometer-central-agent" containerID="cri-o://e19df6f77d1da54d66347bac1ac445e20a6a2fb793251d7691739401089385b1" gracePeriod=30 Mar 13 10:29:57 crc kubenswrapper[4632]: I0313 10:29:57.100643 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerName="proxy-httpd" containerID="cri-o://e81d93539be5ad8110d788d8abb89dd69059700a39392e03a27cde23af0ce544" gracePeriod=30 Mar 13 10:29:57 crc kubenswrapper[4632]: I0313 10:29:57.100683 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerName="ceilometer-notification-agent" containerID="cri-o://cd9daf2ae40dbbc090305eb1d55ec458cd7302b2ff730d20e1f7272ead3f43c4" gracePeriod=30 Mar 13 10:29:57 crc kubenswrapper[4632]: I0313 10:29:57.100638 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerName="sg-core" containerID="cri-o://ca42f2855aed10e6d2409a495733d8e631d4d8011dee434ee91ebcbee2777ca4" gracePeriod=30 Mar 13 10:29:58 crc kubenswrapper[4632]: I0313 10:29:58.116908 4632 generic.go:334] "Generic (PLEG): container finished" podID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerID="e81d93539be5ad8110d788d8abb89dd69059700a39392e03a27cde23af0ce544" exitCode=0 Mar 13 10:29:58 crc kubenswrapper[4632]: I0313 10:29:58.117203 4632 generic.go:334] "Generic (PLEG): container finished" podID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerID="ca42f2855aed10e6d2409a495733d8e631d4d8011dee434ee91ebcbee2777ca4" exitCode=2 Mar 13 10:29:58 crc kubenswrapper[4632]: I0313 10:29:58.117212 4632 generic.go:334] "Generic (PLEG): container finished" podID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerID="cd9daf2ae40dbbc090305eb1d55ec458cd7302b2ff730d20e1f7272ead3f43c4" exitCode=0 Mar 13 10:29:58 crc kubenswrapper[4632]: I0313 10:29:58.116973 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff318cc9-cbe7-4357-971a-26c26e8bd269","Type":"ContainerDied","Data":"e81d93539be5ad8110d788d8abb89dd69059700a39392e03a27cde23af0ce544"} Mar 13 10:29:58 crc kubenswrapper[4632]: I0313 10:29:58.117273 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff318cc9-cbe7-4357-971a-26c26e8bd269","Type":"ContainerDied","Data":"ca42f2855aed10e6d2409a495733d8e631d4d8011dee434ee91ebcbee2777ca4"} Mar 13 10:29:58 crc kubenswrapper[4632]: I0313 10:29:58.117287 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff318cc9-cbe7-4357-971a-26c26e8bd269","Type":"ContainerDied","Data":"cd9daf2ae40dbbc090305eb1d55ec458cd7302b2ff730d20e1f7272ead3f43c4"} Mar 13 10:29:58 crc kubenswrapper[4632]: I0313 10:29:58.121833 4632 generic.go:334] "Generic (PLEG): container finished" podID="601f3615-5015-486a-bbb5-04c683da6990" containerID="e181311595cfc3a50154df8d12fbc0793d907a3185d962d8a64fc357e0b6ee4f" exitCode=0 Mar 13 10:29:58 crc kubenswrapper[4632]: I0313 10:29:58.122066 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ngzsx" 
event={"ID":"601f3615-5015-486a-bbb5-04c683da6990","Type":"ContainerDied","Data":"e181311595cfc3a50154df8d12fbc0793d907a3185d962d8a64fc357e0b6ee4f"} Mar 13 10:29:58 crc kubenswrapper[4632]: I0313 10:29:58.307536 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n4z22" podUID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerName="registry-server" probeResult="failure" output=< Mar 13 10:29:58 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:29:58 crc kubenswrapper[4632]: > Mar 13 10:29:59 crc kubenswrapper[4632]: I0313 10:29:59.133816 4632 generic.go:334] "Generic (PLEG): container finished" podID="de71c6bf-377b-44e8-a5fb-e654b259404f" containerID="4fa9439436746dc39d49be92d41774ffb73ecdcce50f69453fca69442efcc0cf" exitCode=0 Mar 13 10:29:59 crc kubenswrapper[4632]: I0313 10:29:59.134013 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de71c6bf-377b-44e8-a5fb-e654b259404f","Type":"ContainerDied","Data":"4fa9439436746dc39d49be92d41774ffb73ecdcce50f69453fca69442efcc0cf"} Mar 13 10:29:59 crc kubenswrapper[4632]: I0313 10:29:59.813580 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ngzsx" Mar 13 10:29:59 crc kubenswrapper[4632]: I0313 10:29:59.828642 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-scripts\") pod \"601f3615-5015-486a-bbb5-04c683da6990\" (UID: \"601f3615-5015-486a-bbb5-04c683da6990\") " Mar 13 10:29:59 crc kubenswrapper[4632]: I0313 10:29:59.828751 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-config-data\") pod \"601f3615-5015-486a-bbb5-04c683da6990\" (UID: \"601f3615-5015-486a-bbb5-04c683da6990\") " Mar 13 10:29:59 crc kubenswrapper[4632]: I0313 10:29:59.828826 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-combined-ca-bundle\") pod \"601f3615-5015-486a-bbb5-04c683da6990\" (UID: \"601f3615-5015-486a-bbb5-04c683da6990\") " Mar 13 10:29:59 crc kubenswrapper[4632]: I0313 10:29:59.828867 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4fj2\" (UniqueName: \"kubernetes.io/projected/601f3615-5015-486a-bbb5-04c683da6990-kube-api-access-q4fj2\") pod \"601f3615-5015-486a-bbb5-04c683da6990\" (UID: \"601f3615-5015-486a-bbb5-04c683da6990\") " Mar 13 10:29:59 crc kubenswrapper[4632]: I0313 10:29:59.859350 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-scripts" (OuterVolumeSpecName: "scripts") pod "601f3615-5015-486a-bbb5-04c683da6990" (UID: "601f3615-5015-486a-bbb5-04c683da6990"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:29:59 crc kubenswrapper[4632]: I0313 10:29:59.881219 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/601f3615-5015-486a-bbb5-04c683da6990-kube-api-access-q4fj2" (OuterVolumeSpecName: "kube-api-access-q4fj2") pod "601f3615-5015-486a-bbb5-04c683da6990" (UID: "601f3615-5015-486a-bbb5-04c683da6990"). InnerVolumeSpecName "kube-api-access-q4fj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:29:59 crc kubenswrapper[4632]: I0313 10:29:59.946637 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4fj2\" (UniqueName: \"kubernetes.io/projected/601f3615-5015-486a-bbb5-04c683da6990-kube-api-access-q4fj2\") on node \"crc\" DevicePath \"\"" Mar 13 10:29:59 crc kubenswrapper[4632]: I0313 10:29:59.946686 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.001606 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-config-data" (OuterVolumeSpecName: "config-data") pod "601f3615-5015-486a-bbb5-04c683da6990" (UID: "601f3615-5015-486a-bbb5-04c683da6990"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.006899 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "601f3615-5015-486a-bbb5-04c683da6990" (UID: "601f3615-5015-486a-bbb5-04c683da6990"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.050364 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.051009 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601f3615-5015-486a-bbb5-04c683da6990-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.121494 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.150103 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ngzsx" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.150973 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ngzsx" event={"ID":"601f3615-5015-486a-bbb5-04c683da6990","Type":"ContainerDied","Data":"25ba33658addfd86f3973f0bbd3f7d31da0188bcd4762d3214f38a7e8af5297b"} Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.151009 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25ba33658addfd86f3973f0bbd3f7d31da0188bcd4762d3214f38a7e8af5297b" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.160738 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de71c6bf-377b-44e8-a5fb-e654b259404f","Type":"ContainerDied","Data":"ee45ab9f33fdda93f7c890750739536f8547b9a2cf6264542af4cb74ce30fa4b"} Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.160810 4632 scope.go:117] "RemoveContainer" containerID="4fa9439436746dc39d49be92d41774ffb73ecdcce50f69453fca69442efcc0cf" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.161111 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.170430 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7"] Mar 13 10:30:00 crc kubenswrapper[4632]: E0313 10:30:00.171733 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de71c6bf-377b-44e8-a5fb-e654b259404f" containerName="nova-api-log" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.171791 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="de71c6bf-377b-44e8-a5fb-e654b259404f" containerName="nova-api-log" Mar 13 10:30:00 crc kubenswrapper[4632]: E0313 10:30:00.171861 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de71c6bf-377b-44e8-a5fb-e654b259404f" containerName="nova-api-api" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.171874 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="de71c6bf-377b-44e8-a5fb-e654b259404f" containerName="nova-api-api" Mar 13 10:30:00 crc kubenswrapper[4632]: E0313 10:30:00.171931 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601f3615-5015-486a-bbb5-04c683da6990" containerName="nova-manage" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.171956 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="601f3615-5015-486a-bbb5-04c683da6990" containerName="nova-manage" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.172383 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="de71c6bf-377b-44e8-a5fb-e654b259404f" containerName="nova-api-log" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.172443 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="de71c6bf-377b-44e8-a5fb-e654b259404f" containerName="nova-api-api" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.172466 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="601f3615-5015-486a-bbb5-04c683da6990" containerName="nova-manage" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.180680 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.190194 4632 scope.go:117] "RemoveContainer" containerID="66957b9fbbc860bd8b0b4ba61ac2afb5edc2532051b2d051493e658595c97c89" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.190609 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.198660 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556630-kxrkn"] Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.198766 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.199995 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556630-kxrkn" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.203241 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.208718 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.213953 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.241906 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556630-kxrkn"] Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.255648 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de71c6bf-377b-44e8-a5fb-e654b259404f-logs\") pod \"de71c6bf-377b-44e8-a5fb-e654b259404f\" (UID: \"de71c6bf-377b-44e8-a5fb-e654b259404f\") " Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.255715 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de71c6bf-377b-44e8-a5fb-e654b259404f-config-data\") pod \"de71c6bf-377b-44e8-a5fb-e654b259404f\" (UID: \"de71c6bf-377b-44e8-a5fb-e654b259404f\") " Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.255870 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjszk\" (UniqueName: \"kubernetes.io/projected/de71c6bf-377b-44e8-a5fb-e654b259404f-kube-api-access-cjszk\") pod \"de71c6bf-377b-44e8-a5fb-e654b259404f\" (UID: \"de71c6bf-377b-44e8-a5fb-e654b259404f\") " Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.255924 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de71c6bf-377b-44e8-a5fb-e654b259404f-combined-ca-bundle\") pod \"de71c6bf-377b-44e8-a5fb-e654b259404f\" (UID: \"de71c6bf-377b-44e8-a5fb-e654b259404f\") " Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.260457 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de71c6bf-377b-44e8-a5fb-e654b259404f-logs" (OuterVolumeSpecName: "logs") pod "de71c6bf-377b-44e8-a5fb-e654b259404f" (UID: "de71c6bf-377b-44e8-a5fb-e654b259404f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.285292 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de71c6bf-377b-44e8-a5fb-e654b259404f-kube-api-access-cjszk" (OuterVolumeSpecName: "kube-api-access-cjszk") pod "de71c6bf-377b-44e8-a5fb-e654b259404f" (UID: "de71c6bf-377b-44e8-a5fb-e654b259404f"). InnerVolumeSpecName "kube-api-access-cjszk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.309207 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7"] Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.352756 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de71c6bf-377b-44e8-a5fb-e654b259404f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de71c6bf-377b-44e8-a5fb-e654b259404f" (UID: "de71c6bf-377b-44e8-a5fb-e654b259404f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.358628 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e912f5a7-eb85-4d19-9703-6cd7ff46c810-secret-volume\") pod \"collect-profiles-29556630-kpbz7\" (UID: \"e912f5a7-eb85-4d19-9703-6cd7ff46c810\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.358703 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e912f5a7-eb85-4d19-9703-6cd7ff46c810-config-volume\") pod \"collect-profiles-29556630-kpbz7\" (UID: \"e912f5a7-eb85-4d19-9703-6cd7ff46c810\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.358930 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4884\" (UniqueName: \"kubernetes.io/projected/b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e-kube-api-access-m4884\") pod \"auto-csr-approver-29556630-kxrkn\" (UID: \"b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e\") " pod="openshift-infra/auto-csr-approver-29556630-kxrkn" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.359008 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhzz5\" (UniqueName: \"kubernetes.io/projected/e912f5a7-eb85-4d19-9703-6cd7ff46c810-kube-api-access-bhzz5\") pod \"collect-profiles-29556630-kpbz7\" (UID: \"e912f5a7-eb85-4d19-9703-6cd7ff46c810\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.359140 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de71c6bf-377b-44e8-a5fb-e654b259404f-logs\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.359157 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjszk\" (UniqueName: \"kubernetes.io/projected/de71c6bf-377b-44e8-a5fb-e654b259404f-kube-api-access-cjszk\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.359171 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de71c6bf-377b-44e8-a5fb-e654b259404f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.367065 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de71c6bf-377b-44e8-a5fb-e654b259404f-config-data" (OuterVolumeSpecName: "config-data") pod 
"de71c6bf-377b-44e8-a5fb-e654b259404f" (UID: "de71c6bf-377b-44e8-a5fb-e654b259404f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.460636 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4884\" (UniqueName: \"kubernetes.io/projected/b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e-kube-api-access-m4884\") pod \"auto-csr-approver-29556630-kxrkn\" (UID: \"b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e\") " pod="openshift-infra/auto-csr-approver-29556630-kxrkn" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.460699 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhzz5\" (UniqueName: \"kubernetes.io/projected/e912f5a7-eb85-4d19-9703-6cd7ff46c810-kube-api-access-bhzz5\") pod \"collect-profiles-29556630-kpbz7\" (UID: \"e912f5a7-eb85-4d19-9703-6cd7ff46c810\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.460759 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e912f5a7-eb85-4d19-9703-6cd7ff46c810-secret-volume\") pod \"collect-profiles-29556630-kpbz7\" (UID: \"e912f5a7-eb85-4d19-9703-6cd7ff46c810\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.460782 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e912f5a7-eb85-4d19-9703-6cd7ff46c810-config-volume\") pod \"collect-profiles-29556630-kpbz7\" (UID: \"e912f5a7-eb85-4d19-9703-6cd7ff46c810\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.460882 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de71c6bf-377b-44e8-a5fb-e654b259404f-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.461752 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e912f5a7-eb85-4d19-9703-6cd7ff46c810-config-volume\") pod \"collect-profiles-29556630-kpbz7\" (UID: \"e912f5a7-eb85-4d19-9703-6cd7ff46c810\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.477281 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.477515 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4d0c4f9f-780f-42d8-9eee-cb2201034218" containerName="nova-scheduler-scheduler" containerID="cri-o://9ea4520ca12a1649f0ffe1aaf48fb3759b0ff4ddc87166ec50af120c6fe09b58" gracePeriod=30 Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.517898 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.519198 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="82ae3a46-0133-43f5-942d-0b9a5b4d59f4" containerName="nova-metadata-log" containerID="cri-o://519f8117c2e26f4a0b3f5e7e157a107d38c388ac29acd5faf2f4b4ecf121e55f" gracePeriod=30 Mar 
13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.519783 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="82ae3a46-0133-43f5-942d-0b9a5b4d59f4" containerName="nova-metadata-metadata" containerID="cri-o://a8f2d5b445eff9d0451f4c86782cfcac63ef30c3164366e7abf62cc09495ddd8" gracePeriod=30 Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.530230 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4884\" (UniqueName: \"kubernetes.io/projected/b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e-kube-api-access-m4884\") pod \"auto-csr-approver-29556630-kxrkn\" (UID: \"b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e\") " pod="openshift-infra/auto-csr-approver-29556630-kxrkn" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.540421 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhzz5\" (UniqueName: \"kubernetes.io/projected/e912f5a7-eb85-4d19-9703-6cd7ff46c810-kube-api-access-bhzz5\") pod \"collect-profiles-29556630-kpbz7\" (UID: \"e912f5a7-eb85-4d19-9703-6cd7ff46c810\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.543048 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e912f5a7-eb85-4d19-9703-6cd7ff46c810-secret-volume\") pod \"collect-profiles-29556630-kpbz7\" (UID: \"e912f5a7-eb85-4d19-9703-6cd7ff46c810\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.557372 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.579732 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556630-kxrkn" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.593511 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.627828 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.671140 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.676508 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.684252 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.684710 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.685435 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.702536 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.773862 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-config-data\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.773925 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr6fw\" (UniqueName: \"kubernetes.io/projected/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-kube-api-access-gr6fw\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.773969 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.774013 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-logs\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.774032 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-public-tls-certs\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.774105 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.877079 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.877252 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-config-data\") pod \"nova-api-0\" (UID: 
\"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.877288 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr6fw\" (UniqueName: \"kubernetes.io/projected/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-kube-api-access-gr6fw\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.877308 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.877348 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-logs\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.877365 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-public-tls-certs\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.881707 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-logs\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.889902 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-public-tls-certs\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.895801 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.896677 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-config-data\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.897267 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 13 10:30:00 crc kubenswrapper[4632]: I0313 10:30:00.926859 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr6fw\" (UniqueName: \"kubernetes.io/projected/3ef77ea1-fee2-432d-9aba-c0acfedb4e69-kube-api-access-gr6fw\") pod \"nova-api-0\" (UID: \"3ef77ea1-fee2-432d-9aba-c0acfedb4e69\") " pod="openstack/nova-api-0" Mar 
13 10:30:01 crc kubenswrapper[4632]: I0313 10:30:01.040698 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 10:30:01 crc kubenswrapper[4632]: I0313 10:30:01.209164 4632 generic.go:334] "Generic (PLEG): container finished" podID="82ae3a46-0133-43f5-942d-0b9a5b4d59f4" containerID="519f8117c2e26f4a0b3f5e7e157a107d38c388ac29acd5faf2f4b4ecf121e55f" exitCode=143 Mar 13 10:30:01 crc kubenswrapper[4632]: I0313 10:30:01.209213 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82ae3a46-0133-43f5-942d-0b9a5b4d59f4","Type":"ContainerDied","Data":"519f8117c2e26f4a0b3f5e7e157a107d38c388ac29acd5faf2f4b4ecf121e55f"} Mar 13 10:30:01 crc kubenswrapper[4632]: I0313 10:30:01.470531 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7"] Mar 13 10:30:01 crc kubenswrapper[4632]: W0313 10:30:01.654321 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0ccb00a_40ce_4b3d_86e8_8f87354c1e8e.slice/crio-4d8d82c51a3c93ea917ffae6d282471f4227fa3a597385ba7ee78758a779abd9 WatchSource:0}: Error finding container 4d8d82c51a3c93ea917ffae6d282471f4227fa3a597385ba7ee78758a779abd9: Status 404 returned error can't find the container with id 4d8d82c51a3c93ea917ffae6d282471f4227fa3a597385ba7ee78758a779abd9 Mar 13 10:30:01 crc kubenswrapper[4632]: I0313 10:30:01.666877 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556630-kxrkn"] Mar 13 10:30:01 crc kubenswrapper[4632]: I0313 10:30:01.838633 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.087962 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de71c6bf-377b-44e8-a5fb-e654b259404f" path="/var/lib/kubelet/pods/de71c6bf-377b-44e8-a5fb-e654b259404f/volumes" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.150280 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: E0313 10:30:02.186442 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9ea4520ca12a1649f0ffe1aaf48fb3759b0ff4ddc87166ec50af120c6fe09b58" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 13 10:30:02 crc kubenswrapper[4632]: E0313 10:30:02.191560 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9ea4520ca12a1649f0ffe1aaf48fb3759b0ff4ddc87166ec50af120c6fe09b58" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 13 10:30:02 crc kubenswrapper[4632]: E0313 10:30:02.198837 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9ea4520ca12a1649f0ffe1aaf48fb3759b0ff4ddc87166ec50af120c6fe09b58" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 13 10:30:02 crc kubenswrapper[4632]: E0313 10:30:02.198929 4632 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4d0c4f9f-780f-42d8-9eee-cb2201034218" containerName="nova-scheduler-scheduler" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.218506 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556630-kxrkn" event={"ID":"b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e","Type":"ContainerStarted","Data":"4d8d82c51a3c93ea917ffae6d282471f4227fa3a597385ba7ee78758a779abd9"} Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.220378 4632 generic.go:334] "Generic (PLEG): container finished" podID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerID="e19df6f77d1da54d66347bac1ac445e20a6a2fb793251d7691739401089385b1" exitCode=0 Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.220417 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff318cc9-cbe7-4357-971a-26c26e8bd269","Type":"ContainerDied","Data":"e19df6f77d1da54d66347bac1ac445e20a6a2fb793251d7691739401089385b1"} Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.220434 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff318cc9-cbe7-4357-971a-26c26e8bd269","Type":"ContainerDied","Data":"2f6d41c40b5de2ff95100617c3719cca8da01adbe4c5436cbe7b0e955e7ff656"} Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.220450 4632 scope.go:117] "RemoveContainer" containerID="e81d93539be5ad8110d788d8abb89dd69059700a39392e03a27cde23af0ce544" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.220584 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.226744 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff318cc9-cbe7-4357-971a-26c26e8bd269-run-httpd\") pod \"ff318cc9-cbe7-4357-971a-26c26e8bd269\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.226884 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-config-data\") pod \"ff318cc9-cbe7-4357-971a-26c26e8bd269\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.226917 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff318cc9-cbe7-4357-971a-26c26e8bd269-log-httpd\") pod \"ff318cc9-cbe7-4357-971a-26c26e8bd269\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.227012 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-scripts\") pod \"ff318cc9-cbe7-4357-971a-26c26e8bd269\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.227043 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-sg-core-conf-yaml\") pod \"ff318cc9-cbe7-4357-971a-26c26e8bd269\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.227071 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-combined-ca-bundle\") pod \"ff318cc9-cbe7-4357-971a-26c26e8bd269\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.227140 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qm5d\" (UniqueName: \"kubernetes.io/projected/ff318cc9-cbe7-4357-971a-26c26e8bd269-kube-api-access-2qm5d\") pod \"ff318cc9-cbe7-4357-971a-26c26e8bd269\" (UID: \"ff318cc9-cbe7-4357-971a-26c26e8bd269\") " Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.227914 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff318cc9-cbe7-4357-971a-26c26e8bd269-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ff318cc9-cbe7-4357-971a-26c26e8bd269" (UID: "ff318cc9-cbe7-4357-971a-26c26e8bd269"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.228077 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff318cc9-cbe7-4357-971a-26c26e8bd269-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ff318cc9-cbe7-4357-971a-26c26e8bd269" (UID: "ff318cc9-cbe7-4357-971a-26c26e8bd269"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.252015 4632 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff318cc9-cbe7-4357-971a-26c26e8bd269-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.253818 4632 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff318cc9-cbe7-4357-971a-26c26e8bd269-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.254187 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" event={"ID":"e912f5a7-eb85-4d19-9703-6cd7ff46c810","Type":"ContainerStarted","Data":"020084ff22e9c174abe1865969844a8ece77dd4c848ac5f03af6af51bccf8643"} Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.254310 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" event={"ID":"e912f5a7-eb85-4d19-9703-6cd7ff46c810","Type":"ContainerStarted","Data":"7a016243d1468faa2397822528bac518deaada14c2e02f5ebb8a0907294cf98e"} Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.261351 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ef77ea1-fee2-432d-9aba-c0acfedb4e69","Type":"ContainerStarted","Data":"da0b238c4fcc59e32638ff3d13934331efeb0e2693bb059a1f899fdb9cf426cf"} Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.267016 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-scripts" (OuterVolumeSpecName: "scripts") pod "ff318cc9-cbe7-4357-971a-26c26e8bd269" (UID: "ff318cc9-cbe7-4357-971a-26c26e8bd269"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.271280 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff318cc9-cbe7-4357-971a-26c26e8bd269-kube-api-access-2qm5d" (OuterVolumeSpecName: "kube-api-access-2qm5d") pod "ff318cc9-cbe7-4357-971a-26c26e8bd269" (UID: "ff318cc9-cbe7-4357-971a-26c26e8bd269"). InnerVolumeSpecName "kube-api-access-2qm5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.280381 4632 scope.go:117] "RemoveContainer" containerID="ca42f2855aed10e6d2409a495733d8e631d4d8011dee434ee91ebcbee2777ca4" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.293391 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" podStartSLOduration=2.293375857 podStartE2EDuration="2.293375857s" podCreationTimestamp="2026-03-13 10:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:30:02.291793618 +0000 UTC m=+1576.314323771" watchObservedRunningTime="2026-03-13 10:30:02.293375857 +0000 UTC m=+1576.315905990" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.344263 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ff318cc9-cbe7-4357-971a-26c26e8bd269" (UID: "ff318cc9-cbe7-4357-971a-26c26e8bd269"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.356365 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.356397 4632 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.356428 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qm5d\" (UniqueName: \"kubernetes.io/projected/ff318cc9-cbe7-4357-971a-26c26e8bd269-kube-api-access-2qm5d\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.357676 4632 scope.go:117] "RemoveContainer" containerID="cd9daf2ae40dbbc090305eb1d55ec458cd7302b2ff730d20e1f7272ead3f43c4" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.414606 4632 scope.go:117] "RemoveContainer" containerID="e19df6f77d1da54d66347bac1ac445e20a6a2fb793251d7691739401089385b1" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.421296 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff318cc9-cbe7-4357-971a-26c26e8bd269" (UID: "ff318cc9-cbe7-4357-971a-26c26e8bd269"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.452456 4632 scope.go:117] "RemoveContainer" containerID="e81d93539be5ad8110d788d8abb89dd69059700a39392e03a27cde23af0ce544" Mar 13 10:30:02 crc kubenswrapper[4632]: E0313 10:30:02.452974 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e81d93539be5ad8110d788d8abb89dd69059700a39392e03a27cde23af0ce544\": container with ID starting with e81d93539be5ad8110d788d8abb89dd69059700a39392e03a27cde23af0ce544 not found: ID does not exist" containerID="e81d93539be5ad8110d788d8abb89dd69059700a39392e03a27cde23af0ce544" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.453105 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e81d93539be5ad8110d788d8abb89dd69059700a39392e03a27cde23af0ce544"} err="failed to get container status \"e81d93539be5ad8110d788d8abb89dd69059700a39392e03a27cde23af0ce544\": rpc error: code = NotFound desc = could not find container \"e81d93539be5ad8110d788d8abb89dd69059700a39392e03a27cde23af0ce544\": container with ID starting with e81d93539be5ad8110d788d8abb89dd69059700a39392e03a27cde23af0ce544 not found: ID does not exist" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.453209 4632 scope.go:117] "RemoveContainer" containerID="ca42f2855aed10e6d2409a495733d8e631d4d8011dee434ee91ebcbee2777ca4" Mar 13 10:30:02 crc kubenswrapper[4632]: E0313 10:30:02.453501 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca42f2855aed10e6d2409a495733d8e631d4d8011dee434ee91ebcbee2777ca4\": container with ID starting with ca42f2855aed10e6d2409a495733d8e631d4d8011dee434ee91ebcbee2777ca4 not found: ID does not exist" containerID="ca42f2855aed10e6d2409a495733d8e631d4d8011dee434ee91ebcbee2777ca4" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.453604 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca42f2855aed10e6d2409a495733d8e631d4d8011dee434ee91ebcbee2777ca4"} err="failed to get container status \"ca42f2855aed10e6d2409a495733d8e631d4d8011dee434ee91ebcbee2777ca4\": rpc error: code = NotFound desc = could not find container \"ca42f2855aed10e6d2409a495733d8e631d4d8011dee434ee91ebcbee2777ca4\": container with ID starting with ca42f2855aed10e6d2409a495733d8e631d4d8011dee434ee91ebcbee2777ca4 not found: ID does not exist" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.453716 4632 scope.go:117] "RemoveContainer" containerID="cd9daf2ae40dbbc090305eb1d55ec458cd7302b2ff730d20e1f7272ead3f43c4" Mar 13 10:30:02 crc kubenswrapper[4632]: E0313 10:30:02.454774 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd9daf2ae40dbbc090305eb1d55ec458cd7302b2ff730d20e1f7272ead3f43c4\": container with ID starting with cd9daf2ae40dbbc090305eb1d55ec458cd7302b2ff730d20e1f7272ead3f43c4 not found: ID does not exist" containerID="cd9daf2ae40dbbc090305eb1d55ec458cd7302b2ff730d20e1f7272ead3f43c4" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.454878 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd9daf2ae40dbbc090305eb1d55ec458cd7302b2ff730d20e1f7272ead3f43c4"} err="failed to get container status \"cd9daf2ae40dbbc090305eb1d55ec458cd7302b2ff730d20e1f7272ead3f43c4\": rpc error: code = NotFound desc = could not 
find container \"cd9daf2ae40dbbc090305eb1d55ec458cd7302b2ff730d20e1f7272ead3f43c4\": container with ID starting with cd9daf2ae40dbbc090305eb1d55ec458cd7302b2ff730d20e1f7272ead3f43c4 not found: ID does not exist" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.455037 4632 scope.go:117] "RemoveContainer" containerID="e19df6f77d1da54d66347bac1ac445e20a6a2fb793251d7691739401089385b1" Mar 13 10:30:02 crc kubenswrapper[4632]: E0313 10:30:02.455899 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e19df6f77d1da54d66347bac1ac445e20a6a2fb793251d7691739401089385b1\": container with ID starting with e19df6f77d1da54d66347bac1ac445e20a6a2fb793251d7691739401089385b1 not found: ID does not exist" containerID="e19df6f77d1da54d66347bac1ac445e20a6a2fb793251d7691739401089385b1" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.455996 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e19df6f77d1da54d66347bac1ac445e20a6a2fb793251d7691739401089385b1"} err="failed to get container status \"e19df6f77d1da54d66347bac1ac445e20a6a2fb793251d7691739401089385b1\": rpc error: code = NotFound desc = could not find container \"e19df6f77d1da54d66347bac1ac445e20a6a2fb793251d7691739401089385b1\": container with ID starting with e19df6f77d1da54d66347bac1ac445e20a6a2fb793251d7691739401089385b1 not found: ID does not exist" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.462467 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.509747 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-config-data" (OuterVolumeSpecName: "config-data") pod "ff318cc9-cbe7-4357-971a-26c26e8bd269" (UID: "ff318cc9-cbe7-4357-971a-26c26e8bd269"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.542165 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.565903 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff318cc9-cbe7-4357-971a-26c26e8bd269-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.566720 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.595032 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.624007 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:30:02 crc kubenswrapper[4632]: E0313 10:30:02.624436 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerName="ceilometer-central-agent" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.624453 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerName="ceilometer-central-agent" Mar 13 10:30:02 crc kubenswrapper[4632]: E0313 10:30:02.624481 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerName="proxy-httpd" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.624488 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerName="proxy-httpd" Mar 13 10:30:02 crc kubenswrapper[4632]: E0313 10:30:02.624499 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerName="sg-core" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.624505 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerName="sg-core" Mar 13 10:30:02 crc kubenswrapper[4632]: E0313 10:30:02.624522 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerName="ceilometer-notification-agent" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.624529 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerName="ceilometer-notification-agent" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.624687 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerName="ceilometer-central-agent" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.624713 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerName="proxy-httpd" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.624738 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerName="sg-core" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.624752 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" containerName="ceilometer-notification-agent" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.626920 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.631471 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.640667 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.641071 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.714011 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75788dd97c-r8qnr"] Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.714358 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" podUID="e0f17959-fde8-4cf1-b255-db5fc3325b70" containerName="dnsmasq-dns" containerID="cri-o://f957b291649cd64b5f0c12f7a4a8a32abd88e0067f00c5ae80a3e106aedde5a8" gracePeriod=10 Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.784022 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-scripts\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.784084 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-config-data\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.784223 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.784266 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvm2p\" (UniqueName: \"kubernetes.io/projected/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-kube-api-access-nvm2p\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.784298 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-run-httpd\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.784360 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-log-httpd\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.784384 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.890272 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.890365 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-run-httpd\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.890391 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvm2p\" (UniqueName: \"kubernetes.io/projected/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-kube-api-access-nvm2p\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.890432 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-log-httpd\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.890453 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.890529 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-scripts\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.890561 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-config-data\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.892538 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-log-httpd\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.892869 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-run-httpd\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.901124 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-scripts\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.901513 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.902830 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.927768 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvm2p\" (UniqueName: \"kubernetes.io/projected/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-kube-api-access-nvm2p\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.936416 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-config-data\") pod \"ceilometer-0\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " pod="openstack/ceilometer-0" Mar 13 10:30:02 crc kubenswrapper[4632]: I0313 10:30:02.967474 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.294044 4632 generic.go:334] "Generic (PLEG): container finished" podID="e912f5a7-eb85-4d19-9703-6cd7ff46c810" containerID="020084ff22e9c174abe1865969844a8ece77dd4c848ac5f03af6af51bccf8643" exitCode=0 Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.294220 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" event={"ID":"e912f5a7-eb85-4d19-9703-6cd7ff46c810","Type":"ContainerDied","Data":"020084ff22e9c174abe1865969844a8ece77dd4c848ac5f03af6af51bccf8643"} Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.307069 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ef77ea1-fee2-432d-9aba-c0acfedb4e69","Type":"ContainerStarted","Data":"6593142dfefa299444df5fb08b472bc35abc684dfbe62b81f00b738f693d0298"} Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.307145 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ef77ea1-fee2-432d-9aba-c0acfedb4e69","Type":"ContainerStarted","Data":"369720f21c546cdcf03c1df03b4e7a408184d133e835b378f918587a8367d7e3"} Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.317983 4632 generic.go:334] "Generic (PLEG): container finished" podID="e0f17959-fde8-4cf1-b255-db5fc3325b70" containerID="f957b291649cd64b5f0c12f7a4a8a32abd88e0067f00c5ae80a3e106aedde5a8" exitCode=0 Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.318049 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" event={"ID":"e0f17959-fde8-4cf1-b255-db5fc3325b70","Type":"ContainerDied","Data":"f957b291649cd64b5f0c12f7a4a8a32abd88e0067f00c5ae80a3e106aedde5a8"} Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 
10:30:03.359124 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.359105698 podStartE2EDuration="3.359105698s" podCreationTimestamp="2026-03-13 10:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:30:03.34720229 +0000 UTC m=+1577.369732423" watchObservedRunningTime="2026-03-13 10:30:03.359105698 +0000 UTC m=+1577.381635841" Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.718594 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.826544 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-dns-svc\") pod \"e0f17959-fde8-4cf1-b255-db5fc3325b70\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.826912 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-dns-swift-storage-0\") pod \"e0f17959-fde8-4cf1-b255-db5fc3325b70\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.827041 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-ovsdbserver-sb\") pod \"e0f17959-fde8-4cf1-b255-db5fc3325b70\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.827079 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc7jm\" (UniqueName: \"kubernetes.io/projected/e0f17959-fde8-4cf1-b255-db5fc3325b70-kube-api-access-pc7jm\") pod \"e0f17959-fde8-4cf1-b255-db5fc3325b70\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.827153 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-ovsdbserver-nb\") pod \"e0f17959-fde8-4cf1-b255-db5fc3325b70\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.827226 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-config\") pod \"e0f17959-fde8-4cf1-b255-db5fc3325b70\" (UID: \"e0f17959-fde8-4cf1-b255-db5fc3325b70\") " Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.871850 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f17959-fde8-4cf1-b255-db5fc3325b70-kube-api-access-pc7jm" (OuterVolumeSpecName: "kube-api-access-pc7jm") pod "e0f17959-fde8-4cf1-b255-db5fc3325b70" (UID: "e0f17959-fde8-4cf1-b255-db5fc3325b70"). InnerVolumeSpecName "kube-api-access-pc7jm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.930797 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc7jm\" (UniqueName: \"kubernetes.io/projected/e0f17959-fde8-4cf1-b255-db5fc3325b70-kube-api-access-pc7jm\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.962209 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e0f17959-fde8-4cf1-b255-db5fc3325b70" (UID: "e0f17959-fde8-4cf1-b255-db5fc3325b70"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:30:03 crc kubenswrapper[4632]: I0313 10:30:03.996417 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-config" (OuterVolumeSpecName: "config") pod "e0f17959-fde8-4cf1-b255-db5fc3325b70" (UID: "e0f17959-fde8-4cf1-b255-db5fc3325b70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.005008 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e0f17959-fde8-4cf1-b255-db5fc3325b70" (UID: "e0f17959-fde8-4cf1-b255-db5fc3325b70"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.007505 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e0f17959-fde8-4cf1-b255-db5fc3325b70" (UID: "e0f17959-fde8-4cf1-b255-db5fc3325b70"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.024546 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.034305 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.034349 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.034358 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.034367 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.075342 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e0f17959-fde8-4cf1-b255-db5fc3325b70" (UID: "e0f17959-fde8-4cf1-b255-db5fc3325b70"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.105333 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff318cc9-cbe7-4357-971a-26c26e8bd269" path="/var/lib/kubelet/pods/ff318cc9-cbe7-4357-971a-26c26e8bd269/volumes" Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.136529 4632 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0f17959-fde8-4cf1-b255-db5fc3325b70-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.328545 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.328548 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75788dd97c-r8qnr" event={"ID":"e0f17959-fde8-4cf1-b255-db5fc3325b70","Type":"ContainerDied","Data":"a2b2fdcf6ef7efc2eca17a814eb5b4394c29b09fe6419666d04ee4759d7660a8"} Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.328664 4632 scope.go:117] "RemoveContainer" containerID="f957b291649cd64b5f0c12f7a4a8a32abd88e0067f00c5ae80a3e106aedde5a8" Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.330485 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b88a-a0cc-4ef1-8267-30f73d9712e7","Type":"ContainerStarted","Data":"43e2d3c1f3ad2fd1a1419876a0a3f1ee25556cf01d81619d7588168a228ea654"} Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.371852 4632 scope.go:117] "RemoveContainer" containerID="c6848744dc1fd449bb0df7b7ca2c04941331806f97abf20c11372e120fb30d31" Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.378203 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75788dd97c-r8qnr"] Mar 13 10:30:04 crc kubenswrapper[4632]: I0313 10:30:04.393539 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75788dd97c-r8qnr"] Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.026756 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.174991 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e912f5a7-eb85-4d19-9703-6cd7ff46c810-secret-volume\") pod \"e912f5a7-eb85-4d19-9703-6cd7ff46c810\" (UID: \"e912f5a7-eb85-4d19-9703-6cd7ff46c810\") " Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.175182 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e912f5a7-eb85-4d19-9703-6cd7ff46c810-config-volume\") pod \"e912f5a7-eb85-4d19-9703-6cd7ff46c810\" (UID: \"e912f5a7-eb85-4d19-9703-6cd7ff46c810\") " Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.175519 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhzz5\" (UniqueName: \"kubernetes.io/projected/e912f5a7-eb85-4d19-9703-6cd7ff46c810-kube-api-access-bhzz5\") pod \"e912f5a7-eb85-4d19-9703-6cd7ff46c810\" (UID: \"e912f5a7-eb85-4d19-9703-6cd7ff46c810\") " Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.175852 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e912f5a7-eb85-4d19-9703-6cd7ff46c810-config-volume" (OuterVolumeSpecName: "config-volume") pod "e912f5a7-eb85-4d19-9703-6cd7ff46c810" (UID: "e912f5a7-eb85-4d19-9703-6cd7ff46c810"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.176517 4632 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e912f5a7-eb85-4d19-9703-6cd7ff46c810-config-volume\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.185553 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e912f5a7-eb85-4d19-9703-6cd7ff46c810-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e912f5a7-eb85-4d19-9703-6cd7ff46c810" (UID: "e912f5a7-eb85-4d19-9703-6cd7ff46c810"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.199777 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e912f5a7-eb85-4d19-9703-6cd7ff46c810-kube-api-access-bhzz5" (OuterVolumeSpecName: "kube-api-access-bhzz5") pod "e912f5a7-eb85-4d19-9703-6cd7ff46c810" (UID: "e912f5a7-eb85-4d19-9703-6cd7ff46c810"). InnerVolumeSpecName "kube-api-access-bhzz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.281160 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.282273 4632 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e912f5a7-eb85-4d19-9703-6cd7ff46c810-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.282315 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhzz5\" (UniqueName: \"kubernetes.io/projected/e912f5a7-eb85-4d19-9703-6cd7ff46c810-kube-api-access-bhzz5\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.372570 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b88a-a0cc-4ef1-8267-30f73d9712e7","Type":"ContainerStarted","Data":"38c2fc76cd7277bb1d4be79e157389e249663d48647565e9e7553c7283f85329"} Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.378355 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" event={"ID":"e912f5a7-eb85-4d19-9703-6cd7ff46c810","Type":"ContainerDied","Data":"7a016243d1468faa2397822528bac518deaada14c2e02f5ebb8a0907294cf98e"} Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.378403 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a016243d1468faa2397822528bac518deaada14c2e02f5ebb8a0907294cf98e" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.378475 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.384703 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-combined-ca-bundle\") pod \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.384804 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-nova-metadata-tls-certs\") pod \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.384927 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-config-data\") pod \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.385058 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdwzm\" (UniqueName: \"kubernetes.io/projected/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-kube-api-access-rdwzm\") pod \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.385100 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-logs\") pod \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\" (UID: \"82ae3a46-0133-43f5-942d-0b9a5b4d59f4\") " Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.386910 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-logs" (OuterVolumeSpecName: "logs") pod "82ae3a46-0133-43f5-942d-0b9a5b4d59f4" (UID: "82ae3a46-0133-43f5-942d-0b9a5b4d59f4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.388019 4632 generic.go:334] "Generic (PLEG): container finished" podID="82ae3a46-0133-43f5-942d-0b9a5b4d59f4" containerID="a8f2d5b445eff9d0451f4c86782cfcac63ef30c3164366e7abf62cc09495ddd8" exitCode=0 Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.388071 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82ae3a46-0133-43f5-942d-0b9a5b4d59f4","Type":"ContainerDied","Data":"a8f2d5b445eff9d0451f4c86782cfcac63ef30c3164366e7abf62cc09495ddd8"} Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.388101 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82ae3a46-0133-43f5-942d-0b9a5b4d59f4","Type":"ContainerDied","Data":"d71e5927086b73530c1dba7fd1700212b2fab56fed475076dc30c04ba970bcf7"} Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.388123 4632 scope.go:117] "RemoveContainer" containerID="a8f2d5b445eff9d0451f4c86782cfcac63ef30c3164366e7abf62cc09495ddd8" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.388327 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.397227 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-kube-api-access-rdwzm" (OuterVolumeSpecName: "kube-api-access-rdwzm") pod "82ae3a46-0133-43f5-942d-0b9a5b4d59f4" (UID: "82ae3a46-0133-43f5-942d-0b9a5b4d59f4"). InnerVolumeSpecName "kube-api-access-rdwzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.457609 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-config-data" (OuterVolumeSpecName: "config-data") pod "82ae3a46-0133-43f5-942d-0b9a5b4d59f4" (UID: "82ae3a46-0133-43f5-942d-0b9a5b4d59f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.464187 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82ae3a46-0133-43f5-942d-0b9a5b4d59f4" (UID: "82ae3a46-0133-43f5-942d-0b9a5b4d59f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.480572 4632 scope.go:117] "RemoveContainer" containerID="519f8117c2e26f4a0b3f5e7e157a107d38c388ac29acd5faf2f4b4ecf121e55f" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.490385 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdwzm\" (UniqueName: \"kubernetes.io/projected/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-kube-api-access-rdwzm\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.490444 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-logs\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.490454 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.490465 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.539155 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "82ae3a46-0133-43f5-942d-0b9a5b4d59f4" (UID: "82ae3a46-0133-43f5-942d-0b9a5b4d59f4"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.558911 4632 scope.go:117] "RemoveContainer" containerID="a8f2d5b445eff9d0451f4c86782cfcac63ef30c3164366e7abf62cc09495ddd8" Mar 13 10:30:05 crc kubenswrapper[4632]: E0313 10:30:05.562451 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8f2d5b445eff9d0451f4c86782cfcac63ef30c3164366e7abf62cc09495ddd8\": container with ID starting with a8f2d5b445eff9d0451f4c86782cfcac63ef30c3164366e7abf62cc09495ddd8 not found: ID does not exist" containerID="a8f2d5b445eff9d0451f4c86782cfcac63ef30c3164366e7abf62cc09495ddd8" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.562499 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8f2d5b445eff9d0451f4c86782cfcac63ef30c3164366e7abf62cc09495ddd8"} err="failed to get container status \"a8f2d5b445eff9d0451f4c86782cfcac63ef30c3164366e7abf62cc09495ddd8\": rpc error: code = NotFound desc = could not find container \"a8f2d5b445eff9d0451f4c86782cfcac63ef30c3164366e7abf62cc09495ddd8\": container with ID starting with a8f2d5b445eff9d0451f4c86782cfcac63ef30c3164366e7abf62cc09495ddd8 not found: ID does not exist" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.562522 4632 scope.go:117] "RemoveContainer" containerID="519f8117c2e26f4a0b3f5e7e157a107d38c388ac29acd5faf2f4b4ecf121e55f" Mar 13 10:30:05 crc kubenswrapper[4632]: E0313 10:30:05.566083 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"519f8117c2e26f4a0b3f5e7e157a107d38c388ac29acd5faf2f4b4ecf121e55f\": container with ID starting with 519f8117c2e26f4a0b3f5e7e157a107d38c388ac29acd5faf2f4b4ecf121e55f not found: ID does not exist" containerID="519f8117c2e26f4a0b3f5e7e157a107d38c388ac29acd5faf2f4b4ecf121e55f" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.566121 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"519f8117c2e26f4a0b3f5e7e157a107d38c388ac29acd5faf2f4b4ecf121e55f"} err="failed to get container status \"519f8117c2e26f4a0b3f5e7e157a107d38c388ac29acd5faf2f4b4ecf121e55f\": rpc error: code = NotFound desc = could not find container \"519f8117c2e26f4a0b3f5e7e157a107d38c388ac29acd5faf2f4b4ecf121e55f\": container with ID starting with 519f8117c2e26f4a0b3f5e7e157a107d38c388ac29acd5faf2f4b4ecf121e55f not found: ID does not exist" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.592413 4632 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82ae3a46-0133-43f5-942d-0b9a5b4d59f4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.728837 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.745423 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.754229 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 13 10:30:05 crc kubenswrapper[4632]: E0313 10:30:05.754618 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f17959-fde8-4cf1-b255-db5fc3325b70" containerName="dnsmasq-dns" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.754634 4632 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e0f17959-fde8-4cf1-b255-db5fc3325b70" containerName="dnsmasq-dns" Mar 13 10:30:05 crc kubenswrapper[4632]: E0313 10:30:05.754645 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ae3a46-0133-43f5-942d-0b9a5b4d59f4" containerName="nova-metadata-metadata" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.754652 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ae3a46-0133-43f5-942d-0b9a5b4d59f4" containerName="nova-metadata-metadata" Mar 13 10:30:05 crc kubenswrapper[4632]: E0313 10:30:05.754666 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f17959-fde8-4cf1-b255-db5fc3325b70" containerName="init" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.754673 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f17959-fde8-4cf1-b255-db5fc3325b70" containerName="init" Mar 13 10:30:05 crc kubenswrapper[4632]: E0313 10:30:05.754696 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ae3a46-0133-43f5-942d-0b9a5b4d59f4" containerName="nova-metadata-log" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.754702 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ae3a46-0133-43f5-942d-0b9a5b4d59f4" containerName="nova-metadata-log" Mar 13 10:30:05 crc kubenswrapper[4632]: E0313 10:30:05.754715 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e912f5a7-eb85-4d19-9703-6cd7ff46c810" containerName="collect-profiles" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.754721 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="e912f5a7-eb85-4d19-9703-6cd7ff46c810" containerName="collect-profiles" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.754898 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ae3a46-0133-43f5-942d-0b9a5b4d59f4" containerName="nova-metadata-metadata" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.754912 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ae3a46-0133-43f5-942d-0b9a5b4d59f4" containerName="nova-metadata-log" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.754931 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="e912f5a7-eb85-4d19-9703-6cd7ff46c810" containerName="collect-profiles" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.754960 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f17959-fde8-4cf1-b255-db5fc3325b70" containerName="dnsmasq-dns" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.755963 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.757847 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.762649 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.775985 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.897746 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b75084d0-782c-4f7e-8cc0-62ac424eec6f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b75084d0-782c-4f7e-8cc0-62ac424eec6f\") " pod="openstack/nova-metadata-0" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.898093 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b75084d0-782c-4f7e-8cc0-62ac424eec6f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b75084d0-782c-4f7e-8cc0-62ac424eec6f\") " pod="openstack/nova-metadata-0" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.898124 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzkds\" (UniqueName: \"kubernetes.io/projected/b75084d0-782c-4f7e-8cc0-62ac424eec6f-kube-api-access-pzkds\") pod \"nova-metadata-0\" (UID: \"b75084d0-782c-4f7e-8cc0-62ac424eec6f\") " pod="openstack/nova-metadata-0" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.898177 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b75084d0-782c-4f7e-8cc0-62ac424eec6f-logs\") pod \"nova-metadata-0\" (UID: \"b75084d0-782c-4f7e-8cc0-62ac424eec6f\") " pod="openstack/nova-metadata-0" Mar 13 10:30:05 crc kubenswrapper[4632]: I0313 10:30:05.898227 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b75084d0-782c-4f7e-8cc0-62ac424eec6f-config-data\") pod \"nova-metadata-0\" (UID: \"b75084d0-782c-4f7e-8cc0-62ac424eec6f\") " pod="openstack/nova-metadata-0" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.000196 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b75084d0-782c-4f7e-8cc0-62ac424eec6f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b75084d0-782c-4f7e-8cc0-62ac424eec6f\") " pod="openstack/nova-metadata-0" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.000310 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b75084d0-782c-4f7e-8cc0-62ac424eec6f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b75084d0-782c-4f7e-8cc0-62ac424eec6f\") " pod="openstack/nova-metadata-0" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.000346 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzkds\" (UniqueName: \"kubernetes.io/projected/b75084d0-782c-4f7e-8cc0-62ac424eec6f-kube-api-access-pzkds\") pod \"nova-metadata-0\" (UID: 
\"b75084d0-782c-4f7e-8cc0-62ac424eec6f\") " pod="openstack/nova-metadata-0" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.000415 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b75084d0-782c-4f7e-8cc0-62ac424eec6f-logs\") pod \"nova-metadata-0\" (UID: \"b75084d0-782c-4f7e-8cc0-62ac424eec6f\") " pod="openstack/nova-metadata-0" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.000486 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b75084d0-782c-4f7e-8cc0-62ac424eec6f-config-data\") pod \"nova-metadata-0\" (UID: \"b75084d0-782c-4f7e-8cc0-62ac424eec6f\") " pod="openstack/nova-metadata-0" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.001062 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b75084d0-782c-4f7e-8cc0-62ac424eec6f-logs\") pod \"nova-metadata-0\" (UID: \"b75084d0-782c-4f7e-8cc0-62ac424eec6f\") " pod="openstack/nova-metadata-0" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.022031 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b75084d0-782c-4f7e-8cc0-62ac424eec6f-config-data\") pod \"nova-metadata-0\" (UID: \"b75084d0-782c-4f7e-8cc0-62ac424eec6f\") " pod="openstack/nova-metadata-0" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.029601 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b75084d0-782c-4f7e-8cc0-62ac424eec6f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b75084d0-782c-4f7e-8cc0-62ac424eec6f\") " pod="openstack/nova-metadata-0" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.038734 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b75084d0-782c-4f7e-8cc0-62ac424eec6f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b75084d0-782c-4f7e-8cc0-62ac424eec6f\") " pod="openstack/nova-metadata-0" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.047074 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzkds\" (UniqueName: \"kubernetes.io/projected/b75084d0-782c-4f7e-8cc0-62ac424eec6f-kube-api-access-pzkds\") pod \"nova-metadata-0\" (UID: \"b75084d0-782c-4f7e-8cc0-62ac424eec6f\") " pod="openstack/nova-metadata-0" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.090336 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.124508 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82ae3a46-0133-43f5-942d-0b9a5b4d59f4" path="/var/lib/kubelet/pods/82ae3a46-0133-43f5-942d-0b9a5b4d59f4/volumes" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.131755 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0f17959-fde8-4cf1-b255-db5fc3325b70" path="/var/lib/kubelet/pods/e0f17959-fde8-4cf1-b255-db5fc3325b70/volumes" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.296730 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.423025 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b88a-a0cc-4ef1-8267-30f73d9712e7","Type":"ContainerStarted","Data":"8fc781f30b1bf552b69e65bd6fbe07dd0ebc73a78af0ab302a58fadf4aa1b637"} Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.443251 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556630-kxrkn" event={"ID":"b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e","Type":"ContainerStarted","Data":"dd1843e80da062d2b847859e60f624eed6f5f23e9e94519edc79cfc924e74d60"} Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.457122 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d0c4f9f-780f-42d8-9eee-cb2201034218-combined-ca-bundle\") pod \"4d0c4f9f-780f-42d8-9eee-cb2201034218\" (UID: \"4d0c4f9f-780f-42d8-9eee-cb2201034218\") " Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.457197 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d0c4f9f-780f-42d8-9eee-cb2201034218-config-data\") pod \"4d0c4f9f-780f-42d8-9eee-cb2201034218\" (UID: \"4d0c4f9f-780f-42d8-9eee-cb2201034218\") " Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.457228 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dmpm\" (UniqueName: \"kubernetes.io/projected/4d0c4f9f-780f-42d8-9eee-cb2201034218-kube-api-access-2dmpm\") pod \"4d0c4f9f-780f-42d8-9eee-cb2201034218\" (UID: \"4d0c4f9f-780f-42d8-9eee-cb2201034218\") " Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.472565 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d0c4f9f-780f-42d8-9eee-cb2201034218-kube-api-access-2dmpm" (OuterVolumeSpecName: "kube-api-access-2dmpm") pod "4d0c4f9f-780f-42d8-9eee-cb2201034218" (UID: "4d0c4f9f-780f-42d8-9eee-cb2201034218"). InnerVolumeSpecName "kube-api-access-2dmpm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.500960 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556630-kxrkn" podStartSLOduration=4.016257042 podStartE2EDuration="6.500920796s" podCreationTimestamp="2026-03-13 10:30:00 +0000 UTC" firstStartedPulling="2026-03-13 10:30:01.661139541 +0000 UTC m=+1575.683669674" lastFinishedPulling="2026-03-13 10:30:04.145803295 +0000 UTC m=+1578.168333428" observedRunningTime="2026-03-13 10:30:06.466824554 +0000 UTC m=+1580.489354687" watchObservedRunningTime="2026-03-13 10:30:06.500920796 +0000 UTC m=+1580.523450929" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.532237 4632 generic.go:334] "Generic (PLEG): container finished" podID="4d0c4f9f-780f-42d8-9eee-cb2201034218" containerID="9ea4520ca12a1649f0ffe1aaf48fb3759b0ff4ddc87166ec50af120c6fe09b58" exitCode=0 Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.532503 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d0c4f9f-780f-42d8-9eee-cb2201034218","Type":"ContainerDied","Data":"9ea4520ca12a1649f0ffe1aaf48fb3759b0ff4ddc87166ec50af120c6fe09b58"} Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.532590 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d0c4f9f-780f-42d8-9eee-cb2201034218","Type":"ContainerDied","Data":"ccfc717e8149e75cbe225885927d06c595e9efeff8370ba3176af49fbdc5eb3d"} Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.532659 4632 scope.go:117] "RemoveContainer" containerID="9ea4520ca12a1649f0ffe1aaf48fb3759b0ff4ddc87166ec50af120c6fe09b58" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.532847 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.569067 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dmpm\" (UniqueName: \"kubernetes.io/projected/4d0c4f9f-780f-42d8-9eee-cb2201034218-kube-api-access-2dmpm\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.573116 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d0c4f9f-780f-42d8-9eee-cb2201034218-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d0c4f9f-780f-42d8-9eee-cb2201034218" (UID: "4d0c4f9f-780f-42d8-9eee-cb2201034218"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.577920 4632 scope.go:117] "RemoveContainer" containerID="9ea4520ca12a1649f0ffe1aaf48fb3759b0ff4ddc87166ec50af120c6fe09b58" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.588302 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d0c4f9f-780f-42d8-9eee-cb2201034218-config-data" (OuterVolumeSpecName: "config-data") pod "4d0c4f9f-780f-42d8-9eee-cb2201034218" (UID: "4d0c4f9f-780f-42d8-9eee-cb2201034218"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:06 crc kubenswrapper[4632]: E0313 10:30:06.589626 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ea4520ca12a1649f0ffe1aaf48fb3759b0ff4ddc87166ec50af120c6fe09b58\": container with ID starting with 9ea4520ca12a1649f0ffe1aaf48fb3759b0ff4ddc87166ec50af120c6fe09b58 not found: ID does not exist" containerID="9ea4520ca12a1649f0ffe1aaf48fb3759b0ff4ddc87166ec50af120c6fe09b58" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.589722 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ea4520ca12a1649f0ffe1aaf48fb3759b0ff4ddc87166ec50af120c6fe09b58"} err="failed to get container status \"9ea4520ca12a1649f0ffe1aaf48fb3759b0ff4ddc87166ec50af120c6fe09b58\": rpc error: code = NotFound desc = could not find container \"9ea4520ca12a1649f0ffe1aaf48fb3759b0ff4ddc87166ec50af120c6fe09b58\": container with ID starting with 9ea4520ca12a1649f0ffe1aaf48fb3759b0ff4ddc87166ec50af120c6fe09b58 not found: ID does not exist" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.677423 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d0c4f9f-780f-42d8-9eee-cb2201034218-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.677455 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d0c4f9f-780f-42d8-9eee-cb2201034218-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:06 crc kubenswrapper[4632]: I0313 10:30:06.798854 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 10:30:06 crc kubenswrapper[4632]: W0313 10:30:06.803710 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb75084d0_782c_4f7e_8cc0_62ac424eec6f.slice/crio-f4c06c100d719e4d44cc6c6c52de3a749fc1ccff65408492392c702aaa713cc3 WatchSource:0}: Error finding container f4c06c100d719e4d44cc6c6c52de3a749fc1ccff65408492392c702aaa713cc3: Status 404 returned error can't find the container with id f4c06c100d719e4d44cc6c6c52de3a749fc1ccff65408492392c702aaa713cc3 Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.038397 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.085013 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.131806 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 10:30:07 crc kubenswrapper[4632]: E0313 10:30:07.136594 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0c4f9f-780f-42d8-9eee-cb2201034218" containerName="nova-scheduler-scheduler" Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.136821 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0c4f9f-780f-42d8-9eee-cb2201034218" containerName="nova-scheduler-scheduler" Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.138673 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d0c4f9f-780f-42d8-9eee-cb2201034218" containerName="nova-scheduler-scheduler" Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.140397 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.155486 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.229980 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.242761 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndpcn\" (UniqueName: \"kubernetes.io/projected/bd274a76-bf05-4f69-8d56-4844012a1fd1-kube-api-access-ndpcn\") pod \"nova-scheduler-0\" (UID: \"bd274a76-bf05-4f69-8d56-4844012a1fd1\") " pod="openstack/nova-scheduler-0" Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.242932 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd274a76-bf05-4f69-8d56-4844012a1fd1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bd274a76-bf05-4f69-8d56-4844012a1fd1\") " pod="openstack/nova-scheduler-0" Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.243093 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd274a76-bf05-4f69-8d56-4844012a1fd1-config-data\") pod \"nova-scheduler-0\" (UID: \"bd274a76-bf05-4f69-8d56-4844012a1fd1\") " pod="openstack/nova-scheduler-0" Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.347258 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd274a76-bf05-4f69-8d56-4844012a1fd1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bd274a76-bf05-4f69-8d56-4844012a1fd1\") " pod="openstack/nova-scheduler-0" Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.347376 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd274a76-bf05-4f69-8d56-4844012a1fd1-config-data\") pod \"nova-scheduler-0\" (UID: \"bd274a76-bf05-4f69-8d56-4844012a1fd1\") " pod="openstack/nova-scheduler-0" Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.347517 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndpcn\" (UniqueName: \"kubernetes.io/projected/bd274a76-bf05-4f69-8d56-4844012a1fd1-kube-api-access-ndpcn\") pod \"nova-scheduler-0\" (UID: \"bd274a76-bf05-4f69-8d56-4844012a1fd1\") " pod="openstack/nova-scheduler-0" Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.360198 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd274a76-bf05-4f69-8d56-4844012a1fd1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bd274a76-bf05-4f69-8d56-4844012a1fd1\") " pod="openstack/nova-scheduler-0" Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.361599 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd274a76-bf05-4f69-8d56-4844012a1fd1-config-data\") pod \"nova-scheduler-0\" (UID: \"bd274a76-bf05-4f69-8d56-4844012a1fd1\") " pod="openstack/nova-scheduler-0" Mar 13 10:30:07 crc kubenswrapper[4632]: E0313 10:30:07.383697 4632 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d0c4f9f_780f_42d8_9eee_cb2201034218.slice/crio-ccfc717e8149e75cbe225885927d06c595e9efeff8370ba3176af49fbdc5eb3d\": RecentStats: unable to find data in memory cache]" Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.387384 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndpcn\" (UniqueName: \"kubernetes.io/projected/bd274a76-bf05-4f69-8d56-4844012a1fd1-kube-api-access-ndpcn\") pod \"nova-scheduler-0\" (UID: \"bd274a76-bf05-4f69-8d56-4844012a1fd1\") " pod="openstack/nova-scheduler-0" Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.543720 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b75084d0-782c-4f7e-8cc0-62ac424eec6f","Type":"ContainerStarted","Data":"f4c06c100d719e4d44cc6c6c52de3a749fc1ccff65408492392c702aaa713cc3"} Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.546554 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b88a-a0cc-4ef1-8267-30f73d9712e7","Type":"ContainerStarted","Data":"2e3874df0599fa71baea0baa935a02f99ef35801e9c669049d94507f046a75da"} Mar 13 10:30:07 crc kubenswrapper[4632]: I0313 10:30:07.555164 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 10:30:08 crc kubenswrapper[4632]: I0313 10:30:08.065030 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d0c4f9f-780f-42d8-9eee-cb2201034218" path="/var/lib/kubelet/pods/4d0c4f9f-780f-42d8-9eee-cb2201034218/volumes" Mar 13 10:30:08 crc kubenswrapper[4632]: I0313 10:30:08.152995 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 10:30:08 crc kubenswrapper[4632]: W0313 10:30:08.163003 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd274a76_bf05_4f69_8d56_4844012a1fd1.slice/crio-39b042e495243a0fff2599056c78a54bf577bf285f31eae823e001fb8087bfca WatchSource:0}: Error finding container 39b042e495243a0fff2599056c78a54bf577bf285f31eae823e001fb8087bfca: Status 404 returned error can't find the container with id 39b042e495243a0fff2599056c78a54bf577bf285f31eae823e001fb8087bfca Mar 13 10:30:08 crc kubenswrapper[4632]: I0313 10:30:08.439847 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n4z22" podUID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerName="registry-server" probeResult="failure" output=< Mar 13 10:30:08 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:30:08 crc kubenswrapper[4632]: > Mar 13 10:30:08 crc kubenswrapper[4632]: I0313 10:30:08.562510 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bd274a76-bf05-4f69-8d56-4844012a1fd1","Type":"ContainerStarted","Data":"fedfdf64dd99493278404ee3a2fb9a63214432489ab8d05fc9862a8fc248f3f8"} Mar 13 10:30:08 crc kubenswrapper[4632]: I0313 10:30:08.562553 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bd274a76-bf05-4f69-8d56-4844012a1fd1","Type":"ContainerStarted","Data":"39b042e495243a0fff2599056c78a54bf577bf285f31eae823e001fb8087bfca"} Mar 13 10:30:08 crc kubenswrapper[4632]: I0313 10:30:08.564967 4632 generic.go:334] "Generic (PLEG): container finished" podID="b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e" 
containerID="dd1843e80da062d2b847859e60f624eed6f5f23e9e94519edc79cfc924e74d60" exitCode=0 Mar 13 10:30:08 crc kubenswrapper[4632]: I0313 10:30:08.565033 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556630-kxrkn" event={"ID":"b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e","Type":"ContainerDied","Data":"dd1843e80da062d2b847859e60f624eed6f5f23e9e94519edc79cfc924e74d60"} Mar 13 10:30:08 crc kubenswrapper[4632]: I0313 10:30:08.566537 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b75084d0-782c-4f7e-8cc0-62ac424eec6f","Type":"ContainerStarted","Data":"b55f59548ce979d4ad220d89c90a338482b481e7c7175939010909518499f902"} Mar 13 10:30:08 crc kubenswrapper[4632]: I0313 10:30:08.566561 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b75084d0-782c-4f7e-8cc0-62ac424eec6f","Type":"ContainerStarted","Data":"a8ac6d9d06c83ccd83d308a76ad43bb6e8c9d0ff988e5799c63c26a39827efbc"} Mar 13 10:30:08 crc kubenswrapper[4632]: I0313 10:30:08.606411 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.6063949210000001 podStartE2EDuration="1.606394921s" podCreationTimestamp="2026-03-13 10:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:30:08.587919656 +0000 UTC m=+1582.610449789" watchObservedRunningTime="2026-03-13 10:30:08.606394921 +0000 UTC m=+1582.628925054" Mar 13 10:30:08 crc kubenswrapper[4632]: I0313 10:30:08.631601 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.631582258 podStartE2EDuration="3.631582258s" podCreationTimestamp="2026-03-13 10:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:30:08.626838824 +0000 UTC m=+1582.649368957" watchObservedRunningTime="2026-03-13 10:30:08.631582258 +0000 UTC m=+1582.654112391" Mar 13 10:30:09 crc kubenswrapper[4632]: I0313 10:30:09.578135 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b88a-a0cc-4ef1-8267-30f73d9712e7","Type":"ContainerStarted","Data":"aafa45b6b50dc62147a1600ded978371569f724c32856206cf7196aba295169c"} Mar 13 10:30:09 crc kubenswrapper[4632]: I0313 10:30:09.622840 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.894984538 podStartE2EDuration="7.622820694s" podCreationTimestamp="2026-03-13 10:30:02 +0000 UTC" firstStartedPulling="2026-03-13 10:30:04.082839488 +0000 UTC m=+1578.105369621" lastFinishedPulling="2026-03-13 10:30:08.810675644 +0000 UTC m=+1582.833205777" observedRunningTime="2026-03-13 10:30:09.609260648 +0000 UTC m=+1583.631790781" watchObservedRunningTime="2026-03-13 10:30:09.622820694 +0000 UTC m=+1583.645350827" Mar 13 10:30:10 crc kubenswrapper[4632]: I0313 10:30:10.095103 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556630-kxrkn" Mar 13 10:30:10 crc kubenswrapper[4632]: I0313 10:30:10.188718 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="82ae3a46-0133-43f5-942d-0b9a5b4d59f4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 10:30:10 crc kubenswrapper[4632]: I0313 10:30:10.189407 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="82ae3a46-0133-43f5-942d-0b9a5b4d59f4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 10:30:10 crc kubenswrapper[4632]: I0313 10:30:10.226498 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4884\" (UniqueName: \"kubernetes.io/projected/b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e-kube-api-access-m4884\") pod \"b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e\" (UID: \"b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e\") " Mar 13 10:30:10 crc kubenswrapper[4632]: I0313 10:30:10.243154 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e-kube-api-access-m4884" (OuterVolumeSpecName: "kube-api-access-m4884") pod "b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e" (UID: "b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e"). InnerVolumeSpecName "kube-api-access-m4884". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:30:10 crc kubenswrapper[4632]: I0313 10:30:10.328561 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4884\" (UniqueName: \"kubernetes.io/projected/b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e-kube-api-access-m4884\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:10 crc kubenswrapper[4632]: I0313 10:30:10.461408 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:30:10 crc kubenswrapper[4632]: I0313 10:30:10.461807 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:30:10 crc kubenswrapper[4632]: I0313 10:30:10.611686 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556630-kxrkn" event={"ID":"b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e","Type":"ContainerDied","Data":"4d8d82c51a3c93ea917ffae6d282471f4227fa3a597385ba7ee78758a779abd9"} Mar 13 10:30:10 crc kubenswrapper[4632]: I0313 10:30:10.614551 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d8d82c51a3c93ea917ffae6d282471f4227fa3a597385ba7ee78758a779abd9" Mar 13 10:30:10 crc kubenswrapper[4632]: I0313 10:30:10.614663 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 13 10:30:10 crc kubenswrapper[4632]: I0313 10:30:10.611733 4632 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556630-kxrkn" Mar 13 10:30:10 crc kubenswrapper[4632]: I0313 10:30:10.692187 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556624-tnl4c"] Mar 13 10:30:10 crc kubenswrapper[4632]: I0313 10:30:10.700911 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556624-tnl4c"] Mar 13 10:30:11 crc kubenswrapper[4632]: I0313 10:30:11.041738 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 10:30:11 crc kubenswrapper[4632]: I0313 10:30:11.041798 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 10:30:11 crc kubenswrapper[4632]: I0313 10:30:11.091804 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 13 10:30:11 crc kubenswrapper[4632]: I0313 10:30:11.092716 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 13 10:30:12 crc kubenswrapper[4632]: I0313 10:30:12.054283 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3ef77ea1-fee2-432d-9aba-c0acfedb4e69" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.227:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:30:12 crc kubenswrapper[4632]: I0313 10:30:12.054318 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3ef77ea1-fee2-432d-9aba-c0acfedb4e69" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.227:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:30:12 crc kubenswrapper[4632]: I0313 10:30:12.059760 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b2374d5-8d19-4837-8d91-79df0e65fc1f" path="/var/lib/kubelet/pods/5b2374d5-8d19-4837-8d91-79df0e65fc1f/volumes" Mar 13 10:30:12 crc kubenswrapper[4632]: I0313 10:30:12.555461 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 13 10:30:16 crc kubenswrapper[4632]: I0313 10:30:16.091393 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 13 10:30:16 crc kubenswrapper[4632]: I0313 10:30:16.091841 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 13 10:30:17 crc kubenswrapper[4632]: I0313 10:30:17.104217 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b75084d0-782c-4f7e-8cc0-62ac424eec6f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:30:17 crc kubenswrapper[4632]: I0313 10:30:17.104525 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b75084d0-782c-4f7e-8cc0-62ac424eec6f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:30:17 crc kubenswrapper[4632]: I0313 10:30:17.314891 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:30:17 crc 
kubenswrapper[4632]: I0313 10:30:17.394425 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:30:17 crc kubenswrapper[4632]: I0313 10:30:17.555632 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 13 10:30:17 crc kubenswrapper[4632]: I0313 10:30:17.586600 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 13 10:30:17 crc kubenswrapper[4632]: I0313 10:30:17.843320 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 13 10:30:18 crc kubenswrapper[4632]: I0313 10:30:18.178900 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n4z22"] Mar 13 10:30:18 crc kubenswrapper[4632]: I0313 10:30:18.819861 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n4z22" podUID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerName="registry-server" containerID="cri-o://d90ad363a11a672d65b86841626f5e66c7cb30326c75375f126c473ca51ca722" gracePeriod=2 Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.431910 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.547173 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0cabd29-ef3e-4808-8c92-3b032483789e-catalog-content\") pod \"d0cabd29-ef3e-4808-8c92-3b032483789e\" (UID: \"d0cabd29-ef3e-4808-8c92-3b032483789e\") " Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.547487 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpwh9\" (UniqueName: \"kubernetes.io/projected/d0cabd29-ef3e-4808-8c92-3b032483789e-kube-api-access-xpwh9\") pod \"d0cabd29-ef3e-4808-8c92-3b032483789e\" (UID: \"d0cabd29-ef3e-4808-8c92-3b032483789e\") " Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.547644 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0cabd29-ef3e-4808-8c92-3b032483789e-utilities\") pod \"d0cabd29-ef3e-4808-8c92-3b032483789e\" (UID: \"d0cabd29-ef3e-4808-8c92-3b032483789e\") " Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.550638 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0cabd29-ef3e-4808-8c92-3b032483789e-utilities" (OuterVolumeSpecName: "utilities") pod "d0cabd29-ef3e-4808-8c92-3b032483789e" (UID: "d0cabd29-ef3e-4808-8c92-3b032483789e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.559529 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0cabd29-ef3e-4808-8c92-3b032483789e-kube-api-access-xpwh9" (OuterVolumeSpecName: "kube-api-access-xpwh9") pod "d0cabd29-ef3e-4808-8c92-3b032483789e" (UID: "d0cabd29-ef3e-4808-8c92-3b032483789e"). InnerVolumeSpecName "kube-api-access-xpwh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.649825 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpwh9\" (UniqueName: \"kubernetes.io/projected/d0cabd29-ef3e-4808-8c92-3b032483789e-kube-api-access-xpwh9\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.649862 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0cabd29-ef3e-4808-8c92-3b032483789e-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.742020 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0cabd29-ef3e-4808-8c92-3b032483789e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0cabd29-ef3e-4808-8c92-3b032483789e" (UID: "d0cabd29-ef3e-4808-8c92-3b032483789e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.752213 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0cabd29-ef3e-4808-8c92-3b032483789e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.830485 4632 generic.go:334] "Generic (PLEG): container finished" podID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerID="d90ad363a11a672d65b86841626f5e66c7cb30326c75375f126c473ca51ca722" exitCode=0 Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.830537 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4z22" event={"ID":"d0cabd29-ef3e-4808-8c92-3b032483789e","Type":"ContainerDied","Data":"d90ad363a11a672d65b86841626f5e66c7cb30326c75375f126c473ca51ca722"} Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.830565 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4z22" event={"ID":"d0cabd29-ef3e-4808-8c92-3b032483789e","Type":"ContainerDied","Data":"dd0fe42db5b99209dcd168810b0996ceb728a9055a395258ce5d2c5e8afe18b9"} Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.830581 4632 scope.go:117] "RemoveContainer" containerID="d90ad363a11a672d65b86841626f5e66c7cb30326c75375f126c473ca51ca722" Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.830695 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n4z22" Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.867983 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n4z22"] Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.876932 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n4z22"] Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.881025 4632 scope.go:117] "RemoveContainer" containerID="71596437e6a2d325baac09e68c289bacf1de16594d0329b6a19a5f5d92099872" Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.916747 4632 scope.go:117] "RemoveContainer" containerID="201e16e880cd2d90fce52e678a22a53fb8633269541a12fab257e648ab87ec2d" Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.962841 4632 scope.go:117] "RemoveContainer" containerID="d90ad363a11a672d65b86841626f5e66c7cb30326c75375f126c473ca51ca722" Mar 13 10:30:19 crc kubenswrapper[4632]: E0313 10:30:19.963501 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d90ad363a11a672d65b86841626f5e66c7cb30326c75375f126c473ca51ca722\": container with ID starting with d90ad363a11a672d65b86841626f5e66c7cb30326c75375f126c473ca51ca722 not found: ID does not exist" containerID="d90ad363a11a672d65b86841626f5e66c7cb30326c75375f126c473ca51ca722" Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.963557 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d90ad363a11a672d65b86841626f5e66c7cb30326c75375f126c473ca51ca722"} err="failed to get container status \"d90ad363a11a672d65b86841626f5e66c7cb30326c75375f126c473ca51ca722\": rpc error: code = NotFound desc = could not find container \"d90ad363a11a672d65b86841626f5e66c7cb30326c75375f126c473ca51ca722\": container with ID starting with d90ad363a11a672d65b86841626f5e66c7cb30326c75375f126c473ca51ca722 not found: ID does not exist" Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.963585 4632 scope.go:117] "RemoveContainer" containerID="71596437e6a2d325baac09e68c289bacf1de16594d0329b6a19a5f5d92099872" Mar 13 10:30:19 crc kubenswrapper[4632]: E0313 10:30:19.964014 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71596437e6a2d325baac09e68c289bacf1de16594d0329b6a19a5f5d92099872\": container with ID starting with 71596437e6a2d325baac09e68c289bacf1de16594d0329b6a19a5f5d92099872 not found: ID does not exist" containerID="71596437e6a2d325baac09e68c289bacf1de16594d0329b6a19a5f5d92099872" Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.964058 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71596437e6a2d325baac09e68c289bacf1de16594d0329b6a19a5f5d92099872"} err="failed to get container status \"71596437e6a2d325baac09e68c289bacf1de16594d0329b6a19a5f5d92099872\": rpc error: code = NotFound desc = could not find container \"71596437e6a2d325baac09e68c289bacf1de16594d0329b6a19a5f5d92099872\": container with ID starting with 71596437e6a2d325baac09e68c289bacf1de16594d0329b6a19a5f5d92099872 not found: ID does not exist" Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.964094 4632 scope.go:117] "RemoveContainer" containerID="201e16e880cd2d90fce52e678a22a53fb8633269541a12fab257e648ab87ec2d" Mar 13 10:30:19 crc kubenswrapper[4632]: E0313 10:30:19.964443 4632 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"201e16e880cd2d90fce52e678a22a53fb8633269541a12fab257e648ab87ec2d\": container with ID starting with 201e16e880cd2d90fce52e678a22a53fb8633269541a12fab257e648ab87ec2d not found: ID does not exist" containerID="201e16e880cd2d90fce52e678a22a53fb8633269541a12fab257e648ab87ec2d" Mar 13 10:30:19 crc kubenswrapper[4632]: I0313 10:30:19.964466 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"201e16e880cd2d90fce52e678a22a53fb8633269541a12fab257e648ab87ec2d"} err="failed to get container status \"201e16e880cd2d90fce52e678a22a53fb8633269541a12fab257e648ab87ec2d\": rpc error: code = NotFound desc = could not find container \"201e16e880cd2d90fce52e678a22a53fb8633269541a12fab257e648ab87ec2d\": container with ID starting with 201e16e880cd2d90fce52e678a22a53fb8633269541a12fab257e648ab87ec2d not found: ID does not exist" Mar 13 10:30:20 crc kubenswrapper[4632]: I0313 10:30:20.058136 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0cabd29-ef3e-4808-8c92-3b032483789e" path="/var/lib/kubelet/pods/d0cabd29-ef3e-4808-8c92-3b032483789e/volumes" Mar 13 10:30:21 crc kubenswrapper[4632]: I0313 10:30:21.049259 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 13 10:30:21 crc kubenswrapper[4632]: I0313 10:30:21.049644 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 13 10:30:21 crc kubenswrapper[4632]: I0313 10:30:21.050021 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 13 10:30:21 crc kubenswrapper[4632]: I0313 10:30:21.050063 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 13 10:30:21 crc kubenswrapper[4632]: I0313 10:30:21.058524 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 13 10:30:21 crc kubenswrapper[4632]: I0313 10:30:21.060843 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.522697 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.599762 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e4afb91-ce26-4325-89c9-2542da2ec48a-logs\") pod \"3e4afb91-ce26-4325-89c9-2542da2ec48a\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.599837 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-combined-ca-bundle\") pod \"3e4afb91-ce26-4325-89c9-2542da2ec48a\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.599865 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-horizon-secret-key\") pod \"3e4afb91-ce26-4325-89c9-2542da2ec48a\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.599931 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntccx\" (UniqueName: \"kubernetes.io/projected/3e4afb91-ce26-4325-89c9-2542da2ec48a-kube-api-access-ntccx\") pod \"3e4afb91-ce26-4325-89c9-2542da2ec48a\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.600053 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e4afb91-ce26-4325-89c9-2542da2ec48a-scripts\") pod \"3e4afb91-ce26-4325-89c9-2542da2ec48a\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.600082 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e4afb91-ce26-4325-89c9-2542da2ec48a-config-data\") pod \"3e4afb91-ce26-4325-89c9-2542da2ec48a\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.600106 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-horizon-tls-certs\") pod \"3e4afb91-ce26-4325-89c9-2542da2ec48a\" (UID: \"3e4afb91-ce26-4325-89c9-2542da2ec48a\") " Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.603687 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e4afb91-ce26-4325-89c9-2542da2ec48a-logs" (OuterVolumeSpecName: "logs") pod "3e4afb91-ce26-4325-89c9-2542da2ec48a" (UID: "3e4afb91-ce26-4325-89c9-2542da2ec48a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.625203 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e4afb91-ce26-4325-89c9-2542da2ec48a-kube-api-access-ntccx" (OuterVolumeSpecName: "kube-api-access-ntccx") pod "3e4afb91-ce26-4325-89c9-2542da2ec48a" (UID: "3e4afb91-ce26-4325-89c9-2542da2ec48a"). InnerVolumeSpecName "kube-api-access-ntccx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.625298 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3e4afb91-ce26-4325-89c9-2542da2ec48a" (UID: "3e4afb91-ce26-4325-89c9-2542da2ec48a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.632341 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4afb91-ce26-4325-89c9-2542da2ec48a-scripts" (OuterVolumeSpecName: "scripts") pod "3e4afb91-ce26-4325-89c9-2542da2ec48a" (UID: "3e4afb91-ce26-4325-89c9-2542da2ec48a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.639036 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e4afb91-ce26-4325-89c9-2542da2ec48a" (UID: "3e4afb91-ce26-4325-89c9-2542da2ec48a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.642253 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4afb91-ce26-4325-89c9-2542da2ec48a-config-data" (OuterVolumeSpecName: "config-data") pod "3e4afb91-ce26-4325-89c9-2542da2ec48a" (UID: "3e4afb91-ce26-4325-89c9-2542da2ec48a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.666551 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "3e4afb91-ce26-4325-89c9-2542da2ec48a" (UID: "3e4afb91-ce26-4325-89c9-2542da2ec48a"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.701553 4632 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e4afb91-ce26-4325-89c9-2542da2ec48a-logs\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.701593 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.701604 4632 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.701644 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntccx\" (UniqueName: \"kubernetes.io/projected/3e4afb91-ce26-4325-89c9-2542da2ec48a-kube-api-access-ntccx\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.701653 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e4afb91-ce26-4325-89c9-2542da2ec48a-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.701661 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e4afb91-ce26-4325-89c9-2542da2ec48a-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.701671 4632 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4afb91-ce26-4325-89c9-2542da2ec48a-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.906708 4632 generic.go:334] "Generic (PLEG): container finished" podID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerID="d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35" exitCode=137 Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.906743 4632 generic.go:334] "Generic (PLEG): container finished" podID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerID="0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589" exitCode=137 Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.906764 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb5f7878-ng2k2" event={"ID":"3e4afb91-ce26-4325-89c9-2542da2ec48a","Type":"ContainerDied","Data":"d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35"} Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.906796 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb5f7878-ng2k2" event={"ID":"3e4afb91-ce26-4325-89c9-2542da2ec48a","Type":"ContainerDied","Data":"0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589"} Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.906795 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bdb5f7878-ng2k2" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.906818 4632 scope.go:117] "RemoveContainer" containerID="d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35" Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.906807 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb5f7878-ng2k2" event={"ID":"3e4afb91-ce26-4325-89c9-2542da2ec48a","Type":"ContainerDied","Data":"aaad122938f426786c8baabdc4555594b0ba0e55f0c39302b9bf84230f06cfd1"} Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.947880 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bdb5f7878-ng2k2"] Mar 13 10:30:25 crc kubenswrapper[4632]: I0313 10:30:25.960022 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7bdb5f7878-ng2k2"] Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.068747 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" path="/var/lib/kubelet/pods/3e4afb91-ce26-4325-89c9-2542da2ec48a/volumes" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.105075 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.108170 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.115208 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.145366 4632 scope.go:117] "RemoveContainer" containerID="0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.381325 4632 scope.go:117] "RemoveContainer" containerID="0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.399545 4632 scope.go:117] "RemoveContainer" containerID="d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35" Mar 13 10:30:26 crc kubenswrapper[4632]: E0313 10:30:26.400116 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35\": container with ID starting with d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35 not found: ID does not exist" containerID="d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.400157 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35"} err="failed to get container status \"d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35\": rpc error: code = NotFound desc = could not find container \"d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35\": container with ID starting with d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35 not found: ID does not exist" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.400182 4632 scope.go:117] "RemoveContainer" containerID="0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30" Mar 13 10:30:26 crc kubenswrapper[4632]: E0313 10:30:26.400692 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30\": container with ID starting with 0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30 not found: ID does not exist" containerID="0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.400723 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30"} err="failed to get container status \"0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30\": rpc error: code = NotFound desc = could not find container \"0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30\": container with ID starting with 0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30 not found: ID does not exist" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.400742 4632 scope.go:117] "RemoveContainer" containerID="0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589" Mar 13 10:30:26 crc kubenswrapper[4632]: E0313 10:30:26.401066 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589\": container with ID starting with 0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589 not found: ID does not exist" containerID="0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.401089 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589"} err="failed to get container status \"0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589\": rpc error: code = NotFound desc = could not find container \"0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589\": container with ID starting with 0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589 not found: ID does not exist" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.401106 4632 scope.go:117] "RemoveContainer" containerID="d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.401333 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35"} err="failed to get container status \"d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35\": rpc error: code = NotFound desc = could not find container \"d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35\": container with ID starting with d57c6cccef4987c7003d38b1c8de63c00e37251e291f0b2d5a1b05218e53dd35 not found: ID does not exist" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.401348 4632 scope.go:117] "RemoveContainer" containerID="0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.401537 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30"} err="failed to get container status \"0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30\": rpc error: code = NotFound desc = could not find container 
\"0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30\": container with ID starting with 0d13a7ba01a78f7619f69655522000449193f4aa62fa0ed0a3c794480484af30 not found: ID does not exist" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.401758 4632 scope.go:117] "RemoveContainer" containerID="0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.402110 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589"} err="failed to get container status \"0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589\": rpc error: code = NotFound desc = could not find container \"0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589\": container with ID starting with 0848706fc94227d25a25441bdfc8b2affad934f1f484b5adb7c4470d7918e589 not found: ID does not exist" Mar 13 10:30:26 crc kubenswrapper[4632]: I0313 10:30:26.920153 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 13 10:30:32 crc kubenswrapper[4632]: I0313 10:30:32.980810 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 13 10:30:37 crc kubenswrapper[4632]: I0313 10:30:37.217159 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 10:30:37 crc kubenswrapper[4632]: I0313 10:30:37.217711 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="21ce0311-ff05-4626-9663-a373ae31eb56" containerName="kube-state-metrics" containerID="cri-o://1a16adc836bd27406a32b6f9e9672d40ce7e70e9caf414e0a9334fb34a8ec7ab" gracePeriod=30 Mar 13 10:30:37 crc kubenswrapper[4632]: I0313 10:30:37.918978 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 13 10:30:37 crc kubenswrapper[4632]: I0313 10:30:37.971559 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfstv\" (UniqueName: \"kubernetes.io/projected/21ce0311-ff05-4626-9663-a373ae31eb56-kube-api-access-hfstv\") pod \"21ce0311-ff05-4626-9663-a373ae31eb56\" (UID: \"21ce0311-ff05-4626-9663-a373ae31eb56\") " Mar 13 10:30:37 crc kubenswrapper[4632]: I0313 10:30:37.990124 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ce0311-ff05-4626-9663-a373ae31eb56-kube-api-access-hfstv" (OuterVolumeSpecName: "kube-api-access-hfstv") pod "21ce0311-ff05-4626-9663-a373ae31eb56" (UID: "21ce0311-ff05-4626-9663-a373ae31eb56"). InnerVolumeSpecName "kube-api-access-hfstv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.075479 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfstv\" (UniqueName: \"kubernetes.io/projected/21ce0311-ff05-4626-9663-a373ae31eb56-kube-api-access-hfstv\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.079551 4632 generic.go:334] "Generic (PLEG): container finished" podID="21ce0311-ff05-4626-9663-a373ae31eb56" containerID="1a16adc836bd27406a32b6f9e9672d40ce7e70e9caf414e0a9334fb34a8ec7ab" exitCode=2 Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.079629 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.124629 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"21ce0311-ff05-4626-9663-a373ae31eb56","Type":"ContainerDied","Data":"1a16adc836bd27406a32b6f9e9672d40ce7e70e9caf414e0a9334fb34a8ec7ab"} Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.124715 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"21ce0311-ff05-4626-9663-a373ae31eb56","Type":"ContainerDied","Data":"96915bb97645358a6555ca60c9308596dd68c9b71a65da098dd5679653d9f202"} Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.124755 4632 scope.go:117] "RemoveContainer" containerID="1a16adc836bd27406a32b6f9e9672d40ce7e70e9caf414e0a9334fb34a8ec7ab" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.282860 4632 scope.go:117] "RemoveContainer" containerID="1a16adc836bd27406a32b6f9e9672d40ce7e70e9caf414e0a9334fb34a8ec7ab" Mar 13 10:30:38 crc kubenswrapper[4632]: E0313 10:30:38.288230 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a16adc836bd27406a32b6f9e9672d40ce7e70e9caf414e0a9334fb34a8ec7ab\": container with ID starting with 1a16adc836bd27406a32b6f9e9672d40ce7e70e9caf414e0a9334fb34a8ec7ab not found: ID does not exist" containerID="1a16adc836bd27406a32b6f9e9672d40ce7e70e9caf414e0a9334fb34a8ec7ab" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.288288 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a16adc836bd27406a32b6f9e9672d40ce7e70e9caf414e0a9334fb34a8ec7ab"} err="failed to get container status \"1a16adc836bd27406a32b6f9e9672d40ce7e70e9caf414e0a9334fb34a8ec7ab\": rpc error: code = NotFound desc = could not find container \"1a16adc836bd27406a32b6f9e9672d40ce7e70e9caf414e0a9334fb34a8ec7ab\": container with ID starting with 1a16adc836bd27406a32b6f9e9672d40ce7e70e9caf414e0a9334fb34a8ec7ab not found: ID does not exist" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.326510 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.336838 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347009 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 10:30:38 crc kubenswrapper[4632]: E0313 10:30:38.347497 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347516 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" Mar 13 10:30:38 crc kubenswrapper[4632]: E0313 10:30:38.347530 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerName="extract-utilities" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347537 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerName="extract-utilities" Mar 13 10:30:38 crc kubenswrapper[4632]: E0313 10:30:38.347558 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" Mar 13 10:30:38 crc 
kubenswrapper[4632]: I0313 10:30:38.347565 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" Mar 13 10:30:38 crc kubenswrapper[4632]: E0313 10:30:38.347580 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e" containerName="oc" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347586 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e" containerName="oc" Mar 13 10:30:38 crc kubenswrapper[4632]: E0313 10:30:38.347596 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerName="registry-server" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347602 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerName="registry-server" Mar 13 10:30:38 crc kubenswrapper[4632]: E0313 10:30:38.347611 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerName="extract-content" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347619 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerName="extract-content" Mar 13 10:30:38 crc kubenswrapper[4632]: E0313 10:30:38.347631 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ce0311-ff05-4626-9663-a373ae31eb56" containerName="kube-state-metrics" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347638 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ce0311-ff05-4626-9663-a373ae31eb56" containerName="kube-state-metrics" Mar 13 10:30:38 crc kubenswrapper[4632]: E0313 10:30:38.347650 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347656 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" Mar 13 10:30:38 crc kubenswrapper[4632]: E0313 10:30:38.347666 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347672 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" Mar 13 10:30:38 crc kubenswrapper[4632]: E0313 10:30:38.347680 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347686 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" Mar 13 10:30:38 crc kubenswrapper[4632]: E0313 10:30:38.347700 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon-log" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347706 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon-log" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347911 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347920 4632 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347930 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0cabd29-ef3e-4808-8c92-3b032483789e" containerName="registry-server" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347952 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347962 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon-log" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347972 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e" containerName="oc" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.347983 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="21ce0311-ff05-4626-9663-a373ae31eb56" containerName="kube-state-metrics" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.348730 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.351635 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.351839 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.375078 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.406791 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ce3314-15f1-490c-83e5-a1c609212437-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"26ce3314-15f1-490c-83e5-a1c609212437\") " pod="openstack/kube-state-metrics-0" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.407180 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/26ce3314-15f1-490c-83e5-a1c609212437-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"26ce3314-15f1-490c-83e5-a1c609212437\") " pod="openstack/kube-state-metrics-0" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.407380 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ce3314-15f1-490c-83e5-a1c609212437-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"26ce3314-15f1-490c-83e5-a1c609212437\") " pod="openstack/kube-state-metrics-0" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.407534 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvb7r\" (UniqueName: \"kubernetes.io/projected/26ce3314-15f1-490c-83e5-a1c609212437-kube-api-access-nvb7r\") pod \"kube-state-metrics-0\" (UID: \"26ce3314-15f1-490c-83e5-a1c609212437\") " pod="openstack/kube-state-metrics-0" Mar 13 10:30:38 crc kubenswrapper[4632]: E0313 10:30:38.493877 4632 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21ce0311_ff05_4626_9663_a373ae31eb56.slice/crio-96915bb97645358a6555ca60c9308596dd68c9b71a65da098dd5679653d9f202\": RecentStats: unable to find data in memory cache]" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.509678 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ce3314-15f1-490c-83e5-a1c609212437-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"26ce3314-15f1-490c-83e5-a1c609212437\") " pod="openstack/kube-state-metrics-0" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.510065 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/26ce3314-15f1-490c-83e5-a1c609212437-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"26ce3314-15f1-490c-83e5-a1c609212437\") " pod="openstack/kube-state-metrics-0" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.510210 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ce3314-15f1-490c-83e5-a1c609212437-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"26ce3314-15f1-490c-83e5-a1c609212437\") " pod="openstack/kube-state-metrics-0" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.510320 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvb7r\" (UniqueName: \"kubernetes.io/projected/26ce3314-15f1-490c-83e5-a1c609212437-kube-api-access-nvb7r\") pod \"kube-state-metrics-0\" (UID: \"26ce3314-15f1-490c-83e5-a1c609212437\") " pod="openstack/kube-state-metrics-0" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.519116 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ce3314-15f1-490c-83e5-a1c609212437-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"26ce3314-15f1-490c-83e5-a1c609212437\") " pod="openstack/kube-state-metrics-0" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.528293 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ce3314-15f1-490c-83e5-a1c609212437-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"26ce3314-15f1-490c-83e5-a1c609212437\") " pod="openstack/kube-state-metrics-0" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.531625 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvb7r\" (UniqueName: \"kubernetes.io/projected/26ce3314-15f1-490c-83e5-a1c609212437-kube-api-access-nvb7r\") pod \"kube-state-metrics-0\" (UID: \"26ce3314-15f1-490c-83e5-a1c609212437\") " pod="openstack/kube-state-metrics-0" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.549830 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/26ce3314-15f1-490c-83e5-a1c609212437-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"26ce3314-15f1-490c-83e5-a1c609212437\") " pod="openstack/kube-state-metrics-0" Mar 13 10:30:38 crc kubenswrapper[4632]: I0313 10:30:38.678864 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 13 10:30:39 crc kubenswrapper[4632]: I0313 10:30:39.201968 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 10:30:39 crc kubenswrapper[4632]: I0313 10:30:39.859251 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:30:39 crc kubenswrapper[4632]: I0313 10:30:39.859790 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerName="ceilometer-central-agent" containerID="cri-o://38c2fc76cd7277bb1d4be79e157389e249663d48647565e9e7553c7283f85329" gracePeriod=30 Mar 13 10:30:39 crc kubenswrapper[4632]: I0313 10:30:39.859921 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerName="proxy-httpd" containerID="cri-o://aafa45b6b50dc62147a1600ded978371569f724c32856206cf7196aba295169c" gracePeriod=30 Mar 13 10:30:39 crc kubenswrapper[4632]: I0313 10:30:39.859976 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerName="sg-core" containerID="cri-o://2e3874df0599fa71baea0baa935a02f99ef35801e9c669049d94507f046a75da" gracePeriod=30 Mar 13 10:30:39 crc kubenswrapper[4632]: I0313 10:30:39.860010 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerName="ceilometer-notification-agent" containerID="cri-o://8fc781f30b1bf552b69e65bd6fbe07dd0ebc73a78af0ab302a58fadf4aa1b637" gracePeriod=30 Mar 13 10:30:40 crc kubenswrapper[4632]: I0313 10:30:40.057142 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21ce0311-ff05-4626-9663-a373ae31eb56" path="/var/lib/kubelet/pods/21ce0311-ff05-4626-9663-a373ae31eb56/volumes" Mar 13 10:30:40 crc kubenswrapper[4632]: I0313 10:30:40.101264 4632 generic.go:334] "Generic (PLEG): container finished" podID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerID="aafa45b6b50dc62147a1600ded978371569f724c32856206cf7196aba295169c" exitCode=0 Mar 13 10:30:40 crc kubenswrapper[4632]: I0313 10:30:40.101515 4632 generic.go:334] "Generic (PLEG): container finished" podID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerID="2e3874df0599fa71baea0baa935a02f99ef35801e9c669049d94507f046a75da" exitCode=2 Mar 13 10:30:40 crc kubenswrapper[4632]: I0313 10:30:40.101748 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b88a-a0cc-4ef1-8267-30f73d9712e7","Type":"ContainerDied","Data":"aafa45b6b50dc62147a1600ded978371569f724c32856206cf7196aba295169c"} Mar 13 10:30:40 crc kubenswrapper[4632]: I0313 10:30:40.101776 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b88a-a0cc-4ef1-8267-30f73d9712e7","Type":"ContainerDied","Data":"2e3874df0599fa71baea0baa935a02f99ef35801e9c669049d94507f046a75da"} Mar 13 10:30:40 crc kubenswrapper[4632]: I0313 10:30:40.109478 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26ce3314-15f1-490c-83e5-a1c609212437","Type":"ContainerStarted","Data":"58a379ff336fdb7ea32918613cb19f466310efd4b6a05aea4a6b056ea6809ecf"} Mar 13 10:30:40 crc kubenswrapper[4632]: I0313 10:30:40.109531 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"26ce3314-15f1-490c-83e5-a1c609212437","Type":"ContainerStarted","Data":"2868b0d004bb34c44e511b76b488032e3fe9956857da8e812917313f0b5776f6"} Mar 13 10:30:40 crc kubenswrapper[4632]: I0313 10:30:40.109650 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 13 10:30:40 crc kubenswrapper[4632]: I0313 10:30:40.133480 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.744160599 podStartE2EDuration="2.13345633s" podCreationTimestamp="2026-03-13 10:30:38 +0000 UTC" firstStartedPulling="2026-03-13 10:30:39.23663199 +0000 UTC m=+1613.259162113" lastFinishedPulling="2026-03-13 10:30:39.625927711 +0000 UTC m=+1613.648457844" observedRunningTime="2026-03-13 10:30:40.124398992 +0000 UTC m=+1614.146929125" watchObservedRunningTime="2026-03-13 10:30:40.13345633 +0000 UTC m=+1614.155986473" Mar 13 10:30:40 crc kubenswrapper[4632]: I0313 10:30:40.461313 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:30:40 crc kubenswrapper[4632]: I0313 10:30:40.461370 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:30:41 crc kubenswrapper[4632]: I0313 10:30:41.119674 4632 generic.go:334] "Generic (PLEG): container finished" podID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerID="38c2fc76cd7277bb1d4be79e157389e249663d48647565e9e7553c7283f85329" exitCode=0 Mar 13 10:30:41 crc kubenswrapper[4632]: I0313 10:30:41.119750 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b88a-a0cc-4ef1-8267-30f73d9712e7","Type":"ContainerDied","Data":"38c2fc76cd7277bb1d4be79e157389e249663d48647565e9e7553c7283f85329"} Mar 13 10:30:41 crc kubenswrapper[4632]: I0313 10:30:41.746261 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:30:41 crc kubenswrapper[4632]: I0313 10:30:41.827493 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-log-httpd\") pod \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " Mar 13 10:30:41 crc kubenswrapper[4632]: I0313 10:30:41.827668 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-combined-ca-bundle\") pod \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " Mar 13 10:30:41 crc kubenswrapper[4632]: I0313 10:30:41.827772 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-config-data\") pod \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " Mar 13 10:30:41 crc kubenswrapper[4632]: I0313 10:30:41.827985 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-sg-core-conf-yaml\") pod \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " Mar 13 10:30:41 crc kubenswrapper[4632]: I0313 10:30:41.828180 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvm2p\" (UniqueName: \"kubernetes.io/projected/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-kube-api-access-nvm2p\") pod \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " Mar 13 10:30:41 crc kubenswrapper[4632]: I0313 10:30:41.948545 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-scripts\") pod \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " Mar 13 10:30:41 crc kubenswrapper[4632]: I0313 10:30:41.948626 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-run-httpd\") pod \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\" (UID: \"bc34b88a-a0cc-4ef1-8267-30f73d9712e7\") " Mar 13 10:30:41 crc kubenswrapper[4632]: I0313 10:30:41.859638 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bc34b88a-a0cc-4ef1-8267-30f73d9712e7" (UID: "bc34b88a-a0cc-4ef1-8267-30f73d9712e7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:30:41 crc kubenswrapper[4632]: I0313 10:30:41.954870 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bc34b88a-a0cc-4ef1-8267-30f73d9712e7" (UID: "bc34b88a-a0cc-4ef1-8267-30f73d9712e7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:30:41 crc kubenswrapper[4632]: I0313 10:30:41.998659 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-scripts" (OuterVolumeSpecName: "scripts") pod "bc34b88a-a0cc-4ef1-8267-30f73d9712e7" (UID: "bc34b88a-a0cc-4ef1-8267-30f73d9712e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.005343 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bc34b88a-a0cc-4ef1-8267-30f73d9712e7" (UID: "bc34b88a-a0cc-4ef1-8267-30f73d9712e7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.014688 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-kube-api-access-nvm2p" (OuterVolumeSpecName: "kube-api-access-nvm2p") pod "bc34b88a-a0cc-4ef1-8267-30f73d9712e7" (UID: "bc34b88a-a0cc-4ef1-8267-30f73d9712e7"). InnerVolumeSpecName "kube-api-access-nvm2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.059836 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvm2p\" (UniqueName: \"kubernetes.io/projected/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-kube-api-access-nvm2p\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.059869 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.059879 4632 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.059889 4632 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.059900 4632 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.163886 4632 generic.go:334] "Generic (PLEG): container finished" podID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerID="8fc781f30b1bf552b69e65bd6fbe07dd0ebc73a78af0ab302a58fadf4aa1b637" exitCode=0 Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.164002 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.170894 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc34b88a-a0cc-4ef1-8267-30f73d9712e7" (UID: "bc34b88a-a0cc-4ef1-8267-30f73d9712e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.182518 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b88a-a0cc-4ef1-8267-30f73d9712e7","Type":"ContainerDied","Data":"8fc781f30b1bf552b69e65bd6fbe07dd0ebc73a78af0ab302a58fadf4aa1b637"} Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.182581 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b88a-a0cc-4ef1-8267-30f73d9712e7","Type":"ContainerDied","Data":"43e2d3c1f3ad2fd1a1419876a0a3f1ee25556cf01d81619d7588168a228ea654"} Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.182606 4632 scope.go:117] "RemoveContainer" containerID="aafa45b6b50dc62147a1600ded978371569f724c32856206cf7196aba295169c" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.217694 4632 scope.go:117] "RemoveContainer" containerID="2e3874df0599fa71baea0baa935a02f99ef35801e9c669049d94507f046a75da" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.256195 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-config-data" (OuterVolumeSpecName: "config-data") pod "bc34b88a-a0cc-4ef1-8267-30f73d9712e7" (UID: "bc34b88a-a0cc-4ef1-8267-30f73d9712e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.266164 4632 scope.go:117] "RemoveContainer" containerID="8fc781f30b1bf552b69e65bd6fbe07dd0ebc73a78af0ab302a58fadf4aa1b637" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.269479 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.269526 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc34b88a-a0cc-4ef1-8267-30f73d9712e7-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.297528 4632 scope.go:117] "RemoveContainer" containerID="38c2fc76cd7277bb1d4be79e157389e249663d48647565e9e7553c7283f85329" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.333459 4632 scope.go:117] "RemoveContainer" containerID="aafa45b6b50dc62147a1600ded978371569f724c32856206cf7196aba295169c" Mar 13 10:30:42 crc kubenswrapper[4632]: E0313 10:30:42.333977 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aafa45b6b50dc62147a1600ded978371569f724c32856206cf7196aba295169c\": container with ID starting with aafa45b6b50dc62147a1600ded978371569f724c32856206cf7196aba295169c not found: ID does not exist" containerID="aafa45b6b50dc62147a1600ded978371569f724c32856206cf7196aba295169c" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.334026 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aafa45b6b50dc62147a1600ded978371569f724c32856206cf7196aba295169c"} err="failed to get container status \"aafa45b6b50dc62147a1600ded978371569f724c32856206cf7196aba295169c\": rpc error: code = NotFound desc = could not find container \"aafa45b6b50dc62147a1600ded978371569f724c32856206cf7196aba295169c\": container with ID starting with aafa45b6b50dc62147a1600ded978371569f724c32856206cf7196aba295169c not found: ID does not 
exist" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.334054 4632 scope.go:117] "RemoveContainer" containerID="2e3874df0599fa71baea0baa935a02f99ef35801e9c669049d94507f046a75da" Mar 13 10:30:42 crc kubenswrapper[4632]: E0313 10:30:42.334549 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e3874df0599fa71baea0baa935a02f99ef35801e9c669049d94507f046a75da\": container with ID starting with 2e3874df0599fa71baea0baa935a02f99ef35801e9c669049d94507f046a75da not found: ID does not exist" containerID="2e3874df0599fa71baea0baa935a02f99ef35801e9c669049d94507f046a75da" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.334586 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e3874df0599fa71baea0baa935a02f99ef35801e9c669049d94507f046a75da"} err="failed to get container status \"2e3874df0599fa71baea0baa935a02f99ef35801e9c669049d94507f046a75da\": rpc error: code = NotFound desc = could not find container \"2e3874df0599fa71baea0baa935a02f99ef35801e9c669049d94507f046a75da\": container with ID starting with 2e3874df0599fa71baea0baa935a02f99ef35801e9c669049d94507f046a75da not found: ID does not exist" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.334604 4632 scope.go:117] "RemoveContainer" containerID="8fc781f30b1bf552b69e65bd6fbe07dd0ebc73a78af0ab302a58fadf4aa1b637" Mar 13 10:30:42 crc kubenswrapper[4632]: E0313 10:30:42.334932 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fc781f30b1bf552b69e65bd6fbe07dd0ebc73a78af0ab302a58fadf4aa1b637\": container with ID starting with 8fc781f30b1bf552b69e65bd6fbe07dd0ebc73a78af0ab302a58fadf4aa1b637 not found: ID does not exist" containerID="8fc781f30b1bf552b69e65bd6fbe07dd0ebc73a78af0ab302a58fadf4aa1b637" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.335064 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fc781f30b1bf552b69e65bd6fbe07dd0ebc73a78af0ab302a58fadf4aa1b637"} err="failed to get container status \"8fc781f30b1bf552b69e65bd6fbe07dd0ebc73a78af0ab302a58fadf4aa1b637\": rpc error: code = NotFound desc = could not find container \"8fc781f30b1bf552b69e65bd6fbe07dd0ebc73a78af0ab302a58fadf4aa1b637\": container with ID starting with 8fc781f30b1bf552b69e65bd6fbe07dd0ebc73a78af0ab302a58fadf4aa1b637 not found: ID does not exist" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.335084 4632 scope.go:117] "RemoveContainer" containerID="38c2fc76cd7277bb1d4be79e157389e249663d48647565e9e7553c7283f85329" Mar 13 10:30:42 crc kubenswrapper[4632]: E0313 10:30:42.335383 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38c2fc76cd7277bb1d4be79e157389e249663d48647565e9e7553c7283f85329\": container with ID starting with 38c2fc76cd7277bb1d4be79e157389e249663d48647565e9e7553c7283f85329 not found: ID does not exist" containerID="38c2fc76cd7277bb1d4be79e157389e249663d48647565e9e7553c7283f85329" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.335418 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38c2fc76cd7277bb1d4be79e157389e249663d48647565e9e7553c7283f85329"} err="failed to get container status \"38c2fc76cd7277bb1d4be79e157389e249663d48647565e9e7553c7283f85329\": rpc error: code = NotFound desc = could not find container 
\"38c2fc76cd7277bb1d4be79e157389e249663d48647565e9e7553c7283f85329\": container with ID starting with 38c2fc76cd7277bb1d4be79e157389e249663d48647565e9e7553c7283f85329 not found: ID does not exist" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.512692 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.521075 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.547209 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:30:42 crc kubenswrapper[4632]: E0313 10:30:42.547722 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerName="ceilometer-notification-agent" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.547749 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerName="ceilometer-notification-agent" Mar 13 10:30:42 crc kubenswrapper[4632]: E0313 10:30:42.547770 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerName="sg-core" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.547782 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerName="sg-core" Mar 13 10:30:42 crc kubenswrapper[4632]: E0313 10:30:42.547797 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerName="proxy-httpd" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.547806 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerName="proxy-httpd" Mar 13 10:30:42 crc kubenswrapper[4632]: E0313 10:30:42.547832 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerName="ceilometer-central-agent" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.547840 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerName="ceilometer-central-agent" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.548086 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.548111 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4afb91-ce26-4325-89c9-2542da2ec48a" containerName="horizon" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.548132 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerName="ceilometer-central-agent" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.548148 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerName="ceilometer-notification-agent" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.548168 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerName="proxy-httpd" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.548180 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" containerName="sg-core" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.555172 4632 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.561842 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.562348 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.562682 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.582649 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.677629 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.677704 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.677744 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac97dc03-9537-4f95-bb79-5bb60a99089d-run-httpd\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.677783 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-scripts\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.677872 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-config-data\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.677907 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22tlw\" (UniqueName: \"kubernetes.io/projected/ac97dc03-9537-4f95-bb79-5bb60a99089d-kube-api-access-22tlw\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.677969 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac97dc03-9537-4f95-bb79-5bb60a99089d-log-httpd\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.678078 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.779835 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-config-data\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.779908 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22tlw\" (UniqueName: \"kubernetes.io/projected/ac97dc03-9537-4f95-bb79-5bb60a99089d-kube-api-access-22tlw\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.779974 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac97dc03-9537-4f95-bb79-5bb60a99089d-log-httpd\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.780049 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.780085 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.780113 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.780133 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac97dc03-9537-4f95-bb79-5bb60a99089d-run-httpd\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.780820 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac97dc03-9537-4f95-bb79-5bb60a99089d-run-httpd\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.780866 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-scripts\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.781493 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ac97dc03-9537-4f95-bb79-5bb60a99089d-log-httpd\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.785074 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.788367 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.790477 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.793437 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-scripts\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.795649 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-config-data\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.819009 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22tlw\" (UniqueName: \"kubernetes.io/projected/ac97dc03-9537-4f95-bb79-5bb60a99089d-kube-api-access-22tlw\") pod \"ceilometer-0\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " pod="openstack/ceilometer-0" Mar 13 10:30:42 crc kubenswrapper[4632]: I0313 10:30:42.904302 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 10:30:43 crc kubenswrapper[4632]: I0313 10:30:43.530438 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 10:30:43 crc kubenswrapper[4632]: W0313 10:30:43.543251 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac97dc03_9537_4f95_bb79_5bb60a99089d.slice/crio-6c35a65f59ec813bb19b2b3e4862d24780f1cf59570c0c358308767506eead20 WatchSource:0}: Error finding container 6c35a65f59ec813bb19b2b3e4862d24780f1cf59570c0c358308767506eead20: Status 404 returned error can't find the container with id 6c35a65f59ec813bb19b2b3e4862d24780f1cf59570c0c358308767506eead20 Mar 13 10:30:44 crc kubenswrapper[4632]: I0313 10:30:44.060306 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc34b88a-a0cc-4ef1-8267-30f73d9712e7" path="/var/lib/kubelet/pods/bc34b88a-a0cc-4ef1-8267-30f73d9712e7/volumes" Mar 13 10:30:44 crc kubenswrapper[4632]: I0313 10:30:44.193614 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac97dc03-9537-4f95-bb79-5bb60a99089d","Type":"ContainerStarted","Data":"3da76186915cfbbbe688750a6110b1e64143d37e61c44ef62a9740eabb32c983"} Mar 13 10:30:44 crc kubenswrapper[4632]: I0313 10:30:44.193680 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac97dc03-9537-4f95-bb79-5bb60a99089d","Type":"ContainerStarted","Data":"6c35a65f59ec813bb19b2b3e4862d24780f1cf59570c0c358308767506eead20"} Mar 13 10:30:45 crc kubenswrapper[4632]: I0313 10:30:45.221916 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac97dc03-9537-4f95-bb79-5bb60a99089d","Type":"ContainerStarted","Data":"5d34565e3f3d53e4eb4eec7fc127b7d0ef95db5c894a8b9fbc65ec70d12e4d20"} Mar 13 10:30:46 crc kubenswrapper[4632]: I0313 10:30:46.238543 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac97dc03-9537-4f95-bb79-5bb60a99089d","Type":"ContainerStarted","Data":"7784ac325dc1b12d740a758a07a8e9e03da012db50eef1bc62b207161880f530"} Mar 13 10:30:47 crc kubenswrapper[4632]: I0313 10:30:47.425893 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 10:30:48 crc kubenswrapper[4632]: I0313 10:30:48.266992 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac97dc03-9537-4f95-bb79-5bb60a99089d","Type":"ContainerStarted","Data":"9a2b2ece3b9e850a4d4ebe5776040511a71de7bd0fba43340538aa166e80ade2"} Mar 13 10:30:48 crc kubenswrapper[4632]: I0313 10:30:48.267386 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 13 10:30:48 crc kubenswrapper[4632]: I0313 10:30:48.292580 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.794955469 podStartE2EDuration="6.292554821s" podCreationTimestamp="2026-03-13 10:30:42 +0000 UTC" firstStartedPulling="2026-03-13 10:30:43.546552506 +0000 UTC m=+1617.569082639" lastFinishedPulling="2026-03-13 10:30:47.044151868 +0000 UTC m=+1621.066681991" observedRunningTime="2026-03-13 10:30:48.286647559 +0000 UTC m=+1622.309177692" watchObservedRunningTime="2026-03-13 10:30:48.292554821 +0000 UTC m=+1622.315084964" Mar 13 10:30:48 crc kubenswrapper[4632]: I0313 10:30:48.646994 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 10:30:48 crc kubenswrapper[4632]: I0313 10:30:48.964165 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 13 10:30:53 crc kubenswrapper[4632]: I0313 10:30:53.858705 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="211718f0-f29c-457b-bc2b-487bb76d4801" containerName="rabbitmq" containerID="cri-o://40d92cf95f1cc26685e0359414b43dbdc31eeb90ab4b39c564b241d3fcc263fe" gracePeriod=604794 Mar 13 10:30:55 crc kubenswrapper[4632]: I0313 10:30:55.013765 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="159c6cee-c82b-4725-82d6-dbd27216f53c" containerName="rabbitmq" containerID="cri-o://d8fa91cb90a686638520d703bb5ab925cd9f40c680cdbe53067f753945b6ae3f" gracePeriod=604794 Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.514148 4632 generic.go:334] "Generic (PLEG): container finished" podID="211718f0-f29c-457b-bc2b-487bb76d4801" containerID="40d92cf95f1cc26685e0359414b43dbdc31eeb90ab4b39c564b241d3fcc263fe" exitCode=0 Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.514372 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"211718f0-f29c-457b-bc2b-487bb76d4801","Type":"ContainerDied","Data":"40d92cf95f1cc26685e0359414b43dbdc31eeb90ab4b39c564b241d3fcc263fe"} Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.514724 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"211718f0-f29c-457b-bc2b-487bb76d4801","Type":"ContainerDied","Data":"fd0dcad1534e2c23d238622a824c4e32c97444e16220054d2406cb0e89183756"} Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.514741 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd0dcad1534e2c23d238622a824c4e32c97444e16220054d2406cb0e89183756" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.529759 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.634168 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="159c6cee-c82b-4725-82d6-dbd27216f53c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.722650 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-server-conf\") pod \"211718f0-f29c-457b-bc2b-487bb76d4801\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.722727 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-confd\") pod \"211718f0-f29c-457b-bc2b-487bb76d4801\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.722751 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-plugins-conf\") pod \"211718f0-f29c-457b-bc2b-487bb76d4801\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.722793 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"211718f0-f29c-457b-bc2b-487bb76d4801\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.722829 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-plugins\") pod \"211718f0-f29c-457b-bc2b-487bb76d4801\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.722957 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/211718f0-f29c-457b-bc2b-487bb76d4801-erlang-cookie-secret\") pod \"211718f0-f29c-457b-bc2b-487bb76d4801\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.722992 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kfgh\" (UniqueName: \"kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-kube-api-access-6kfgh\") pod \"211718f0-f29c-457b-bc2b-487bb76d4801\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.723011 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-tls\") pod \"211718f0-f29c-457b-bc2b-487bb76d4801\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.723316 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "211718f0-f29c-457b-bc2b-487bb76d4801" (UID: "211718f0-f29c-457b-bc2b-487bb76d4801"). 
InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.723384 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-erlang-cookie\") pod \"211718f0-f29c-457b-bc2b-487bb76d4801\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.723431 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-config-data\") pod \"211718f0-f29c-457b-bc2b-487bb76d4801\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.723470 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "211718f0-f29c-457b-bc2b-487bb76d4801" (UID: "211718f0-f29c-457b-bc2b-487bb76d4801"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.723489 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/211718f0-f29c-457b-bc2b-487bb76d4801-pod-info\") pod \"211718f0-f29c-457b-bc2b-487bb76d4801\" (UID: \"211718f0-f29c-457b-bc2b-487bb76d4801\") " Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.724414 4632 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.724436 4632 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.729308 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "211718f0-f29c-457b-bc2b-487bb76d4801" (UID: "211718f0-f29c-457b-bc2b-487bb76d4801"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.740257 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211718f0-f29c-457b-bc2b-487bb76d4801-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "211718f0-f29c-457b-bc2b-487bb76d4801" (UID: "211718f0-f29c-457b-bc2b-487bb76d4801"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.740455 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "211718f0-f29c-457b-bc2b-487bb76d4801" (UID: "211718f0-f29c-457b-bc2b-487bb76d4801"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.743991 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "211718f0-f29c-457b-bc2b-487bb76d4801" (UID: "211718f0-f29c-457b-bc2b-487bb76d4801"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.749867 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-kube-api-access-6kfgh" (OuterVolumeSpecName: "kube-api-access-6kfgh") pod "211718f0-f29c-457b-bc2b-487bb76d4801" (UID: "211718f0-f29c-457b-bc2b-487bb76d4801"). InnerVolumeSpecName "kube-api-access-6kfgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.750053 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/211718f0-f29c-457b-bc2b-487bb76d4801-pod-info" (OuterVolumeSpecName: "pod-info") pod "211718f0-f29c-457b-bc2b-487bb76d4801" (UID: "211718f0-f29c-457b-bc2b-487bb76d4801"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.819119 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-config-data" (OuterVolumeSpecName: "config-data") pod "211718f0-f29c-457b-bc2b-487bb76d4801" (UID: "211718f0-f29c-457b-bc2b-487bb76d4801"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.836659 4632 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.836691 4632 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/211718f0-f29c-457b-bc2b-487bb76d4801-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.836704 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kfgh\" (UniqueName: \"kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-kube-api-access-6kfgh\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.836713 4632 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.836723 4632 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.836733 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.836741 4632 reconciler_common.go:293] "Volume detached for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/211718f0-f29c-457b-bc2b-487bb76d4801-pod-info\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.838370 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-server-conf" (OuterVolumeSpecName: "server-conf") pod "211718f0-f29c-457b-bc2b-487bb76d4801" (UID: "211718f0-f29c-457b-bc2b-487bb76d4801"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.887517 4632 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.940267 4632 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/211718f0-f29c-457b-bc2b-487bb76d4801-server-conf\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.940306 4632 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:00 crc kubenswrapper[4632]: I0313 10:31:00.954987 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "211718f0-f29c-457b-bc2b-487bb76d4801" (UID: "211718f0-f29c-457b-bc2b-487bb76d4801"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.050423 4632 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/211718f0-f29c-457b-bc2b-487bb76d4801-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.555907 4632 generic.go:334] "Generic (PLEG): container finished" podID="159c6cee-c82b-4725-82d6-dbd27216f53c" containerID="d8fa91cb90a686638520d703bb5ab925cd9f40c680cdbe53067f753945b6ae3f" exitCode=0 Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.556278 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.558094 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"159c6cee-c82b-4725-82d6-dbd27216f53c","Type":"ContainerDied","Data":"d8fa91cb90a686638520d703bb5ab925cd9f40c680cdbe53067f753945b6ae3f"} Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.648783 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.677857 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.739722 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 10:31:01 crc kubenswrapper[4632]: E0313 10:31:01.742909 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211718f0-f29c-457b-bc2b-487bb76d4801" containerName="setup-container" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.742980 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="211718f0-f29c-457b-bc2b-487bb76d4801" containerName="setup-container" Mar 13 10:31:01 crc kubenswrapper[4632]: E0313 10:31:01.743044 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211718f0-f29c-457b-bc2b-487bb76d4801" containerName="rabbitmq" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.743054 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="211718f0-f29c-457b-bc2b-487bb76d4801" containerName="rabbitmq" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.743524 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="211718f0-f29c-457b-bc2b-487bb76d4801" containerName="rabbitmq" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.745768 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.750854 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.751520 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.751779 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.752082 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.752220 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.752435 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x424t" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.753033 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.760958 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.874201 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-config-data\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.874495 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.874590 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.874733 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.874869 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vfbl\" (UniqueName: \"kubernetes.io/projected/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-kube-api-access-9vfbl\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.875024 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.875134 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.875257 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.875408 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.875555 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.875662 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.879790 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.977179 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-plugins-conf\") pod \"159c6cee-c82b-4725-82d6-dbd27216f53c\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.977290 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8hmx\" (UniqueName: \"kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-kube-api-access-k8hmx\") pod \"159c6cee-c82b-4725-82d6-dbd27216f53c\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.978603 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/159c6cee-c82b-4725-82d6-dbd27216f53c-pod-info\") pod \"159c6cee-c82b-4725-82d6-dbd27216f53c\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.978674 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-confd\") pod \"159c6cee-c82b-4725-82d6-dbd27216f53c\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.978701 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/159c6cee-c82b-4725-82d6-dbd27216f53c-erlang-cookie-secret\") pod \"159c6cee-c82b-4725-82d6-dbd27216f53c\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.978735 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"159c6cee-c82b-4725-82d6-dbd27216f53c\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.978811 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-tls\") pod \"159c6cee-c82b-4725-82d6-dbd27216f53c\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.978847 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-plugins\") pod \"159c6cee-c82b-4725-82d6-dbd27216f53c\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.978987 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-server-conf\") pod \"159c6cee-c82b-4725-82d6-dbd27216f53c\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.979026 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-erlang-cookie\") pod \"159c6cee-c82b-4725-82d6-dbd27216f53c\" (UID: 
\"159c6cee-c82b-4725-82d6-dbd27216f53c\") " Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.979079 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-config-data\") pod \"159c6cee-c82b-4725-82d6-dbd27216f53c\" (UID: \"159c6cee-c82b-4725-82d6-dbd27216f53c\") " Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.979361 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "159c6cee-c82b-4725-82d6-dbd27216f53c" (UID: "159c6cee-c82b-4725-82d6-dbd27216f53c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.979418 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.979673 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.979744 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.979875 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-config-data\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.980078 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.980138 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.980221 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.980321 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9vfbl\" (UniqueName: \"kubernetes.io/projected/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-kube-api-access-9vfbl\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.980421 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.980489 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.980618 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.980789 4632 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.986574 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "159c6cee-c82b-4725-82d6-dbd27216f53c" (UID: "159c6cee-c82b-4725-82d6-dbd27216f53c"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.995898 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.997265 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:01 crc kubenswrapper[4632]: I0313 10:31:01.997713 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-config-data\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.001829 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.002090 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "159c6cee-c82b-4725-82d6-dbd27216f53c" (UID: "159c6cee-c82b-4725-82d6-dbd27216f53c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.003513 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.005589 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.005762 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "159c6cee-c82b-4725-82d6-dbd27216f53c" (UID: "159c6cee-c82b-4725-82d6-dbd27216f53c"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.011102 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "159c6cee-c82b-4725-82d6-dbd27216f53c" (UID: "159c6cee-c82b-4725-82d6-dbd27216f53c"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.020777 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-kube-api-access-k8hmx" (OuterVolumeSpecName: "kube-api-access-k8hmx") pod "159c6cee-c82b-4725-82d6-dbd27216f53c" (UID: "159c6cee-c82b-4725-82d6-dbd27216f53c"). InnerVolumeSpecName "kube-api-access-k8hmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.020925 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.026838 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.026900 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.027289 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/159c6cee-c82b-4725-82d6-dbd27216f53c-pod-info" (OuterVolumeSpecName: "pod-info") pod "159c6cee-c82b-4725-82d6-dbd27216f53c" (UID: "159c6cee-c82b-4725-82d6-dbd27216f53c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.031769 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.038807 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/159c6cee-c82b-4725-82d6-dbd27216f53c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "159c6cee-c82b-4725-82d6-dbd27216f53c" (UID: "159c6cee-c82b-4725-82d6-dbd27216f53c"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.082848 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vfbl\" (UniqueName: \"kubernetes.io/projected/c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e-kube-api-access-9vfbl\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.100249 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="211718f0-f29c-457b-bc2b-487bb76d4801" path="/var/lib/kubelet/pods/211718f0-f29c-457b-bc2b-487bb76d4801/volumes" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.100652 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8hmx\" (UniqueName: \"kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-kube-api-access-k8hmx\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.100684 4632 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/159c6cee-c82b-4725-82d6-dbd27216f53c-pod-info\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.100696 4632 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/159c6cee-c82b-4725-82d6-dbd27216f53c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.100732 4632 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.100742 4632 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.100752 4632 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.100762 4632 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.156057 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-config-data" (OuterVolumeSpecName: "config-data") pod "159c6cee-c82b-4725-82d6-dbd27216f53c" (UID: "159c6cee-c82b-4725-82d6-dbd27216f53c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.170231 4632 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.171593 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e\") " pod="openstack/rabbitmq-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.202341 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.202382 4632 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.205448 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.229398 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-server-conf" (OuterVolumeSpecName: "server-conf") pod "159c6cee-c82b-4725-82d6-dbd27216f53c" (UID: "159c6cee-c82b-4725-82d6-dbd27216f53c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.305471 4632 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/159c6cee-c82b-4725-82d6-dbd27216f53c-server-conf\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.325311 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "159c6cee-c82b-4725-82d6-dbd27216f53c" (UID: "159c6cee-c82b-4725-82d6-dbd27216f53c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.408388 4632 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/159c6cee-c82b-4725-82d6-dbd27216f53c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.593330 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"159c6cee-c82b-4725-82d6-dbd27216f53c","Type":"ContainerDied","Data":"06613fdc2799f04ea62de7d5a6995bb48161830d28a55edb1ede1542c640e10e"} Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.593736 4632 scope.go:117] "RemoveContainer" containerID="d8fa91cb90a686638520d703bb5ab925cd9f40c680cdbe53067f753945b6ae3f" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.593407 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.684022 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.705468 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.711174 4632 scope.go:117] "RemoveContainer" containerID="d5bd67d741203861cfd1afa23ec3f20fd6236a99625563ac3c10816dbb2a6677" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.744035 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 10:31:02 crc kubenswrapper[4632]: E0313 10:31:02.744624 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159c6cee-c82b-4725-82d6-dbd27216f53c" containerName="setup-container" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.744642 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="159c6cee-c82b-4725-82d6-dbd27216f53c" containerName="setup-container" Mar 13 10:31:02 crc kubenswrapper[4632]: E0313 10:31:02.744654 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159c6cee-c82b-4725-82d6-dbd27216f53c" containerName="rabbitmq" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.744664 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="159c6cee-c82b-4725-82d6-dbd27216f53c" containerName="rabbitmq" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.744911 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="159c6cee-c82b-4725-82d6-dbd27216f53c" containerName="rabbitmq" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.752444 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.767015 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.768808 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.768995 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.769102 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.769163 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.769288 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-m5r4h" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.769390 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.768813 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.818960 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.819051 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.820415 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.820474 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqgkl\" (UniqueName: \"kubernetes.io/projected/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-kube-api-access-fqgkl\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.820508 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.820565 4632 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.820690 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.820784 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.820850 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.820901 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.820968 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.925802 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.926240 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.926375 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.926624 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.926800 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.927076 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.927234 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.927275 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.927574 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.927624 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.928171 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.927643 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqgkl\" (UniqueName: \"kubernetes.io/projected/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-kube-api-access-fqgkl\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.928310 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.928356 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.928517 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.928531 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.935185 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.960551 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.961652 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.962774 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.963019 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqgkl\" (UniqueName: \"kubernetes.io/projected/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-kube-api-access-fqgkl\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.969176 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.971111 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a3d80d9f-c956-40f5-b2e1-8aea2f136b6e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:02 crc kubenswrapper[4632]: I0313 10:31:02.996049 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:03 crc kubenswrapper[4632]: I0313 10:31:03.095902 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:03 crc kubenswrapper[4632]: I0313 10:31:03.634540 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e","Type":"ContainerStarted","Data":"66423cf2bd8e9a56e0d8a5e063485d987fb7b493b018dca40689aaf9e621933f"} Mar 13 10:31:03 crc kubenswrapper[4632]: I0313 10:31:03.800555 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 10:31:04 crc kubenswrapper[4632]: I0313 10:31:04.057043 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="159c6cee-c82b-4725-82d6-dbd27216f53c" path="/var/lib/kubelet/pods/159c6cee-c82b-4725-82d6-dbd27216f53c/volumes" Mar 13 10:31:04 crc kubenswrapper[4632]: I0313 10:31:04.665507 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e","Type":"ContainerStarted","Data":"221855cb8b608a0fa79bdaa1a68882b74336883a0052291e98bb95a394015359"} Mar 13 10:31:05 crc kubenswrapper[4632]: I0313 10:31:05.376460 4632 scope.go:117] "RemoveContainer" containerID="92d546a480b1e583e7b11dc48ab2d570a4a8d7af0616de2352d72ca175520f17" Mar 13 10:31:05 crc kubenswrapper[4632]: I0313 10:31:05.431411 4632 scope.go:117] "RemoveContainer" containerID="40d92cf95f1cc26685e0359414b43dbdc31eeb90ab4b39c564b241d3fcc263fe" Mar 13 10:31:05 crc kubenswrapper[4632]: I0313 10:31:05.485789 4632 scope.go:117] "RemoveContainer" containerID="9011fe3e8ff19daa76b8d8bddf336d224d69f10272938404d994caa9a1a4d6ee" Mar 13 10:31:05 crc kubenswrapper[4632]: I0313 10:31:05.598583 4632 scope.go:117] "RemoveContainer" containerID="c6b6fdf02c5b942ff5eb86fa09449efd1927d429db47c31ad2d68c9602235d4f" Mar 13 10:31:05 crc kubenswrapper[4632]: I0313 10:31:05.643928 4632 scope.go:117] "RemoveContainer" containerID="572bb794023bd7d53a23050c721933f004db547126df9eaf9b5f8e767603f2d3" Mar 13 10:31:05 crc kubenswrapper[4632]: I0313 10:31:05.693447 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e","Type":"ContainerStarted","Data":"f69527ec2be6e4df808bf875b701d62b874066c28a489a38ef573bdae2b131dc"} Mar 13 10:31:05 crc kubenswrapper[4632]: I0313 10:31:05.763256 4632 scope.go:117] "RemoveContainer" containerID="6257821be47ec7e5943095f3b1d29a6e6fd0a1190515cb74642f7cb762d806d1" Mar 13 10:31:05 crc kubenswrapper[4632]: I0313 10:31:05.900512 4632 scope.go:117] "RemoveContainer" containerID="98a44d8e524895de3db65a2da91c25a6875681d7e31dfa6eb205635df601d593" Mar 13 10:31:05 crc kubenswrapper[4632]: I0313 10:31:05.969836 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57b9bf8b5-98n78"] Mar 13 10:31:05 crc kubenswrapper[4632]: I0313 10:31:05.983076 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:05 crc kubenswrapper[4632]: I0313 10:31:05.991630 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.045291 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57b9bf8b5-98n78"] Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.058269 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5pxf\" (UniqueName: \"kubernetes.io/projected/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-kube-api-access-v5pxf\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.058379 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-ovsdbserver-nb\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.058478 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-dns-swift-storage-0\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.058502 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-config\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.058553 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-ovsdbserver-sb\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.058575 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-dns-svc\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.058643 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-openstack-edpm-ipam\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.163038 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5pxf\" (UniqueName: \"kubernetes.io/projected/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-kube-api-access-v5pxf\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" 
(UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.163122 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-ovsdbserver-nb\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.163247 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-dns-swift-storage-0\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.163282 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-config\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.163338 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-ovsdbserver-sb\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.163371 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-dns-svc\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.163424 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-openstack-edpm-ipam\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.164641 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-config\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.164798 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-openstack-edpm-ipam\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.164898 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-ovsdbserver-nb\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 
crc kubenswrapper[4632]: I0313 10:31:06.165204 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-ovsdbserver-sb\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.165285 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-dns-svc\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.165682 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-dns-swift-storage-0\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.211075 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5pxf\" (UniqueName: \"kubernetes.io/projected/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-kube-api-access-v5pxf\") pod \"dnsmasq-dns-57b9bf8b5-98n78\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.316509 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:06 crc kubenswrapper[4632]: I0313 10:31:06.755034 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e","Type":"ContainerStarted","Data":"14c062fec112b9ff9dd032399f07c596e52d395c51c6e56a8d5ba1fc6a94ca9a"} Mar 13 10:31:07 crc kubenswrapper[4632]: I0313 10:31:07.029746 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57b9bf8b5-98n78"] Mar 13 10:31:07 crc kubenswrapper[4632]: W0313 10:31:07.039843 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c3ff36b_8bef_403f_b6ac_ec88e26e924f.slice/crio-11f7381d58b27be1d13563ddb1c5a150b98cb261eb7c5049caee80961374ea23 WatchSource:0}: Error finding container 11f7381d58b27be1d13563ddb1c5a150b98cb261eb7c5049caee80961374ea23: Status 404 returned error can't find the container with id 11f7381d58b27be1d13563ddb1c5a150b98cb261eb7c5049caee80961374ea23 Mar 13 10:31:07 crc kubenswrapper[4632]: I0313 10:31:07.778924 4632 generic.go:334] "Generic (PLEG): container finished" podID="7c3ff36b-8bef-403f-b6ac-ec88e26e924f" containerID="90dfbecc999c31c0a51b0624874627a8f3c0659cb11e205820b8e9aab659a4a1" exitCode=0 Mar 13 10:31:07 crc kubenswrapper[4632]: I0313 10:31:07.782698 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" event={"ID":"7c3ff36b-8bef-403f-b6ac-ec88e26e924f","Type":"ContainerDied","Data":"90dfbecc999c31c0a51b0624874627a8f3c0659cb11e205820b8e9aab659a4a1"} Mar 13 10:31:07 crc kubenswrapper[4632]: I0313 10:31:07.782878 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" 
event={"ID":"7c3ff36b-8bef-403f-b6ac-ec88e26e924f","Type":"ContainerStarted","Data":"11f7381d58b27be1d13563ddb1c5a150b98cb261eb7c5049caee80961374ea23"} Mar 13 10:31:08 crc kubenswrapper[4632]: I0313 10:31:08.791598 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" event={"ID":"7c3ff36b-8bef-403f-b6ac-ec88e26e924f","Type":"ContainerStarted","Data":"1e8d2b5aecd08236cabb2c50425d69df7147e32b58dae758550f96994f27f434"} Mar 13 10:31:08 crc kubenswrapper[4632]: I0313 10:31:08.792660 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:08 crc kubenswrapper[4632]: I0313 10:31:08.831636 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" podStartSLOduration=3.8316164820000003 podStartE2EDuration="3.831616482s" podCreationTimestamp="2026-03-13 10:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:31:08.817712787 +0000 UTC m=+1642.840242920" watchObservedRunningTime="2026-03-13 10:31:08.831616482 +0000 UTC m=+1642.854146615" Mar 13 10:31:10 crc kubenswrapper[4632]: I0313 10:31:10.461317 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:31:10 crc kubenswrapper[4632]: I0313 10:31:10.461655 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:31:10 crc kubenswrapper[4632]: I0313 10:31:10.461716 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:31:10 crc kubenswrapper[4632]: I0313 10:31:10.462423 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 10:31:10 crc kubenswrapper[4632]: I0313 10:31:10.462489 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" gracePeriod=600 Mar 13 10:31:10 crc kubenswrapper[4632]: E0313 10:31:10.590546 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:31:10 crc kubenswrapper[4632]: I0313 10:31:10.835412 
4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" exitCode=0 Mar 13 10:31:10 crc kubenswrapper[4632]: I0313 10:31:10.836816 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f"} Mar 13 10:31:10 crc kubenswrapper[4632]: I0313 10:31:10.836965 4632 scope.go:117] "RemoveContainer" containerID="a148dfa9ef48de458189e9fda19ce88937bedd25c3ec76e22d14f43a4745805f" Mar 13 10:31:10 crc kubenswrapper[4632]: I0313 10:31:10.837612 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:31:10 crc kubenswrapper[4632]: E0313 10:31:10.838093 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:31:12 crc kubenswrapper[4632]: I0313 10:31:12.916316 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.318104 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.390426 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-564797cccc-84dg2"] Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.390853 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-564797cccc-84dg2" podUID="ac568760-fbe3-49ca-af4a-13f7780a1ad2" containerName="dnsmasq-dns" containerID="cri-o://3807149ca5beac08d142f3e5ffa3b80f5bf9a97b93a119f317229b5a8536c4a3" gracePeriod=10 Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.620239 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b457785b5-7hzp6"] Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.621775 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.643698 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b457785b5-7hzp6"] Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.803719 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-openstack-edpm-ipam\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.804101 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-config\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.804146 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-dns-svc\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.804189 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-dns-swift-storage-0\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.804215 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-ovsdbserver-sb\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.804272 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-ovsdbserver-nb\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.804326 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6km2\" (UniqueName: \"kubernetes.io/projected/1aca78bb-c923-4964-9b4c-5f7fb50badba-kube-api-access-s6km2\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.898645 4632 generic.go:334] "Generic (PLEG): container finished" podID="ac568760-fbe3-49ca-af4a-13f7780a1ad2" containerID="3807149ca5beac08d142f3e5ffa3b80f5bf9a97b93a119f317229b5a8536c4a3" exitCode=0 Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.898697 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564797cccc-84dg2" 
event={"ID":"ac568760-fbe3-49ca-af4a-13f7780a1ad2","Type":"ContainerDied","Data":"3807149ca5beac08d142f3e5ffa3b80f5bf9a97b93a119f317229b5a8536c4a3"} Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.906584 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6km2\" (UniqueName: \"kubernetes.io/projected/1aca78bb-c923-4964-9b4c-5f7fb50badba-kube-api-access-s6km2\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.906647 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-openstack-edpm-ipam\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.906680 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-config\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.906716 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-dns-svc\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.906756 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-dns-swift-storage-0\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.906778 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-ovsdbserver-sb\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.906837 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-ovsdbserver-nb\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.910185 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-dns-svc\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.910705 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-ovsdbserver-nb\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: 
\"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.910781 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-ovsdbserver-sb\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.911058 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-dns-swift-storage-0\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.913522 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-openstack-edpm-ipam\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.922679 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aca78bb-c923-4964-9b4c-5f7fb50badba-config\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:16 crc kubenswrapper[4632]: I0313 10:31:16.963377 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6km2\" (UniqueName: \"kubernetes.io/projected/1aca78bb-c923-4964-9b4c-5f7fb50badba-kube-api-access-s6km2\") pod \"dnsmasq-dns-7b457785b5-7hzp6\" (UID: \"1aca78bb-c923-4964-9b4c-5f7fb50badba\") " pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.079311 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.112362 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-dns-swift-storage-0\") pod \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.112424 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-ovsdbserver-nb\") pod \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.112460 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-ovsdbserver-sb\") pod \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.112506 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-dns-svc\") pod \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.202387 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac568760-fbe3-49ca-af4a-13f7780a1ad2" (UID: "ac568760-fbe3-49ca-af4a-13f7780a1ad2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.213797 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djmgg\" (UniqueName: \"kubernetes.io/projected/ac568760-fbe3-49ca-af4a-13f7780a1ad2-kube-api-access-djmgg\") pod \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.214147 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-config\") pod \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\" (UID: \"ac568760-fbe3-49ca-af4a-13f7780a1ad2\") " Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.214581 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.217054 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac568760-fbe3-49ca-af4a-13f7780a1ad2" (UID: "ac568760-fbe3-49ca-af4a-13f7780a1ad2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.227335 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ac568760-fbe3-49ca-af4a-13f7780a1ad2" (UID: "ac568760-fbe3-49ca-af4a-13f7780a1ad2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.227610 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac568760-fbe3-49ca-af4a-13f7780a1ad2" (UID: "ac568760-fbe3-49ca-af4a-13f7780a1ad2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.228371 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac568760-fbe3-49ca-af4a-13f7780a1ad2-kube-api-access-djmgg" (OuterVolumeSpecName: "kube-api-access-djmgg") pod "ac568760-fbe3-49ca-af4a-13f7780a1ad2" (UID: "ac568760-fbe3-49ca-af4a-13f7780a1ad2"). InnerVolumeSpecName "kube-api-access-djmgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.242420 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.270184 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-config" (OuterVolumeSpecName: "config") pod "ac568760-fbe3-49ca-af4a-13f7780a1ad2" (UID: "ac568760-fbe3-49ca-af4a-13f7780a1ad2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.315789 4632 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.315835 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.315849 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.315863 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac568760-fbe3-49ca-af4a-13f7780a1ad2-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.315874 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djmgg\" (UniqueName: \"kubernetes.io/projected/ac568760-fbe3-49ca-af4a-13f7780a1ad2-kube-api-access-djmgg\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.754016 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b457785b5-7hzp6"] Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.914117 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" event={"ID":"1aca78bb-c923-4964-9b4c-5f7fb50badba","Type":"ContainerStarted","Data":"13b59ba479e5d6efa177855cf3835521bdc7ec51c03d2dc65872228d4234c924"} Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.917753 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564797cccc-84dg2" event={"ID":"ac568760-fbe3-49ca-af4a-13f7780a1ad2","Type":"ContainerDied","Data":"f114a94f0fb42ce2c1f69bef8fad045098717f536b5431da20286872b08fed02"} Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.917807 4632 scope.go:117] "RemoveContainer" containerID="3807149ca5beac08d142f3e5ffa3b80f5bf9a97b93a119f317229b5a8536c4a3" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.917984 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-564797cccc-84dg2" Mar 13 10:31:17 crc kubenswrapper[4632]: I0313 10:31:17.971290 4632 scope.go:117] "RemoveContainer" containerID="7c9783dd40660c9e8665537c8ead9f633309987f7dedc616633d346075b3da86" Mar 13 10:31:18 crc kubenswrapper[4632]: I0313 10:31:18.024181 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-564797cccc-84dg2"] Mar 13 10:31:18 crc kubenswrapper[4632]: I0313 10:31:18.036452 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-564797cccc-84dg2"] Mar 13 10:31:18 crc kubenswrapper[4632]: I0313 10:31:18.091689 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac568760-fbe3-49ca-af4a-13f7780a1ad2" path="/var/lib/kubelet/pods/ac568760-fbe3-49ca-af4a-13f7780a1ad2/volumes" Mar 13 10:31:18 crc kubenswrapper[4632]: I0313 10:31:18.929518 4632 generic.go:334] "Generic (PLEG): container finished" podID="1aca78bb-c923-4964-9b4c-5f7fb50badba" containerID="cf9f6f4ace996138ffbc8df4b970fa3faa1ff37d0ed951c237b3de812effd2da" exitCode=0 Mar 13 10:31:18 crc kubenswrapper[4632]: I0313 10:31:18.929888 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" event={"ID":"1aca78bb-c923-4964-9b4c-5f7fb50badba","Type":"ContainerDied","Data":"cf9f6f4ace996138ffbc8df4b970fa3faa1ff37d0ed951c237b3de812effd2da"} Mar 13 10:31:19 crc kubenswrapper[4632]: I0313 10:31:19.944991 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" event={"ID":"1aca78bb-c923-4964-9b4c-5f7fb50badba","Type":"ContainerStarted","Data":"71a2fc806710f00a6f21a003ef29d764436a754a876865e6540b7ecd4a886549"} Mar 13 10:31:19 crc kubenswrapper[4632]: I0313 10:31:19.945473 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:19 crc kubenswrapper[4632]: I0313 10:31:19.967443 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" podStartSLOduration=3.967422941 podStartE2EDuration="3.967422941s" podCreationTimestamp="2026-03-13 10:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:31:19.963459726 +0000 UTC m=+1653.985989859" watchObservedRunningTime="2026-03-13 10:31:19.967422941 +0000 UTC m=+1653.989953074" Mar 13 10:31:22 crc kubenswrapper[4632]: I0313 10:31:22.044594 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:31:22 crc kubenswrapper[4632]: E0313 10:31:22.045230 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:31:27 crc kubenswrapper[4632]: I0313 10:31:27.244124 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b457785b5-7hzp6" Mar 13 10:31:27 crc kubenswrapper[4632]: I0313 10:31:27.348587 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b9bf8b5-98n78"] Mar 13 10:31:27 crc kubenswrapper[4632]: I0313 10:31:27.348880 4632 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" podUID="7c3ff36b-8bef-403f-b6ac-ec88e26e924f" containerName="dnsmasq-dns" containerID="cri-o://1e8d2b5aecd08236cabb2c50425d69df7147e32b58dae758550f96994f27f434" gracePeriod=10 Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.026830 4632 generic.go:334] "Generic (PLEG): container finished" podID="7c3ff36b-8bef-403f-b6ac-ec88e26e924f" containerID="1e8d2b5aecd08236cabb2c50425d69df7147e32b58dae758550f96994f27f434" exitCode=0 Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.027445 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" event={"ID":"7c3ff36b-8bef-403f-b6ac-ec88e26e924f","Type":"ContainerDied","Data":"1e8d2b5aecd08236cabb2c50425d69df7147e32b58dae758550f96994f27f434"} Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.027474 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" event={"ID":"7c3ff36b-8bef-403f-b6ac-ec88e26e924f","Type":"ContainerDied","Data":"11f7381d58b27be1d13563ddb1c5a150b98cb261eb7c5049caee80961374ea23"} Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.027488 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11f7381d58b27be1d13563ddb1c5a150b98cb261eb7c5049caee80961374ea23" Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.042691 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.061630 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-dns-svc\") pod \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.061688 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-ovsdbserver-sb\") pod \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.061736 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5pxf\" (UniqueName: \"kubernetes.io/projected/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-kube-api-access-v5pxf\") pod \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.061771 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-openstack-edpm-ipam\") pod \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.061793 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-config\") pod \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.061822 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-ovsdbserver-nb\") pod \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.061863 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-dns-swift-storage-0\") pod \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\" (UID: \"7c3ff36b-8bef-403f-b6ac-ec88e26e924f\") " Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.100695 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-kube-api-access-v5pxf" (OuterVolumeSpecName: "kube-api-access-v5pxf") pod "7c3ff36b-8bef-403f-b6ac-ec88e26e924f" (UID: "7c3ff36b-8bef-403f-b6ac-ec88e26e924f"). InnerVolumeSpecName "kube-api-access-v5pxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.165160 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5pxf\" (UniqueName: \"kubernetes.io/projected/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-kube-api-access-v5pxf\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.217576 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7c3ff36b-8bef-403f-b6ac-ec88e26e924f" (UID: "7c3ff36b-8bef-403f-b6ac-ec88e26e924f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.242973 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7c3ff36b-8bef-403f-b6ac-ec88e26e924f" (UID: "7c3ff36b-8bef-403f-b6ac-ec88e26e924f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.243082 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c3ff36b-8bef-403f-b6ac-ec88e26e924f" (UID: "7c3ff36b-8bef-403f-b6ac-ec88e26e924f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.245683 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "7c3ff36b-8bef-403f-b6ac-ec88e26e924f" (UID: "7c3ff36b-8bef-403f-b6ac-ec88e26e924f"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.266680 4632 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.266725 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.266740 4632 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.266753 4632 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.271409 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7c3ff36b-8bef-403f-b6ac-ec88e26e924f" (UID: "7c3ff36b-8bef-403f-b6ac-ec88e26e924f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.274506 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-config" (OuterVolumeSpecName: "config") pod "7c3ff36b-8bef-403f-b6ac-ec88e26e924f" (UID: "7c3ff36b-8bef-403f-b6ac-ec88e26e924f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.369166 4632 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:28 crc kubenswrapper[4632]: I0313 10:31:28.369211 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3ff36b-8bef-403f-b6ac-ec88e26e924f-config\") on node \"crc\" DevicePath \"\"" Mar 13 10:31:29 crc kubenswrapper[4632]: I0313 10:31:29.038085 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57b9bf8b5-98n78" Mar 13 10:31:29 crc kubenswrapper[4632]: I0313 10:31:29.084739 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b9bf8b5-98n78"] Mar 13 10:31:29 crc kubenswrapper[4632]: I0313 10:31:29.098444 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57b9bf8b5-98n78"] Mar 13 10:31:30 crc kubenswrapper[4632]: I0313 10:31:30.104352 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c3ff36b-8bef-403f-b6ac-ec88e26e924f" path="/var/lib/kubelet/pods/7c3ff36b-8bef-403f-b6ac-ec88e26e924f/volumes" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.025837 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gjdvf"] Mar 13 10:31:34 crc kubenswrapper[4632]: E0313 10:31:34.026469 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3ff36b-8bef-403f-b6ac-ec88e26e924f" containerName="dnsmasq-dns" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.026481 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3ff36b-8bef-403f-b6ac-ec88e26e924f" containerName="dnsmasq-dns" Mar 13 10:31:34 crc kubenswrapper[4632]: E0313 10:31:34.026495 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac568760-fbe3-49ca-af4a-13f7780a1ad2" containerName="init" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.026501 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac568760-fbe3-49ca-af4a-13f7780a1ad2" containerName="init" Mar 13 10:31:34 crc kubenswrapper[4632]: E0313 10:31:34.026515 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3ff36b-8bef-403f-b6ac-ec88e26e924f" containerName="init" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.026521 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3ff36b-8bef-403f-b6ac-ec88e26e924f" containerName="init" Mar 13 10:31:34 crc kubenswrapper[4632]: E0313 10:31:34.026534 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac568760-fbe3-49ca-af4a-13f7780a1ad2" containerName="dnsmasq-dns" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.026540 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac568760-fbe3-49ca-af4a-13f7780a1ad2" containerName="dnsmasq-dns" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.026712 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c3ff36b-8bef-403f-b6ac-ec88e26e924f" containerName="dnsmasq-dns" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.026731 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac568760-fbe3-49ca-af4a-13f7780a1ad2" containerName="dnsmasq-dns" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.028042 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.042059 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjdvf"] Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.183961 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxklg\" (UniqueName: \"kubernetes.io/projected/03215c5e-aa7f-4865-8e14-7adb79cc6daa-kube-api-access-zxklg\") pod \"community-operators-gjdvf\" (UID: \"03215c5e-aa7f-4865-8e14-7adb79cc6daa\") " pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.184078 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03215c5e-aa7f-4865-8e14-7adb79cc6daa-catalog-content\") pod \"community-operators-gjdvf\" (UID: \"03215c5e-aa7f-4865-8e14-7adb79cc6daa\") " pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.184143 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03215c5e-aa7f-4865-8e14-7adb79cc6daa-utilities\") pod \"community-operators-gjdvf\" (UID: \"03215c5e-aa7f-4865-8e14-7adb79cc6daa\") " pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.285855 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03215c5e-aa7f-4865-8e14-7adb79cc6daa-catalog-content\") pod \"community-operators-gjdvf\" (UID: \"03215c5e-aa7f-4865-8e14-7adb79cc6daa\") " pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.286246 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03215c5e-aa7f-4865-8e14-7adb79cc6daa-utilities\") pod \"community-operators-gjdvf\" (UID: \"03215c5e-aa7f-4865-8e14-7adb79cc6daa\") " pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.286328 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxklg\" (UniqueName: \"kubernetes.io/projected/03215c5e-aa7f-4865-8e14-7adb79cc6daa-kube-api-access-zxklg\") pod \"community-operators-gjdvf\" (UID: \"03215c5e-aa7f-4865-8e14-7adb79cc6daa\") " pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.286738 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03215c5e-aa7f-4865-8e14-7adb79cc6daa-catalog-content\") pod \"community-operators-gjdvf\" (UID: \"03215c5e-aa7f-4865-8e14-7adb79cc6daa\") " pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.287140 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03215c5e-aa7f-4865-8e14-7adb79cc6daa-utilities\") pod \"community-operators-gjdvf\" (UID: \"03215c5e-aa7f-4865-8e14-7adb79cc6daa\") " pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.304906 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zxklg\" (UniqueName: \"kubernetes.io/projected/03215c5e-aa7f-4865-8e14-7adb79cc6daa-kube-api-access-zxklg\") pod \"community-operators-gjdvf\" (UID: \"03215c5e-aa7f-4865-8e14-7adb79cc6daa\") " pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.403874 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:31:34 crc kubenswrapper[4632]: I0313 10:31:34.935845 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjdvf"] Mar 13 10:31:35 crc kubenswrapper[4632]: I0313 10:31:35.126693 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjdvf" event={"ID":"03215c5e-aa7f-4865-8e14-7adb79cc6daa","Type":"ContainerStarted","Data":"49ad0dc835d9a21740d64cc609bbe889c1787b45abac10c16f9550eca410cb5e"} Mar 13 10:31:35 crc kubenswrapper[4632]: I0313 10:31:35.127025 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjdvf" event={"ID":"03215c5e-aa7f-4865-8e14-7adb79cc6daa","Type":"ContainerStarted","Data":"5bfc882deb40abae4190d943339bb6bc98f0f88a95b8725fe0fdc29c76d7cf9f"} Mar 13 10:31:36 crc kubenswrapper[4632]: I0313 10:31:36.142728 4632 generic.go:334] "Generic (PLEG): container finished" podID="03215c5e-aa7f-4865-8e14-7adb79cc6daa" containerID="49ad0dc835d9a21740d64cc609bbe889c1787b45abac10c16f9550eca410cb5e" exitCode=0 Mar 13 10:31:36 crc kubenswrapper[4632]: I0313 10:31:36.142791 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjdvf" event={"ID":"03215c5e-aa7f-4865-8e14-7adb79cc6daa","Type":"ContainerDied","Data":"49ad0dc835d9a21740d64cc609bbe889c1787b45abac10c16f9550eca410cb5e"} Mar 13 10:31:37 crc kubenswrapper[4632]: I0313 10:31:37.044775 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:31:37 crc kubenswrapper[4632]: E0313 10:31:37.046009 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:31:37 crc kubenswrapper[4632]: I0313 10:31:37.152409 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjdvf" event={"ID":"03215c5e-aa7f-4865-8e14-7adb79cc6daa","Type":"ContainerStarted","Data":"81f430399eabd7810f83b975907d2a562229273cc3df439ab9b33cbfde5ddc35"} Mar 13 10:31:38 crc kubenswrapper[4632]: I0313 10:31:38.164103 4632 generic.go:334] "Generic (PLEG): container finished" podID="c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e" containerID="f69527ec2be6e4df808bf875b701d62b874066c28a489a38ef573bdae2b131dc" exitCode=0 Mar 13 10:31:38 crc kubenswrapper[4632]: I0313 10:31:38.164158 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e","Type":"ContainerDied","Data":"f69527ec2be6e4df808bf875b701d62b874066c28a489a38ef573bdae2b131dc"} Mar 13 10:31:39 crc kubenswrapper[4632]: I0313 10:31:39.178322 4632 generic.go:334] "Generic (PLEG): 
container finished" podID="03215c5e-aa7f-4865-8e14-7adb79cc6daa" containerID="81f430399eabd7810f83b975907d2a562229273cc3df439ab9b33cbfde5ddc35" exitCode=0 Mar 13 10:31:39 crc kubenswrapper[4632]: I0313 10:31:39.178490 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjdvf" event={"ID":"03215c5e-aa7f-4865-8e14-7adb79cc6daa","Type":"ContainerDied","Data":"81f430399eabd7810f83b975907d2a562229273cc3df439ab9b33cbfde5ddc35"} Mar 13 10:31:39 crc kubenswrapper[4632]: I0313 10:31:39.182985 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e","Type":"ContainerStarted","Data":"6477bb63f9230d4eb7ed71b9808b2a7459fbb37920ec3edee2c2e8ce3382ade9"} Mar 13 10:31:39 crc kubenswrapper[4632]: I0313 10:31:39.183203 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 13 10:31:39 crc kubenswrapper[4632]: I0313 10:31:39.189065 4632 generic.go:334] "Generic (PLEG): container finished" podID="a3d80d9f-c956-40f5-b2e1-8aea2f136b6e" containerID="14c062fec112b9ff9dd032399f07c596e52d395c51c6e56a8d5ba1fc6a94ca9a" exitCode=0 Mar 13 10:31:39 crc kubenswrapper[4632]: I0313 10:31:39.189119 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e","Type":"ContainerDied","Data":"14c062fec112b9ff9dd032399f07c596e52d395c51c6e56a8d5ba1fc6a94ca9a"} Mar 13 10:31:39 crc kubenswrapper[4632]: I0313 10:31:39.336717 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.336693404 podStartE2EDuration="38.336693404s" podCreationTimestamp="2026-03-13 10:31:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:31:39.296024304 +0000 UTC m=+1673.318554437" watchObservedRunningTime="2026-03-13 10:31:39.336693404 +0000 UTC m=+1673.359223547" Mar 13 10:31:40 crc kubenswrapper[4632]: I0313 10:31:40.200615 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjdvf" event={"ID":"03215c5e-aa7f-4865-8e14-7adb79cc6daa","Type":"ContainerStarted","Data":"f3faae6eab76531261d4e9489936f30a1f2265caad99779ef7b44da581399110"} Mar 13 10:31:40 crc kubenswrapper[4632]: I0313 10:31:40.204010 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a3d80d9f-c956-40f5-b2e1-8aea2f136b6e","Type":"ContainerStarted","Data":"4acfb0604b12245729351effb8cd4a294fb3df90c4eb8d0f98d89deda4f3f3dc"} Mar 13 10:31:40 crc kubenswrapper[4632]: I0313 10:31:40.236923 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gjdvf" podStartSLOduration=2.57190617 podStartE2EDuration="6.236901406s" podCreationTimestamp="2026-03-13 10:31:34 +0000 UTC" firstStartedPulling="2026-03-13 10:31:36.144628434 +0000 UTC m=+1670.167158567" lastFinishedPulling="2026-03-13 10:31:39.80962367 +0000 UTC m=+1673.832153803" observedRunningTime="2026-03-13 10:31:40.23041819 +0000 UTC m=+1674.252948333" watchObservedRunningTime="2026-03-13 10:31:40.236901406 +0000 UTC m=+1674.259431529" Mar 13 10:31:40 crc kubenswrapper[4632]: I0313 10:31:40.260456 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.260436413 
Mar 13 10:31:40 crc kubenswrapper[4632]: I0313 10:31:40.260456 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.260436413 podStartE2EDuration="38.260436413s" podCreationTimestamp="2026-03-13 10:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:31:40.256953449 +0000 UTC m=+1674.279483592" watchObservedRunningTime="2026-03-13 10:31:40.260436413 +0000 UTC m=+1674.282966546" Mar 13 10:31:43 crc kubenswrapper[4632]: I0313 10:31:43.097238 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.261579 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9"] Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.263105 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.267401 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.268345 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.268405 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.269707 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qrzsx" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.283588 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9"] Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.369531 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9\" (UID: \"0ea59acf-3206-492e-a7a8-bf855823d92c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.369734 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9\" (UID: \"0ea59acf-3206-492e-a7a8-bf855823d92c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.370066 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9\" (UID: \"0ea59acf-3206-492e-a7a8-bf855823d92c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.370178 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7vn8\" (UniqueName: \"kubernetes.io/projected/0ea59acf-3206-492e-a7a8-bf855823d92c-kube-api-access-w7vn8\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9\" (UID: \"0ea59acf-3206-492e-a7a8-bf855823d92c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.406038 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.406085 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.472473 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9\" (UID: \"0ea59acf-3206-492e-a7a8-bf855823d92c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.472583 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7vn8\" (UniqueName: \"kubernetes.io/projected/0ea59acf-3206-492e-a7a8-bf855823d92c-kube-api-access-w7vn8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9\" (UID: \"0ea59acf-3206-492e-a7a8-bf855823d92c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.472636 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9\" (UID: \"0ea59acf-3206-492e-a7a8-bf855823d92c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.472849 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9\" (UID: \"0ea59acf-3206-492e-a7a8-bf855823d92c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.478264 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9\" (UID: \"0ea59acf-3206-492e-a7a8-bf855823d92c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.483215 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9\" (UID: \"0ea59acf-3206-492e-a7a8-bf855823d92c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.488614 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-ssh-key-openstack-edpm-ipam\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9\" (UID: \"0ea59acf-3206-492e-a7a8-bf855823d92c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.537841 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7vn8\" (UniqueName: \"kubernetes.io/projected/0ea59acf-3206-492e-a7a8-bf855823d92c-kube-api-access-w7vn8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9\" (UID: \"0ea59acf-3206-492e-a7a8-bf855823d92c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" Mar 13 10:31:44 crc kubenswrapper[4632]: I0313 10:31:44.588427 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" Mar 13 10:31:45 crc kubenswrapper[4632]: I0313 10:31:45.308045 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9"] Mar 13 10:31:45 crc kubenswrapper[4632]: I0313 10:31:45.454576 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gjdvf" podUID="03215c5e-aa7f-4865-8e14-7adb79cc6daa" containerName="registry-server" probeResult="failure" output=< Mar 13 10:31:45 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:31:45 crc kubenswrapper[4632]: > Mar 13 10:31:46 crc kubenswrapper[4632]: I0313 10:31:46.280473 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" event={"ID":"0ea59acf-3206-492e-a7a8-bf855823d92c","Type":"ContainerStarted","Data":"3b44df298dbf44fed38d413586ce5557104e682a2fdc9a900e9bbae85c8951c4"} Mar 13 10:31:50 crc kubenswrapper[4632]: I0313 10:31:50.045694 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:31:50 crc kubenswrapper[4632]: E0313 10:31:50.046693 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:31:52 crc kubenswrapper[4632]: I0313 10:31:52.221169 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 13 10:31:53 crc kubenswrapper[4632]: I0313 10:31:53.101775 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 13 10:31:55 crc kubenswrapper[4632]: I0313 10:31:55.464041 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gjdvf" podUID="03215c5e-aa7f-4865-8e14-7adb79cc6daa" containerName="registry-server" probeResult="failure" output=< Mar 13 10:31:55 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:31:55 crc kubenswrapper[4632]: > Mar 13 10:31:58 crc kubenswrapper[4632]: I0313 10:31:58.849394 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:31:59 crc kubenswrapper[4632]: I0313 10:31:59.523620 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" event={"ID":"0ea59acf-3206-492e-a7a8-bf855823d92c","Type":"ContainerStarted","Data":"2a45f7f396ff9ac0f8fe934eb95d769681cb829e23350a9d92e18b2aeedef144"} Mar 13 10:31:59 crc kubenswrapper[4632]: I0313 10:31:59.558192 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" podStartSLOduration=2.029747859 podStartE2EDuration="15.558164163s" podCreationTimestamp="2026-03-13 10:31:44 +0000 UTC" firstStartedPulling="2026-03-13 10:31:45.318732775 +0000 UTC m=+1679.341262908" lastFinishedPulling="2026-03-13 10:31:58.847149069 +0000 UTC m=+1692.869679212" observedRunningTime="2026-03-13 10:31:59.544181336 +0000 UTC m=+1693.566711479" watchObservedRunningTime="2026-03-13 10:31:59.558164163 +0000 UTC m=+1693.580694296" Mar 13 10:32:00 crc kubenswrapper[4632]: I0313 10:32:00.155966 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556632-sr4l5"] Mar 13 10:32:00 crc kubenswrapper[4632]: I0313 10:32:00.157835 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556632-sr4l5" Mar 13 10:32:00 crc kubenswrapper[4632]: I0313 10:32:00.160487 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:32:00 crc kubenswrapper[4632]: I0313 10:32:00.160926 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:32:00 crc kubenswrapper[4632]: I0313 10:32:00.161227 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:32:00 crc kubenswrapper[4632]: I0313 10:32:00.168165 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556632-sr4l5"] Mar 13 10:32:00 crc kubenswrapper[4632]: I0313 10:32:00.209993 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7q2t\" (UniqueName: \"kubernetes.io/projected/009f055c-d442-4b23-8f55-52a43362bbb2-kube-api-access-w7q2t\") pod \"auto-csr-approver-29556632-sr4l5\" (UID: \"009f055c-d442-4b23-8f55-52a43362bbb2\") " pod="openshift-infra/auto-csr-approver-29556632-sr4l5" Mar 13 10:32:00 crc kubenswrapper[4632]: I0313 10:32:00.312861 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7q2t\" (UniqueName: \"kubernetes.io/projected/009f055c-d442-4b23-8f55-52a43362bbb2-kube-api-access-w7q2t\") pod \"auto-csr-approver-29556632-sr4l5\" (UID: \"009f055c-d442-4b23-8f55-52a43362bbb2\") " pod="openshift-infra/auto-csr-approver-29556632-sr4l5" Mar 13 10:32:00 crc kubenswrapper[4632]: I0313 10:32:00.338788 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7q2t\" (UniqueName: \"kubernetes.io/projected/009f055c-d442-4b23-8f55-52a43362bbb2-kube-api-access-w7q2t\") pod \"auto-csr-approver-29556632-sr4l5\" (UID: \"009f055c-d442-4b23-8f55-52a43362bbb2\") " pod="openshift-infra/auto-csr-approver-29556632-sr4l5" Mar 13 10:32:00 crc kubenswrapper[4632]: I0313 10:32:00.482604 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556632-sr4l5" Mar 13 10:32:01 crc kubenswrapper[4632]: I0313 10:32:01.474135 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556632-sr4l5"] Mar 13 10:32:01 crc kubenswrapper[4632]: W0313 10:32:01.486072 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod009f055c_d442_4b23_8f55_52a43362bbb2.slice/crio-98b91929707278b3c3c358e0d943597f90b34c6e4534e5a39b399c489440ea9a WatchSource:0}: Error finding container 98b91929707278b3c3c358e0d943597f90b34c6e4534e5a39b399c489440ea9a: Status 404 returned error can't find the container with id 98b91929707278b3c3c358e0d943597f90b34c6e4534e5a39b399c489440ea9a Mar 13 10:32:01 crc kubenswrapper[4632]: I0313 10:32:01.543780 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556632-sr4l5" event={"ID":"009f055c-d442-4b23-8f55-52a43362bbb2","Type":"ContainerStarted","Data":"98b91929707278b3c3c358e0d943597f90b34c6e4534e5a39b399c489440ea9a"} Mar 13 10:32:03 crc kubenswrapper[4632]: I0313 10:32:03.044363 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:32:03 crc kubenswrapper[4632]: E0313 10:32:03.045011 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:32:03 crc kubenswrapper[4632]: I0313 10:32:03.564170 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556632-sr4l5" event={"ID":"009f055c-d442-4b23-8f55-52a43362bbb2","Type":"ContainerStarted","Data":"4ccfb76824c418f1c761a43ca7732c6a7a69b7b1944ea2ee35bd45c569e7d7c6"} Mar 13 10:32:03 crc kubenswrapper[4632]: I0313 10:32:03.583029 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556632-sr4l5" podStartSLOduration=2.408523498 podStartE2EDuration="3.583011419s" podCreationTimestamp="2026-03-13 10:32:00 +0000 UTC" firstStartedPulling="2026-03-13 10:32:01.489179825 +0000 UTC m=+1695.511709958" lastFinishedPulling="2026-03-13 10:32:02.663667746 +0000 UTC m=+1696.686197879" observedRunningTime="2026-03-13 10:32:03.580284133 +0000 UTC m=+1697.602814276" watchObservedRunningTime="2026-03-13 10:32:03.583011419 +0000 UTC m=+1697.605541542" Mar 13 10:32:04 crc kubenswrapper[4632]: I0313 10:32:04.466307 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:32:04 crc kubenswrapper[4632]: I0313 10:32:04.518772 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:32:04 crc kubenswrapper[4632]: I0313 10:32:04.577039 4632 generic.go:334] "Generic (PLEG): container finished" podID="009f055c-d442-4b23-8f55-52a43362bbb2" containerID="4ccfb76824c418f1c761a43ca7732c6a7a69b7b1944ea2ee35bd45c569e7d7c6" exitCode=0 Mar 13 10:32:04 crc kubenswrapper[4632]: I0313 10:32:04.577098 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29556632-sr4l5" event={"ID":"009f055c-d442-4b23-8f55-52a43362bbb2","Type":"ContainerDied","Data":"4ccfb76824c418f1c761a43ca7732c6a7a69b7b1944ea2ee35bd45c569e7d7c6"} Mar 13 10:32:05 crc kubenswrapper[4632]: I0313 10:32:05.230361 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjdvf"] Mar 13 10:32:05 crc kubenswrapper[4632]: I0313 10:32:05.586605 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gjdvf" podUID="03215c5e-aa7f-4865-8e14-7adb79cc6daa" containerName="registry-server" containerID="cri-o://f3faae6eab76531261d4e9489936f30a1f2265caad99779ef7b44da581399110" gracePeriod=2 Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.173004 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556632-sr4l5" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.180424 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.236482 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7q2t\" (UniqueName: \"kubernetes.io/projected/009f055c-d442-4b23-8f55-52a43362bbb2-kube-api-access-w7q2t\") pod \"009f055c-d442-4b23-8f55-52a43362bbb2\" (UID: \"009f055c-d442-4b23-8f55-52a43362bbb2\") " Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.236630 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03215c5e-aa7f-4865-8e14-7adb79cc6daa-utilities\") pod \"03215c5e-aa7f-4865-8e14-7adb79cc6daa\" (UID: \"03215c5e-aa7f-4865-8e14-7adb79cc6daa\") " Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.236731 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxklg\" (UniqueName: \"kubernetes.io/projected/03215c5e-aa7f-4865-8e14-7adb79cc6daa-kube-api-access-zxklg\") pod \"03215c5e-aa7f-4865-8e14-7adb79cc6daa\" (UID: \"03215c5e-aa7f-4865-8e14-7adb79cc6daa\") " Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.236919 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03215c5e-aa7f-4865-8e14-7adb79cc6daa-catalog-content\") pod \"03215c5e-aa7f-4865-8e14-7adb79cc6daa\" (UID: \"03215c5e-aa7f-4865-8e14-7adb79cc6daa\") " Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.237911 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03215c5e-aa7f-4865-8e14-7adb79cc6daa-utilities" (OuterVolumeSpecName: "utilities") pod "03215c5e-aa7f-4865-8e14-7adb79cc6daa" (UID: "03215c5e-aa7f-4865-8e14-7adb79cc6daa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.242810 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03215c5e-aa7f-4865-8e14-7adb79cc6daa-kube-api-access-zxklg" (OuterVolumeSpecName: "kube-api-access-zxklg") pod "03215c5e-aa7f-4865-8e14-7adb79cc6daa" (UID: "03215c5e-aa7f-4865-8e14-7adb79cc6daa"). InnerVolumeSpecName "kube-api-access-zxklg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.242872 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/009f055c-d442-4b23-8f55-52a43362bbb2-kube-api-access-w7q2t" (OuterVolumeSpecName: "kube-api-access-w7q2t") pod "009f055c-d442-4b23-8f55-52a43362bbb2" (UID: "009f055c-d442-4b23-8f55-52a43362bbb2"). InnerVolumeSpecName "kube-api-access-w7q2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.308252 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03215c5e-aa7f-4865-8e14-7adb79cc6daa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03215c5e-aa7f-4865-8e14-7adb79cc6daa" (UID: "03215c5e-aa7f-4865-8e14-7adb79cc6daa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.338866 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03215c5e-aa7f-4865-8e14-7adb79cc6daa-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.338899 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7q2t\" (UniqueName: \"kubernetes.io/projected/009f055c-d442-4b23-8f55-52a43362bbb2-kube-api-access-w7q2t\") on node \"crc\" DevicePath \"\"" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.338910 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03215c5e-aa7f-4865-8e14-7adb79cc6daa-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.338919 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxklg\" (UniqueName: \"kubernetes.io/projected/03215c5e-aa7f-4865-8e14-7adb79cc6daa-kube-api-access-zxklg\") on node \"crc\" DevicePath \"\"" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.592520 4632 scope.go:117] "RemoveContainer" containerID="19adb417107921a77df964ab1bd8c8cf0029e40afcac705a66952307655b68b9" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.598273 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556632-sr4l5" event={"ID":"009f055c-d442-4b23-8f55-52a43362bbb2","Type":"ContainerDied","Data":"98b91929707278b3c3c358e0d943597f90b34c6e4534e5a39b399c489440ea9a"} Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.598315 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98b91929707278b3c3c358e0d943597f90b34c6e4534e5a39b399c489440ea9a" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.598529 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556632-sr4l5" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.613242 4632 generic.go:334] "Generic (PLEG): container finished" podID="03215c5e-aa7f-4865-8e14-7adb79cc6daa" containerID="f3faae6eab76531261d4e9489936f30a1f2265caad99779ef7b44da581399110" exitCode=0 Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.613299 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjdvf" event={"ID":"03215c5e-aa7f-4865-8e14-7adb79cc6daa","Type":"ContainerDied","Data":"f3faae6eab76531261d4e9489936f30a1f2265caad99779ef7b44da581399110"} Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.613325 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjdvf" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.613341 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjdvf" event={"ID":"03215c5e-aa7f-4865-8e14-7adb79cc6daa","Type":"ContainerDied","Data":"5bfc882deb40abae4190d943339bb6bc98f0f88a95b8725fe0fdc29c76d7cf9f"} Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.613364 4632 scope.go:117] "RemoveContainer" containerID="f3faae6eab76531261d4e9489936f30a1f2265caad99779ef7b44da581399110" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.660910 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556626-z45rd"] Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.666863 4632 scope.go:117] "RemoveContainer" containerID="81f430399eabd7810f83b975907d2a562229273cc3df439ab9b33cbfde5ddc35" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.674257 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556626-z45rd"] Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.686429 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjdvf"] Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.700899 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gjdvf"] Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.706759 4632 scope.go:117] "RemoveContainer" containerID="49ad0dc835d9a21740d64cc609bbe889c1787b45abac10c16f9550eca410cb5e" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.762790 4632 scope.go:117] "RemoveContainer" containerID="f3faae6eab76531261d4e9489936f30a1f2265caad99779ef7b44da581399110" Mar 13 10:32:06 crc kubenswrapper[4632]: E0313 10:32:06.763583 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3faae6eab76531261d4e9489936f30a1f2265caad99779ef7b44da581399110\": container with ID starting with f3faae6eab76531261d4e9489936f30a1f2265caad99779ef7b44da581399110 not found: ID does not exist" containerID="f3faae6eab76531261d4e9489936f30a1f2265caad99779ef7b44da581399110" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.763630 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3faae6eab76531261d4e9489936f30a1f2265caad99779ef7b44da581399110"} err="failed to get container status \"f3faae6eab76531261d4e9489936f30a1f2265caad99779ef7b44da581399110\": rpc error: code = NotFound desc = could not find container \"f3faae6eab76531261d4e9489936f30a1f2265caad99779ef7b44da581399110\": container with ID starting with 
f3faae6eab76531261d4e9489936f30a1f2265caad99779ef7b44da581399110 not found: ID does not exist" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.763651 4632 scope.go:117] "RemoveContainer" containerID="81f430399eabd7810f83b975907d2a562229273cc3df439ab9b33cbfde5ddc35" Mar 13 10:32:06 crc kubenswrapper[4632]: E0313 10:32:06.763980 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81f430399eabd7810f83b975907d2a562229273cc3df439ab9b33cbfde5ddc35\": container with ID starting with 81f430399eabd7810f83b975907d2a562229273cc3df439ab9b33cbfde5ddc35 not found: ID does not exist" containerID="81f430399eabd7810f83b975907d2a562229273cc3df439ab9b33cbfde5ddc35" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.764055 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81f430399eabd7810f83b975907d2a562229273cc3df439ab9b33cbfde5ddc35"} err="failed to get container status \"81f430399eabd7810f83b975907d2a562229273cc3df439ab9b33cbfde5ddc35\": rpc error: code = NotFound desc = could not find container \"81f430399eabd7810f83b975907d2a562229273cc3df439ab9b33cbfde5ddc35\": container with ID starting with 81f430399eabd7810f83b975907d2a562229273cc3df439ab9b33cbfde5ddc35 not found: ID does not exist" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.764119 4632 scope.go:117] "RemoveContainer" containerID="49ad0dc835d9a21740d64cc609bbe889c1787b45abac10c16f9550eca410cb5e" Mar 13 10:32:06 crc kubenswrapper[4632]: E0313 10:32:06.764381 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49ad0dc835d9a21740d64cc609bbe889c1787b45abac10c16f9550eca410cb5e\": container with ID starting with 49ad0dc835d9a21740d64cc609bbe889c1787b45abac10c16f9550eca410cb5e not found: ID does not exist" containerID="49ad0dc835d9a21740d64cc609bbe889c1787b45abac10c16f9550eca410cb5e" Mar 13 10:32:06 crc kubenswrapper[4632]: I0313 10:32:06.764420 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49ad0dc835d9a21740d64cc609bbe889c1787b45abac10c16f9550eca410cb5e"} err="failed to get container status \"49ad0dc835d9a21740d64cc609bbe889c1787b45abac10c16f9550eca410cb5e\": rpc error: code = NotFound desc = could not find container \"49ad0dc835d9a21740d64cc609bbe889c1787b45abac10c16f9550eca410cb5e\": container with ID starting with 49ad0dc835d9a21740d64cc609bbe889c1787b45abac10c16f9550eca410cb5e not found: ID does not exist" Mar 13 10:32:08 crc kubenswrapper[4632]: I0313 10:32:08.055168 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03215c5e-aa7f-4865-8e14-7adb79cc6daa" path="/var/lib/kubelet/pods/03215c5e-aa7f-4865-8e14-7adb79cc6daa/volumes" Mar 13 10:32:08 crc kubenswrapper[4632]: I0313 10:32:08.056551 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27" path="/var/lib/kubelet/pods/8d59f1c4-0990-423d-ab4f-ecf2d0a1ac27/volumes" Mar 13 10:32:12 crc kubenswrapper[4632]: I0313 10:32:12.668161 4632 generic.go:334] "Generic (PLEG): container finished" podID="0ea59acf-3206-492e-a7a8-bf855823d92c" containerID="2a45f7f396ff9ac0f8fe934eb95d769681cb829e23350a9d92e18b2aeedef144" exitCode=0
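
The RemoveContainer / "ContainerStatus from runtime service failed ... NotFound" pairs above are benign: the containers of community-operators-gjdvf were already removed along with the pod sandbox, so the follow-up CRI lookup fails and "DeleteContainer returned error" is logged rather than retried. The usual pattern for this kind of cleanup is to treat NotFound as success; a sketch using gRPC status codes, where removeContainer is a hypothetical stand-in for the runtime's RemoveContainer call:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// cleanup tolerates NotFound from the runtime: a container that is
// already gone counts as successfully removed.
func cleanup(id string, removeContainer func(string) error) error {
	if err := removeContainer(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Println("already gone:", id) // matches the benign errors above
			return nil
		}
		return err
	}
	return nil
}

func main() {
	// Simulated runtime that always answers NotFound, as in the log.
	notFound := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	if err := cleanup("f3faae6e", notFound); err != nil {
		panic(err)
	}
}
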
event={"ID":"0ea59acf-3206-492e-a7a8-bf855823d92c","Type":"ContainerDied","Data":"2a45f7f396ff9ac0f8fe934eb95d769681cb829e23350a9d92e18b2aeedef144"} Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.043882 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:32:14 crc kubenswrapper[4632]: E0313 10:32:14.044689 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.355147 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.498481 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-repo-setup-combined-ca-bundle\") pod \"0ea59acf-3206-492e-a7a8-bf855823d92c\" (UID: \"0ea59acf-3206-492e-a7a8-bf855823d92c\") " Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.499740 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7vn8\" (UniqueName: \"kubernetes.io/projected/0ea59acf-3206-492e-a7a8-bf855823d92c-kube-api-access-w7vn8\") pod \"0ea59acf-3206-492e-a7a8-bf855823d92c\" (UID: \"0ea59acf-3206-492e-a7a8-bf855823d92c\") " Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.500336 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-inventory\") pod \"0ea59acf-3206-492e-a7a8-bf855823d92c\" (UID: \"0ea59acf-3206-492e-a7a8-bf855823d92c\") " Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.500585 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-ssh-key-openstack-edpm-ipam\") pod \"0ea59acf-3206-492e-a7a8-bf855823d92c\" (UID: \"0ea59acf-3206-492e-a7a8-bf855823d92c\") " Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.505873 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ea59acf-3206-492e-a7a8-bf855823d92c-kube-api-access-w7vn8" (OuterVolumeSpecName: "kube-api-access-w7vn8") pod "0ea59acf-3206-492e-a7a8-bf855823d92c" (UID: "0ea59acf-3206-492e-a7a8-bf855823d92c"). InnerVolumeSpecName "kube-api-access-w7vn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.511284 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0ea59acf-3206-492e-a7a8-bf855823d92c" (UID: "0ea59acf-3206-492e-a7a8-bf855823d92c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.529638 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0ea59acf-3206-492e-a7a8-bf855823d92c" (UID: "0ea59acf-3206-492e-a7a8-bf855823d92c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.545146 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-inventory" (OuterVolumeSpecName: "inventory") pod "0ea59acf-3206-492e-a7a8-bf855823d92c" (UID: "0ea59acf-3206-492e-a7a8-bf855823d92c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.604343 4632 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.604394 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7vn8\" (UniqueName: \"kubernetes.io/projected/0ea59acf-3206-492e-a7a8-bf855823d92c-kube-api-access-w7vn8\") on node \"crc\" DevicePath \"\"" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.604409 4632 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.604422 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ea59acf-3206-492e-a7a8-bf855823d92c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.688811 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" event={"ID":"0ea59acf-3206-492e-a7a8-bf855823d92c","Type":"ContainerDied","Data":"3b44df298dbf44fed38d413586ce5557104e682a2fdc9a900e9bbae85c8951c4"} Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.688878 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b44df298dbf44fed38d413586ce5557104e682a2fdc9a900e9bbae85c8951c4" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.688836 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.792290 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h"] Mar 13 10:32:14 crc kubenswrapper[4632]: E0313 10:32:14.792741 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03215c5e-aa7f-4865-8e14-7adb79cc6daa" containerName="registry-server" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.792763 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="03215c5e-aa7f-4865-8e14-7adb79cc6daa" containerName="registry-server" Mar 13 10:32:14 crc kubenswrapper[4632]: E0313 10:32:14.792774 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="009f055c-d442-4b23-8f55-52a43362bbb2" containerName="oc" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.792781 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="009f055c-d442-4b23-8f55-52a43362bbb2" containerName="oc" Mar 13 10:32:14 crc kubenswrapper[4632]: E0313 10:32:14.792816 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea59acf-3206-492e-a7a8-bf855823d92c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.795079 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea59acf-3206-492e-a7a8-bf855823d92c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 13 10:32:14 crc kubenswrapper[4632]: E0313 10:32:14.795104 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03215c5e-aa7f-4865-8e14-7adb79cc6daa" containerName="extract-content" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.795113 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="03215c5e-aa7f-4865-8e14-7adb79cc6daa" containerName="extract-content" Mar 13 10:32:14 crc kubenswrapper[4632]: E0313 10:32:14.795127 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03215c5e-aa7f-4865-8e14-7adb79cc6daa" containerName="extract-utilities" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.795136 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="03215c5e-aa7f-4865-8e14-7adb79cc6daa" containerName="extract-utilities" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.795394 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea59acf-3206-492e-a7a8-bf855823d92c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.795434 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="03215c5e-aa7f-4865-8e14-7adb79cc6daa" containerName="registry-server" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.795447 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="009f055c-d442-4b23-8f55-52a43362bbb2" containerName="oc" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.796207 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.798824 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.799015 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.799382 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qrzsx" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.799502 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.847393 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h"] Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.908974 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gncwc\" (UniqueName: \"kubernetes.io/projected/1dc9191f-32b9-45b9-b49f-fd704075f0a5-kube-api-access-gncwc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5s64h\" (UID: \"1dc9191f-32b9-45b9-b49f-fd704075f0a5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.909047 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dc9191f-32b9-45b9-b49f-fd704075f0a5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5s64h\" (UID: \"1dc9191f-32b9-45b9-b49f-fd704075f0a5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" Mar 13 10:32:14 crc kubenswrapper[4632]: I0313 10:32:14.909071 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1dc9191f-32b9-45b9-b49f-fd704075f0a5-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5s64h\" (UID: \"1dc9191f-32b9-45b9-b49f-fd704075f0a5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" Mar 13 10:32:15 crc kubenswrapper[4632]: I0313 10:32:15.011111 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gncwc\" (UniqueName: \"kubernetes.io/projected/1dc9191f-32b9-45b9-b49f-fd704075f0a5-kube-api-access-gncwc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5s64h\" (UID: \"1dc9191f-32b9-45b9-b49f-fd704075f0a5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" Mar 13 10:32:15 crc kubenswrapper[4632]: I0313 10:32:15.011188 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dc9191f-32b9-45b9-b49f-fd704075f0a5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5s64h\" (UID: \"1dc9191f-32b9-45b9-b49f-fd704075f0a5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" Mar 13 10:32:15 crc kubenswrapper[4632]: I0313 10:32:15.011208 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1dc9191f-32b9-45b9-b49f-fd704075f0a5-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-5s64h\" (UID: \"1dc9191f-32b9-45b9-b49f-fd704075f0a5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" Mar 13 10:32:15 crc kubenswrapper[4632]: I0313 10:32:15.016882 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dc9191f-32b9-45b9-b49f-fd704075f0a5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5s64h\" (UID: \"1dc9191f-32b9-45b9-b49f-fd704075f0a5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" Mar 13 10:32:15 crc kubenswrapper[4632]: I0313 10:32:15.023035 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1dc9191f-32b9-45b9-b49f-fd704075f0a5-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5s64h\" (UID: \"1dc9191f-32b9-45b9-b49f-fd704075f0a5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" Mar 13 10:32:15 crc kubenswrapper[4632]: I0313 10:32:15.029231 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gncwc\" (UniqueName: \"kubernetes.io/projected/1dc9191f-32b9-45b9-b49f-fd704075f0a5-kube-api-access-gncwc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5s64h\" (UID: \"1dc9191f-32b9-45b9-b49f-fd704075f0a5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" Mar 13 10:32:15 crc kubenswrapper[4632]: I0313 10:32:15.112507 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" Mar 13 10:32:15 crc kubenswrapper[4632]: I0313 10:32:15.682345 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h"] Mar 13 10:32:16 crc kubenswrapper[4632]: I0313 10:32:16.718130 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" event={"ID":"1dc9191f-32b9-45b9-b49f-fd704075f0a5","Type":"ContainerStarted","Data":"77d6861fd1b459222aac820c3526b629eeb8d651fa3c13e5ff7bf6c45b935373"} Mar 13 10:32:16 crc kubenswrapper[4632]: I0313 10:32:16.718405 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" event={"ID":"1dc9191f-32b9-45b9-b49f-fd704075f0a5","Type":"ContainerStarted","Data":"7daf0ba28488eb10767fe67cb2c89de198570e079f99b96b008361e72e107851"} Mar 13 10:32:16 crc kubenswrapper[4632]: I0313 10:32:16.743388 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" podStartSLOduration=2.324840511 podStartE2EDuration="2.743371536s" podCreationTimestamp="2026-03-13 10:32:14 +0000 UTC" firstStartedPulling="2026-03-13 10:32:15.702879262 +0000 UTC m=+1709.725409395" lastFinishedPulling="2026-03-13 10:32:16.121410287 +0000 UTC m=+1710.143940420" observedRunningTime="2026-03-13 10:32:16.73903799 +0000 UTC m=+1710.761568123" watchObservedRunningTime="2026-03-13 10:32:16.743371536 +0000 UTC m=+1710.765901669" Mar 13 10:32:19 crc kubenswrapper[4632]: I0313 10:32:19.754352 4632 generic.go:334] "Generic (PLEG): container finished" podID="1dc9191f-32b9-45b9-b49f-fd704075f0a5" containerID="77d6861fd1b459222aac820c3526b629eeb8d651fa3c13e5ff7bf6c45b935373" exitCode=0 Mar 13 10:32:19 crc kubenswrapper[4632]: I0313 10:32:19.754410 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" event={"ID":"1dc9191f-32b9-45b9-b49f-fd704075f0a5","Type":"ContainerDied","Data":"77d6861fd1b459222aac820c3526b629eeb8d651fa3c13e5ff7bf6c45b935373"} Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.223992 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.424015 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1dc9191f-32b9-45b9-b49f-fd704075f0a5-ssh-key-openstack-edpm-ipam\") pod \"1dc9191f-32b9-45b9-b49f-fd704075f0a5\" (UID: \"1dc9191f-32b9-45b9-b49f-fd704075f0a5\") " Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.424080 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gncwc\" (UniqueName: \"kubernetes.io/projected/1dc9191f-32b9-45b9-b49f-fd704075f0a5-kube-api-access-gncwc\") pod \"1dc9191f-32b9-45b9-b49f-fd704075f0a5\" (UID: \"1dc9191f-32b9-45b9-b49f-fd704075f0a5\") " Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.424153 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dc9191f-32b9-45b9-b49f-fd704075f0a5-inventory\") pod \"1dc9191f-32b9-45b9-b49f-fd704075f0a5\" (UID: \"1dc9191f-32b9-45b9-b49f-fd704075f0a5\") " Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.429858 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dc9191f-32b9-45b9-b49f-fd704075f0a5-kube-api-access-gncwc" (OuterVolumeSpecName: "kube-api-access-gncwc") pod "1dc9191f-32b9-45b9-b49f-fd704075f0a5" (UID: "1dc9191f-32b9-45b9-b49f-fd704075f0a5"). InnerVolumeSpecName "kube-api-access-gncwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.456122 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dc9191f-32b9-45b9-b49f-fd704075f0a5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1dc9191f-32b9-45b9-b49f-fd704075f0a5" (UID: "1dc9191f-32b9-45b9-b49f-fd704075f0a5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.457746 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dc9191f-32b9-45b9-b49f-fd704075f0a5-inventory" (OuterVolumeSpecName: "inventory") pod "1dc9191f-32b9-45b9-b49f-fd704075f0a5" (UID: "1dc9191f-32b9-45b9-b49f-fd704075f0a5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.526165 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1dc9191f-32b9-45b9-b49f-fd704075f0a5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.526415 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gncwc\" (UniqueName: \"kubernetes.io/projected/1dc9191f-32b9-45b9-b49f-fd704075f0a5-kube-api-access-gncwc\") on node \"crc\" DevicePath \"\"" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.526530 4632 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dc9191f-32b9-45b9-b49f-fd704075f0a5-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.777717 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" event={"ID":"1dc9191f-32b9-45b9-b49f-fd704075f0a5","Type":"ContainerDied","Data":"7daf0ba28488eb10767fe67cb2c89de198570e079f99b96b008361e72e107851"} Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.777803 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7daf0ba28488eb10767fe67cb2c89de198570e079f99b96b008361e72e107851" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.777767 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5s64h" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.878658 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp"] Mar 13 10:32:21 crc kubenswrapper[4632]: E0313 10:32:21.879720 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc9191f-32b9-45b9-b49f-fd704075f0a5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.879827 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc9191f-32b9-45b9-b49f-fd704075f0a5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.880227 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc9191f-32b9-45b9-b49f-fd704075f0a5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.881109 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.883341 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.883749 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.883933 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.894816 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qrzsx" Mar 13 10:32:21 crc kubenswrapper[4632]: I0313 10:32:21.897626 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp"] Mar 13 10:32:22 crc kubenswrapper[4632]: I0313 10:32:22.037022 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2z5j\" (UniqueName: \"kubernetes.io/projected/684a2658-ba02-40cf-a371-ec2a8934c0d3-kube-api-access-p2z5j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp\" (UID: \"684a2658-ba02-40cf-a371-ec2a8934c0d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" Mar 13 10:32:22 crc kubenswrapper[4632]: I0313 10:32:22.037088 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp\" (UID: \"684a2658-ba02-40cf-a371-ec2a8934c0d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" Mar 13 10:32:22 crc kubenswrapper[4632]: I0313 10:32:22.037252 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp\" (UID: \"684a2658-ba02-40cf-a371-ec2a8934c0d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" Mar 13 10:32:22 crc kubenswrapper[4632]: I0313 10:32:22.037295 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp\" (UID: \"684a2658-ba02-40cf-a371-ec2a8934c0d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" Mar 13 10:32:22 crc kubenswrapper[4632]: I0313 10:32:22.138836 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2z5j\" (UniqueName: \"kubernetes.io/projected/684a2658-ba02-40cf-a371-ec2a8934c0d3-kube-api-access-p2z5j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp\" (UID: \"684a2658-ba02-40cf-a371-ec2a8934c0d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" Mar 13 10:32:22 crc kubenswrapper[4632]: I0313 10:32:22.138908 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp\" (UID: \"684a2658-ba02-40cf-a371-ec2a8934c0d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" Mar 13 10:32:22 crc kubenswrapper[4632]: I0313 10:32:22.139108 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp\" (UID: \"684a2658-ba02-40cf-a371-ec2a8934c0d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" Mar 13 10:32:22 crc kubenswrapper[4632]: I0313 10:32:22.139171 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp\" (UID: \"684a2658-ba02-40cf-a371-ec2a8934c0d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" Mar 13 10:32:22 crc kubenswrapper[4632]: I0313 10:32:22.149865 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp\" (UID: \"684a2658-ba02-40cf-a371-ec2a8934c0d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" Mar 13 10:32:22 crc kubenswrapper[4632]: I0313 10:32:22.149864 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp\" (UID: \"684a2658-ba02-40cf-a371-ec2a8934c0d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" Mar 13 10:32:22 crc kubenswrapper[4632]: I0313 10:32:22.150582 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp\" (UID: \"684a2658-ba02-40cf-a371-ec2a8934c0d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" Mar 13 10:32:22 crc kubenswrapper[4632]: I0313 10:32:22.155751 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2z5j\" (UniqueName: \"kubernetes.io/projected/684a2658-ba02-40cf-a371-ec2a8934c0d3-kube-api-access-p2z5j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp\" (UID: \"684a2658-ba02-40cf-a371-ec2a8934c0d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" Mar 13 10:32:22 crc kubenswrapper[4632]: I0313 10:32:22.197202 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" Mar 13 10:32:22 crc kubenswrapper[4632]: I0313 10:32:22.757921 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp"] Mar 13 10:32:22 crc kubenswrapper[4632]: I0313 10:32:22.787826 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" event={"ID":"684a2658-ba02-40cf-a371-ec2a8934c0d3","Type":"ContainerStarted","Data":"7e733b330c11a117b4bd0ac4f5dc54f5cdcd79d6738005ec9435833371e5bb1e"} Mar 13 10:32:23 crc kubenswrapper[4632]: I0313 10:32:23.799237 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" event={"ID":"684a2658-ba02-40cf-a371-ec2a8934c0d3","Type":"ContainerStarted","Data":"2a49e9be958588623abc6e501e323438380932e75041c2f3f5b099060c297811"} Mar 13 10:32:23 crc kubenswrapper[4632]: I0313 10:32:23.825022 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" podStartSLOduration=2.364078684 podStartE2EDuration="2.825002681s" podCreationTimestamp="2026-03-13 10:32:21 +0000 UTC" firstStartedPulling="2026-03-13 10:32:22.769561559 +0000 UTC m=+1716.792091692" lastFinishedPulling="2026-03-13 10:32:23.230485566 +0000 UTC m=+1717.253015689" observedRunningTime="2026-03-13 10:32:23.817119092 +0000 UTC m=+1717.839649225" watchObservedRunningTime="2026-03-13 10:32:23.825002681 +0000 UTC m=+1717.847532814" Mar 13 10:32:25 crc kubenswrapper[4632]: I0313 10:32:25.044714 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:32:25 crc kubenswrapper[4632]: E0313 10:32:25.045028 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:32:38 crc kubenswrapper[4632]: I0313 10:32:38.054237 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:32:38 crc kubenswrapper[4632]: E0313 10:32:38.055063 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:32:49 crc kubenswrapper[4632]: I0313 10:32:49.045171 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:32:49 crc kubenswrapper[4632]: E0313 10:32:49.046263 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:33:02 crc kubenswrapper[4632]: I0313 10:33:02.044810 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:33:02 crc kubenswrapper[4632]: E0313 10:33:02.045504 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:33:06 crc kubenswrapper[4632]: I0313 10:33:06.740819 4632 scope.go:117] "RemoveContainer" containerID="e8fc7f9526396e3f4333f93ccef86f72aee3214939c63a5e8145c990bbf9d938" Mar 13 10:33:15 crc kubenswrapper[4632]: I0313 10:33:15.044933 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:33:15 crc kubenswrapper[4632]: E0313 10:33:15.045731 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:33:29 crc kubenswrapper[4632]: I0313 10:33:29.044470 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:33:29 crc kubenswrapper[4632]: E0313 10:33:29.045236 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:33:41 crc kubenswrapper[4632]: I0313 10:33:41.045988 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:33:41 crc kubenswrapper[4632]: E0313 10:33:41.046853 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:33:54 crc kubenswrapper[4632]: I0313 10:33:54.044024 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:33:54 crc kubenswrapper[4632]: E0313 10:33:54.045006 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:34:00 crc kubenswrapper[4632]: I0313 10:34:00.149140 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556634-6n98g"] Mar 13 10:34:00 crc kubenswrapper[4632]: I0313 10:34:00.152990 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556634-6n98g" Mar 13 10:34:00 crc kubenswrapper[4632]: I0313 10:34:00.155905 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:34:00 crc kubenswrapper[4632]: I0313 10:34:00.155919 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:34:00 crc kubenswrapper[4632]: I0313 10:34:00.157296 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:34:00 crc kubenswrapper[4632]: I0313 10:34:00.162914 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556634-6n98g"] Mar 13 10:34:00 crc kubenswrapper[4632]: I0313 10:34:00.297716 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh7l9\" (UniqueName: \"kubernetes.io/projected/155ba738-4ba0-424a-a1d7-067786728969-kube-api-access-mh7l9\") pod \"auto-csr-approver-29556634-6n98g\" (UID: \"155ba738-4ba0-424a-a1d7-067786728969\") " pod="openshift-infra/auto-csr-approver-29556634-6n98g" Mar 13 10:34:00 crc kubenswrapper[4632]: I0313 10:34:00.400282 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh7l9\" (UniqueName: \"kubernetes.io/projected/155ba738-4ba0-424a-a1d7-067786728969-kube-api-access-mh7l9\") pod \"auto-csr-approver-29556634-6n98g\" (UID: \"155ba738-4ba0-424a-a1d7-067786728969\") " pod="openshift-infra/auto-csr-approver-29556634-6n98g" Mar 13 10:34:00 crc kubenswrapper[4632]: I0313 10:34:00.421404 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh7l9\" (UniqueName: \"kubernetes.io/projected/155ba738-4ba0-424a-a1d7-067786728969-kube-api-access-mh7l9\") pod \"auto-csr-approver-29556634-6n98g\" (UID: \"155ba738-4ba0-424a-a1d7-067786728969\") " pod="openshift-infra/auto-csr-approver-29556634-6n98g" Mar 13 10:34:00 crc kubenswrapper[4632]: I0313 10:34:00.485019 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556634-6n98g" Mar 13 10:34:01 crc kubenswrapper[4632]: I0313 10:34:01.020358 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556634-6n98g"] Mar 13 10:34:01 crc kubenswrapper[4632]: I0313 10:34:01.769761 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556634-6n98g" event={"ID":"155ba738-4ba0-424a-a1d7-067786728969","Type":"ContainerStarted","Data":"162ef96ec7c09d383ed757ecece575f1df9b067804234a90b5649e29f280fcfd"} Mar 13 10:34:03 crc kubenswrapper[4632]: I0313 10:34:03.788576 4632 generic.go:334] "Generic (PLEG): container finished" podID="155ba738-4ba0-424a-a1d7-067786728969" containerID="f95b291d052d44a477db7fca5558efb7e90f20270d66ae208043b37111d582be" exitCode=0 Mar 13 10:34:03 crc kubenswrapper[4632]: I0313 10:34:03.788794 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556634-6n98g" event={"ID":"155ba738-4ba0-424a-a1d7-067786728969","Type":"ContainerDied","Data":"f95b291d052d44a477db7fca5558efb7e90f20270d66ae208043b37111d582be"} Mar 13 10:34:05 crc kubenswrapper[4632]: I0313 10:34:05.045613 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:34:05 crc kubenswrapper[4632]: E0313 10:34:05.046229 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:34:05 crc kubenswrapper[4632]: I0313 10:34:05.159759 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556634-6n98g" Mar 13 10:34:05 crc kubenswrapper[4632]: I0313 10:34:05.296233 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh7l9\" (UniqueName: \"kubernetes.io/projected/155ba738-4ba0-424a-a1d7-067786728969-kube-api-access-mh7l9\") pod \"155ba738-4ba0-424a-a1d7-067786728969\" (UID: \"155ba738-4ba0-424a-a1d7-067786728969\") " Mar 13 10:34:05 crc kubenswrapper[4632]: I0313 10:34:05.313819 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/155ba738-4ba0-424a-a1d7-067786728969-kube-api-access-mh7l9" (OuterVolumeSpecName: "kube-api-access-mh7l9") pod "155ba738-4ba0-424a-a1d7-067786728969" (UID: "155ba738-4ba0-424a-a1d7-067786728969"). InnerVolumeSpecName "kube-api-access-mh7l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:34:05 crc kubenswrapper[4632]: I0313 10:34:05.398835 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh7l9\" (UniqueName: \"kubernetes.io/projected/155ba738-4ba0-424a-a1d7-067786728969-kube-api-access-mh7l9\") on node \"crc\" DevicePath \"\"" Mar 13 10:34:05 crc kubenswrapper[4632]: I0313 10:34:05.813805 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556634-6n98g" event={"ID":"155ba738-4ba0-424a-a1d7-067786728969","Type":"ContainerDied","Data":"162ef96ec7c09d383ed757ecece575f1df9b067804234a90b5649e29f280fcfd"} Mar 13 10:34:05 crc kubenswrapper[4632]: I0313 10:34:05.813844 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="162ef96ec7c09d383ed757ecece575f1df9b067804234a90b5649e29f280fcfd" Mar 13 10:34:05 crc kubenswrapper[4632]: I0313 10:34:05.813884 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556634-6n98g" Mar 13 10:34:06 crc kubenswrapper[4632]: I0313 10:34:06.250293 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556628-479rr"] Mar 13 10:34:06 crc kubenswrapper[4632]: I0313 10:34:06.258396 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556628-479rr"] Mar 13 10:34:08 crc kubenswrapper[4632]: I0313 10:34:08.066833 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="658f9ba3-69b7-4d2d-8258-bb7bdf272398" path="/var/lib/kubelet/pods/658f9ba3-69b7-4d2d-8258-bb7bdf272398/volumes" Mar 13 10:34:18 crc kubenswrapper[4632]: I0313 10:34:18.051989 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:34:18 crc kubenswrapper[4632]: E0313 10:34:18.052812 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:34:20 crc kubenswrapper[4632]: I0313 10:34:20.062102 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-bfb6b"] Mar 13 10:34:20 crc kubenswrapper[4632]: I0313 10:34:20.067833 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-bfb6b"] Mar 13 10:34:21 crc kubenswrapper[4632]: I0313 10:34:21.054113 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-64xvf"] Mar 13 10:34:21 crc kubenswrapper[4632]: I0313 10:34:21.070000 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a750-account-create-update-7wk26"] Mar 13 10:34:21 crc kubenswrapper[4632]: I0313 10:34:21.089241 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-7hqpw"] Mar 13 10:34:21 crc kubenswrapper[4632]: I0313 10:34:21.103252 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-ab0c-account-create-update-tr7hx"] Mar 13 10:34:21 crc kubenswrapper[4632]: I0313 10:34:21.112977 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9698-account-create-update-9kfhv"] Mar 13 10:34:21 crc 
kubenswrapper[4632]: I0313 10:34:21.123070 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-64xvf"] Mar 13 10:34:21 crc kubenswrapper[4632]: I0313 10:34:21.132440 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9698-account-create-update-9kfhv"] Mar 13 10:34:21 crc kubenswrapper[4632]: I0313 10:34:21.140619 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-7hqpw"] Mar 13 10:34:21 crc kubenswrapper[4632]: I0313 10:34:21.149013 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a750-account-create-update-7wk26"] Mar 13 10:34:21 crc kubenswrapper[4632]: I0313 10:34:21.159174 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-ab0c-account-create-update-tr7hx"] Mar 13 10:34:22 crc kubenswrapper[4632]: I0313 10:34:22.060892 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2610abab-1da4-4912-9ca7-f2aa2d7c0486" path="/var/lib/kubelet/pods/2610abab-1da4-4912-9ca7-f2aa2d7c0486/volumes" Mar 13 10:34:22 crc kubenswrapper[4632]: I0313 10:34:22.063549 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584d2818-4b22-468f-b296-bd1850c7915b" path="/var/lib/kubelet/pods/584d2818-4b22-468f-b296-bd1850c7915b/volumes" Mar 13 10:34:22 crc kubenswrapper[4632]: I0313 10:34:22.067712 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f09e2f4-4f82-4388-9b5a-a9e890d3a950" path="/var/lib/kubelet/pods/5f09e2f4-4f82-4388-9b5a-a9e890d3a950/volumes" Mar 13 10:34:22 crc kubenswrapper[4632]: I0313 10:34:22.072917 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c84aa49-2900-4a14-b81b-bb03e925d1b7" path="/var/lib/kubelet/pods/6c84aa49-2900-4a14-b81b-bb03e925d1b7/volumes" Mar 13 10:34:22 crc kubenswrapper[4632]: I0313 10:34:22.075510 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e353045-e09b-4cd2-b659-1954485ec8db" path="/var/lib/kubelet/pods/8e353045-e09b-4cd2-b659-1954485ec8db/volumes" Mar 13 10:34:22 crc kubenswrapper[4632]: I0313 10:34:22.077457 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f6b362-7670-4867-b8fa-1f4c6170389f" path="/var/lib/kubelet/pods/c4f6b362-7670-4867-b8fa-1f4c6170389f/volumes" Mar 13 10:34:25 crc kubenswrapper[4632]: I0313 10:34:25.052374 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lrjmj"] Mar 13 10:34:25 crc kubenswrapper[4632]: I0313 10:34:25.066991 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lrjmj"] Mar 13 10:34:26 crc kubenswrapper[4632]: I0313 10:34:26.060215 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d670715-74f3-46a6-974c-b6953af9fdb7" path="/var/lib/kubelet/pods/2d670715-74f3-46a6-974c-b6953af9fdb7/volumes" Mar 13 10:34:29 crc kubenswrapper[4632]: I0313 10:34:29.044804 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:34:29 crc kubenswrapper[4632]: E0313 10:34:29.045714 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" 
podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:34:40 crc kubenswrapper[4632]: I0313 10:34:40.044827 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:34:40 crc kubenswrapper[4632]: E0313 10:34:40.045700 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:34:49 crc kubenswrapper[4632]: I0313 10:34:49.042953 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-21a0-account-create-update-4clr7"] Mar 13 10:34:49 crc kubenswrapper[4632]: I0313 10:34:49.051613 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-21a0-account-create-update-4clr7"] Mar 13 10:34:50 crc kubenswrapper[4632]: I0313 10:34:50.032871 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-dwf4b"] Mar 13 10:34:50 crc kubenswrapper[4632]: I0313 10:34:50.057499 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a03b92ea-cd2c-455d-a88e-1d57b958b138" path="/var/lib/kubelet/pods/a03b92ea-cd2c-455d-a88e-1d57b958b138/volumes" Mar 13 10:34:50 crc kubenswrapper[4632]: I0313 10:34:50.059854 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-kp87n"] Mar 13 10:34:50 crc kubenswrapper[4632]: I0313 10:34:50.059893 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-kp87n"] Mar 13 10:34:50 crc kubenswrapper[4632]: I0313 10:34:50.064695 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-dwf4b"] Mar 13 10:34:52 crc kubenswrapper[4632]: I0313 10:34:52.063631 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="239c554e-360d-4f04-86f0-b2b98974bad3" path="/var/lib/kubelet/pods/239c554e-360d-4f04-86f0-b2b98974bad3/volumes" Mar 13 10:34:52 crc kubenswrapper[4632]: I0313 10:34:52.069955 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcdcfad1-d735-4b55-ae65-0ce16bdbc79d" path="/var/lib/kubelet/pods/bcdcfad1-d735-4b55-ae65-0ce16bdbc79d/volumes" Mar 13 10:34:54 crc kubenswrapper[4632]: I0313 10:34:54.044315 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:34:54 crc kubenswrapper[4632]: E0313 10:34:54.045033 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:34:55 crc kubenswrapper[4632]: I0313 10:34:55.050536 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-da66-account-create-update-tk8pd"] Mar 13 10:34:55 crc kubenswrapper[4632]: I0313 10:34:55.062801 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-pnvjb"] Mar 13 10:34:55 crc kubenswrapper[4632]: I0313 10:34:55.074415 4632 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/heat-b742-account-create-update-gfdkg"] Mar 13 10:34:55 crc kubenswrapper[4632]: I0313 10:34:55.079349 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-g7pfc"] Mar 13 10:34:55 crc kubenswrapper[4632]: I0313 10:34:55.088531 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-pnvjb"] Mar 13 10:34:55 crc kubenswrapper[4632]: I0313 10:34:55.098128 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-g7pfc"] Mar 13 10:34:55 crc kubenswrapper[4632]: I0313 10:34:55.105081 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-da66-account-create-update-tk8pd"] Mar 13 10:34:55 crc kubenswrapper[4632]: I0313 10:34:55.113818 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4dec-account-create-update-hfnth"] Mar 13 10:34:55 crc kubenswrapper[4632]: I0313 10:34:55.126366 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-b742-account-create-update-gfdkg"] Mar 13 10:34:55 crc kubenswrapper[4632]: I0313 10:34:55.135993 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4dec-account-create-update-hfnth"] Mar 13 10:34:56 crc kubenswrapper[4632]: I0313 10:34:56.059219 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d045bc7-38b2-46f5-8cd8-cf634003bedf" path="/var/lib/kubelet/pods/0d045bc7-38b2-46f5-8cd8-cf634003bedf/volumes" Mar 13 10:34:56 crc kubenswrapper[4632]: I0313 10:34:56.062630 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47870992-2db9-46f4-84d9-fd50fb9851eb" path="/var/lib/kubelet/pods/47870992-2db9-46f4-84d9-fd50fb9851eb/volumes" Mar 13 10:34:56 crc kubenswrapper[4632]: I0313 10:34:56.064747 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa0000da-8f11-4e97-8ab5-1bcfea0ac894" path="/var/lib/kubelet/pods/aa0000da-8f11-4e97-8ab5-1bcfea0ac894/volumes" Mar 13 10:34:56 crc kubenswrapper[4632]: I0313 10:34:56.066707 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb216b07-9809-4b8b-857b-ac1192747b9c" path="/var/lib/kubelet/pods/cb216b07-9809-4b8b-857b-ac1192747b9c/volumes" Mar 13 10:34:56 crc kubenswrapper[4632]: I0313 10:34:56.069720 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee94a050-f905-44f1-a5da-16536b8cdfa7" path="/var/lib/kubelet/pods/ee94a050-f905-44f1-a5da-16536b8cdfa7/volumes" Mar 13 10:35:03 crc kubenswrapper[4632]: I0313 10:35:03.034851 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-mq9np"] Mar 13 10:35:03 crc kubenswrapper[4632]: I0313 10:35:03.047228 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-mq9np"] Mar 13 10:35:04 crc kubenswrapper[4632]: I0313 10:35:04.059447 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e824ae7d-dbbd-496b-b8b0-8b5c59a4d419" path="/var/lib/kubelet/pods/e824ae7d-dbbd-496b-b8b0-8b5c59a4d419/volumes" Mar 13 10:35:06 crc kubenswrapper[4632]: I0313 10:35:06.045526 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:35:06 crc kubenswrapper[4632]: E0313 10:35:06.046274 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:35:06 crc kubenswrapper[4632]: I0313 10:35:06.857028 4632 scope.go:117] "RemoveContainer" containerID="05358506b7b8a5602da80aa6b4985f897c7b0818d4a2f70ed84421563493ee78" Mar 13 10:35:06 crc kubenswrapper[4632]: I0313 10:35:06.885099 4632 scope.go:117] "RemoveContainer" containerID="53c212eae0f18baff6fdcd0d88db82f3271a3997b68292e7fdae508ea7808719" Mar 13 10:35:06 crc kubenswrapper[4632]: I0313 10:35:06.934853 4632 scope.go:117] "RemoveContainer" containerID="c0ed44d952b9a10d8f17f6b274d11ae8079f72b678bca2ec969eb44a14c0f18e" Mar 13 10:35:06 crc kubenswrapper[4632]: I0313 10:35:06.979225 4632 scope.go:117] "RemoveContainer" containerID="24cb5f7263654577bea6ec83ce575dcb325e9b55c8adac840790cd7a29363013" Mar 13 10:35:07 crc kubenswrapper[4632]: I0313 10:35:07.033866 4632 scope.go:117] "RemoveContainer" containerID="10bcedf0effae05b832e3793407fcf2703d9df4f7136a8211c78de6b0a99c17b" Mar 13 10:35:07 crc kubenswrapper[4632]: I0313 10:35:07.071736 4632 scope.go:117] "RemoveContainer" containerID="2fd6ae14a44d07bfe626dada3603473befbf9326ca83648414737abd80e0ce5e" Mar 13 10:35:07 crc kubenswrapper[4632]: I0313 10:35:07.117237 4632 scope.go:117] "RemoveContainer" containerID="207587c5bdcbf92f71ab5aedfecf2486734ea587705753fb95e8790e674e977d" Mar 13 10:35:07 crc kubenswrapper[4632]: I0313 10:35:07.141124 4632 scope.go:117] "RemoveContainer" containerID="dc07b5437ef3867ede6e9debff7196fad98555045e8df8dafdb4a11a7fb9808e" Mar 13 10:35:07 crc kubenswrapper[4632]: I0313 10:35:07.166250 4632 scope.go:117] "RemoveContainer" containerID="9bfb87771985986bb5edbb713355c76b663fe8b23df1170e73c42c65479f44df" Mar 13 10:35:07 crc kubenswrapper[4632]: I0313 10:35:07.188957 4632 scope.go:117] "RemoveContainer" containerID="dccd7606dfc8be32af7f5d6d0a4bf2a63f79937bfd68d93b573f727a7eb9e402" Mar 13 10:35:07 crc kubenswrapper[4632]: I0313 10:35:07.238496 4632 scope.go:117] "RemoveContainer" containerID="209b78ccf3afd3b3582d4d4eae9056be2d6d19f860431a427d43f1899c69be92" Mar 13 10:35:07 crc kubenswrapper[4632]: I0313 10:35:07.262831 4632 scope.go:117] "RemoveContainer" containerID="f13e115025698b8daa562f4881b31bb57b43cf222144f35c644ca079c94f546c" Mar 13 10:35:07 crc kubenswrapper[4632]: I0313 10:35:07.287100 4632 scope.go:117] "RemoveContainer" containerID="1ead25cb79a035bd17ce1b8995cb1c20666089312b5c266ebcbccc7e66e7c0cc" Mar 13 10:35:07 crc kubenswrapper[4632]: I0313 10:35:07.307562 4632 scope.go:117] "RemoveContainer" containerID="d92125a86d78e277913519dc023b0643c481c49ac75357c10f1cb11e638c36a3" Mar 13 10:35:07 crc kubenswrapper[4632]: I0313 10:35:07.333434 4632 scope.go:117] "RemoveContainer" containerID="40127d251d4cb7407ae0ce8a1705cd5210171fb2a750df3289fa3b2b9a54b055" Mar 13 10:35:07 crc kubenswrapper[4632]: I0313 10:35:07.354074 4632 scope.go:117] "RemoveContainer" containerID="f79fdacee095a4d2c557179a3aeeb0eea1874c7280d8a656f2dd9779cf567f1e" Mar 13 10:35:07 crc kubenswrapper[4632]: I0313 10:35:07.377152 4632 scope.go:117] "RemoveContainer" containerID="e12bb579655132c65f7afaf171587507463b77c9b73d0902f8981397a2c342cd" Mar 13 10:35:20 crc kubenswrapper[4632]: I0313 10:35:20.045380 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:35:20 crc kubenswrapper[4632]: E0313 10:35:20.046190 4632 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:35:25 crc kubenswrapper[4632]: I0313 10:35:25.606256 4632 generic.go:334] "Generic (PLEG): container finished" podID="684a2658-ba02-40cf-a371-ec2a8934c0d3" containerID="2a49e9be958588623abc6e501e323438380932e75041c2f3f5b099060c297811" exitCode=0 Mar 13 10:35:25 crc kubenswrapper[4632]: I0313 10:35:25.606405 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" event={"ID":"684a2658-ba02-40cf-a371-ec2a8934c0d3","Type":"ContainerDied","Data":"2a49e9be958588623abc6e501e323438380932e75041c2f3f5b099060c297811"} Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.076889 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.191191 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-inventory\") pod \"684a2658-ba02-40cf-a371-ec2a8934c0d3\" (UID: \"684a2658-ba02-40cf-a371-ec2a8934c0d3\") " Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.191240 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2z5j\" (UniqueName: \"kubernetes.io/projected/684a2658-ba02-40cf-a371-ec2a8934c0d3-kube-api-access-p2z5j\") pod \"684a2658-ba02-40cf-a371-ec2a8934c0d3\" (UID: \"684a2658-ba02-40cf-a371-ec2a8934c0d3\") " Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.191547 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-ssh-key-openstack-edpm-ipam\") pod \"684a2658-ba02-40cf-a371-ec2a8934c0d3\" (UID: \"684a2658-ba02-40cf-a371-ec2a8934c0d3\") " Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.191613 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-bootstrap-combined-ca-bundle\") pod \"684a2658-ba02-40cf-a371-ec2a8934c0d3\" (UID: \"684a2658-ba02-40cf-a371-ec2a8934c0d3\") " Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.196858 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/684a2658-ba02-40cf-a371-ec2a8934c0d3-kube-api-access-p2z5j" (OuterVolumeSpecName: "kube-api-access-p2z5j") pod "684a2658-ba02-40cf-a371-ec2a8934c0d3" (UID: "684a2658-ba02-40cf-a371-ec2a8934c0d3"). InnerVolumeSpecName "kube-api-access-p2z5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.199202 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "684a2658-ba02-40cf-a371-ec2a8934c0d3" (UID: "684a2658-ba02-40cf-a371-ec2a8934c0d3"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.220965 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-inventory" (OuterVolumeSpecName: "inventory") pod "684a2658-ba02-40cf-a371-ec2a8934c0d3" (UID: "684a2658-ba02-40cf-a371-ec2a8934c0d3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.221956 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "684a2658-ba02-40cf-a371-ec2a8934c0d3" (UID: "684a2658-ba02-40cf-a371-ec2a8934c0d3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.293553 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.293588 4632 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.293600 4632 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/684a2658-ba02-40cf-a371-ec2a8934c0d3-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.293609 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2z5j\" (UniqueName: \"kubernetes.io/projected/684a2658-ba02-40cf-a371-ec2a8934c0d3-kube-api-access-p2z5j\") on node \"crc\" DevicePath \"\"" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.629226 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" event={"ID":"684a2658-ba02-40cf-a371-ec2a8934c0d3","Type":"ContainerDied","Data":"7e733b330c11a117b4bd0ac4f5dc54f5cdcd79d6738005ec9435833371e5bb1e"} Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.629270 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e733b330c11a117b4bd0ac4f5dc54f5cdcd79d6738005ec9435833371e5bb1e" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.629279 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.726808 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp"] Mar 13 10:35:27 crc kubenswrapper[4632]: E0313 10:35:27.727219 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="684a2658-ba02-40cf-a371-ec2a8934c0d3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.727237 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="684a2658-ba02-40cf-a371-ec2a8934c0d3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 13 10:35:27 crc kubenswrapper[4632]: E0313 10:35:27.727265 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="155ba738-4ba0-424a-a1d7-067786728969" containerName="oc" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.727271 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="155ba738-4ba0-424a-a1d7-067786728969" containerName="oc" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.727449 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="684a2658-ba02-40cf-a371-ec2a8934c0d3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.727481 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="155ba738-4ba0-424a-a1d7-067786728969" containerName="oc" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.728267 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.731031 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.731078 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.731429 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qrzsx" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.731489 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.738106 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp"] Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.802514 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qctqc\" (UniqueName: \"kubernetes.io/projected/0d75181a-4c91-485e-8bcd-02e2aedd4d45-kube-api-access-qctqc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-754cp\" (UID: \"0d75181a-4c91-485e-8bcd-02e2aedd4d45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.802639 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d75181a-4c91-485e-8bcd-02e2aedd4d45-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-754cp\" (UID: \"0d75181a-4c91-485e-8bcd-02e2aedd4d45\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.802723 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d75181a-4c91-485e-8bcd-02e2aedd4d45-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-754cp\" (UID: \"0d75181a-4c91-485e-8bcd-02e2aedd4d45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.905030 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d75181a-4c91-485e-8bcd-02e2aedd4d45-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-754cp\" (UID: \"0d75181a-4c91-485e-8bcd-02e2aedd4d45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.905135 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d75181a-4c91-485e-8bcd-02e2aedd4d45-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-754cp\" (UID: \"0d75181a-4c91-485e-8bcd-02e2aedd4d45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.905236 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qctqc\" (UniqueName: \"kubernetes.io/projected/0d75181a-4c91-485e-8bcd-02e2aedd4d45-kube-api-access-qctqc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-754cp\" (UID: \"0d75181a-4c91-485e-8bcd-02e2aedd4d45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.933517 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d75181a-4c91-485e-8bcd-02e2aedd4d45-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-754cp\" (UID: \"0d75181a-4c91-485e-8bcd-02e2aedd4d45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.936687 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d75181a-4c91-485e-8bcd-02e2aedd4d45-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-754cp\" (UID: \"0d75181a-4c91-485e-8bcd-02e2aedd4d45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" Mar 13 10:35:27 crc kubenswrapper[4632]: I0313 10:35:27.942834 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qctqc\" (UniqueName: \"kubernetes.io/projected/0d75181a-4c91-485e-8bcd-02e2aedd4d45-kube-api-access-qctqc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-754cp\" (UID: \"0d75181a-4c91-485e-8bcd-02e2aedd4d45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" Mar 13 10:35:28 crc kubenswrapper[4632]: I0313 10:35:28.043993 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" Mar 13 10:35:28 crc kubenswrapper[4632]: I0313 10:35:28.708930 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp"] Mar 13 10:35:28 crc kubenswrapper[4632]: I0313 10:35:28.719112 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 10:35:29 crc kubenswrapper[4632]: I0313 10:35:29.652473 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" event={"ID":"0d75181a-4c91-485e-8bcd-02e2aedd4d45","Type":"ContainerStarted","Data":"bd09862a5fc80def82e97b44b7d539caee7c696bb410023ec19cde3384abb6ae"} Mar 13 10:35:29 crc kubenswrapper[4632]: I0313 10:35:29.652825 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" event={"ID":"0d75181a-4c91-485e-8bcd-02e2aedd4d45","Type":"ContainerStarted","Data":"b2534ff7153cbd23578a95b816373371c87f8d95ec5b00bff35a4aeb9a12cb51"} Mar 13 10:35:29 crc kubenswrapper[4632]: I0313 10:35:29.683527 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" podStartSLOduration=2.196335944 podStartE2EDuration="2.683503676s" podCreationTimestamp="2026-03-13 10:35:27 +0000 UTC" firstStartedPulling="2026-03-13 10:35:28.718832011 +0000 UTC m=+1902.741362144" lastFinishedPulling="2026-03-13 10:35:29.205999743 +0000 UTC m=+1903.228529876" observedRunningTime="2026-03-13 10:35:29.672305749 +0000 UTC m=+1903.694835882" watchObservedRunningTime="2026-03-13 10:35:29.683503676 +0000 UTC m=+1903.706033819" Mar 13 10:35:34 crc kubenswrapper[4632]: I0313 10:35:34.044460 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:35:34 crc kubenswrapper[4632]: E0313 10:35:34.045392 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:35:37 crc kubenswrapper[4632]: I0313 10:35:37.042963 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-l6hpb"] Mar 13 10:35:37 crc kubenswrapper[4632]: I0313 10:35:37.051736 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-l6hpb"] Mar 13 10:35:38 crc kubenswrapper[4632]: I0313 10:35:38.055893 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f1c5663-463b-45e2-b200-64e73e6d5698" path="/var/lib/kubelet/pods/4f1c5663-463b-45e2-b200-64e73e6d5698/volumes" Mar 13 10:35:48 crc kubenswrapper[4632]: I0313 10:35:48.086912 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:35:48 crc kubenswrapper[4632]: E0313 10:35:48.089278 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:35:53 crc kubenswrapper[4632]: I0313 10:35:53.043556 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-hlsnz"] Mar 13 10:35:53 crc kubenswrapper[4632]: I0313 10:35:53.054472 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-hlsnz"] Mar 13 10:35:54 crc kubenswrapper[4632]: I0313 10:35:54.087331 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7221b50-7231-4ade-917e-b10f177cb539" path="/var/lib/kubelet/pods/b7221b50-7231-4ade-917e-b10f177cb539/volumes" Mar 13 10:35:59 crc kubenswrapper[4632]: I0313 10:35:59.045259 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:35:59 crc kubenswrapper[4632]: E0313 10:35:59.046174 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:36:00 crc kubenswrapper[4632]: I0313 10:36:00.145924 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556636-zncpw"] Mar 13 10:36:00 crc kubenswrapper[4632]: I0313 10:36:00.147376 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556636-zncpw" Mar 13 10:36:00 crc kubenswrapper[4632]: I0313 10:36:00.150057 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:36:00 crc kubenswrapper[4632]: I0313 10:36:00.150835 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:36:00 crc kubenswrapper[4632]: I0313 10:36:00.151871 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:36:00 crc kubenswrapper[4632]: I0313 10:36:00.158363 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556636-zncpw"] Mar 13 10:36:00 crc kubenswrapper[4632]: I0313 10:36:00.232872 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2ksk\" (UniqueName: \"kubernetes.io/projected/7cb201b3-b479-4877-a996-58045d0720c4-kube-api-access-r2ksk\") pod \"auto-csr-approver-29556636-zncpw\" (UID: \"7cb201b3-b479-4877-a996-58045d0720c4\") " pod="openshift-infra/auto-csr-approver-29556636-zncpw" Mar 13 10:36:00 crc kubenswrapper[4632]: I0313 10:36:00.334477 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2ksk\" (UniqueName: \"kubernetes.io/projected/7cb201b3-b479-4877-a996-58045d0720c4-kube-api-access-r2ksk\") pod \"auto-csr-approver-29556636-zncpw\" (UID: \"7cb201b3-b479-4877-a996-58045d0720c4\") " pod="openshift-infra/auto-csr-approver-29556636-zncpw" Mar 13 10:36:00 crc kubenswrapper[4632]: I0313 10:36:00.365246 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2ksk\" (UniqueName: 
\"kubernetes.io/projected/7cb201b3-b479-4877-a996-58045d0720c4-kube-api-access-r2ksk\") pod \"auto-csr-approver-29556636-zncpw\" (UID: \"7cb201b3-b479-4877-a996-58045d0720c4\") " pod="openshift-infra/auto-csr-approver-29556636-zncpw" Mar 13 10:36:00 crc kubenswrapper[4632]: I0313 10:36:00.482379 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556636-zncpw" Mar 13 10:36:01 crc kubenswrapper[4632]: I0313 10:36:01.038749 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556636-zncpw"] Mar 13 10:36:01 crc kubenswrapper[4632]: I0313 10:36:01.950302 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556636-zncpw" event={"ID":"7cb201b3-b479-4877-a996-58045d0720c4","Type":"ContainerStarted","Data":"5a14d1fb4e0b470a0079f1b91f89aa951453210f6aa6c7e6131e24e29d1de5b2"} Mar 13 10:36:02 crc kubenswrapper[4632]: I0313 10:36:02.961625 4632 generic.go:334] "Generic (PLEG): container finished" podID="7cb201b3-b479-4877-a996-58045d0720c4" containerID="1d5789598fed395c0d259939fb11bb98aa8eec3b7168c00349a4a3635d4bd5ce" exitCode=0 Mar 13 10:36:02 crc kubenswrapper[4632]: I0313 10:36:02.961670 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556636-zncpw" event={"ID":"7cb201b3-b479-4877-a996-58045d0720c4","Type":"ContainerDied","Data":"1d5789598fed395c0d259939fb11bb98aa8eec3b7168c00349a4a3635d4bd5ce"} Mar 13 10:36:04 crc kubenswrapper[4632]: I0313 10:36:04.339403 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556636-zncpw" Mar 13 10:36:04 crc kubenswrapper[4632]: I0313 10:36:04.529142 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2ksk\" (UniqueName: \"kubernetes.io/projected/7cb201b3-b479-4877-a996-58045d0720c4-kube-api-access-r2ksk\") pod \"7cb201b3-b479-4877-a996-58045d0720c4\" (UID: \"7cb201b3-b479-4877-a996-58045d0720c4\") " Mar 13 10:36:04 crc kubenswrapper[4632]: I0313 10:36:04.535651 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb201b3-b479-4877-a996-58045d0720c4-kube-api-access-r2ksk" (OuterVolumeSpecName: "kube-api-access-r2ksk") pod "7cb201b3-b479-4877-a996-58045d0720c4" (UID: "7cb201b3-b479-4877-a996-58045d0720c4"). InnerVolumeSpecName "kube-api-access-r2ksk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:36:04 crc kubenswrapper[4632]: I0313 10:36:04.631502 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2ksk\" (UniqueName: \"kubernetes.io/projected/7cb201b3-b479-4877-a996-58045d0720c4-kube-api-access-r2ksk\") on node \"crc\" DevicePath \"\"" Mar 13 10:36:04 crc kubenswrapper[4632]: I0313 10:36:04.985202 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556636-zncpw" event={"ID":"7cb201b3-b479-4877-a996-58045d0720c4","Type":"ContainerDied","Data":"5a14d1fb4e0b470a0079f1b91f89aa951453210f6aa6c7e6131e24e29d1de5b2"} Mar 13 10:36:04 crc kubenswrapper[4632]: I0313 10:36:04.985335 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a14d1fb4e0b470a0079f1b91f89aa951453210f6aa6c7e6131e24e29d1de5b2" Mar 13 10:36:04 crc kubenswrapper[4632]: I0313 10:36:04.985819 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556636-zncpw" Mar 13 10:36:05 crc kubenswrapper[4632]: I0313 10:36:05.413620 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556630-kxrkn"] Mar 13 10:36:05 crc kubenswrapper[4632]: I0313 10:36:05.422970 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556630-kxrkn"] Mar 13 10:36:06 crc kubenswrapper[4632]: I0313 10:36:06.057163 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e" path="/var/lib/kubelet/pods/b0ccb00a-40ce-4b3d-86e8-8f87354c1e8e/volumes" Mar 13 10:36:07 crc kubenswrapper[4632]: I0313 10:36:07.685542 4632 scope.go:117] "RemoveContainer" containerID="bf8d93edd68f1cf79021467ff9910419baf75397a4140fb3d25bca7f97abbf70" Mar 13 10:36:07 crc kubenswrapper[4632]: I0313 10:36:07.711928 4632 scope.go:117] "RemoveContainer" containerID="4df2156f6fe32fab45f05d256a8ec2adb23f786a2989c939b92b996a496f122f" Mar 13 10:36:07 crc kubenswrapper[4632]: I0313 10:36:07.761419 4632 scope.go:117] "RemoveContainer" containerID="e604b5ae6ce92dde6f33a140a99a7c7d5949aebd7f4821ef087f38b50a0e872b" Mar 13 10:36:07 crc kubenswrapper[4632]: I0313 10:36:07.787065 4632 scope.go:117] "RemoveContainer" containerID="a4f9bd4f877455829b998ee69c6d5f9dd7fb999a6d06fe2960e4af1bfddc1eb0" Mar 13 10:36:07 crc kubenswrapper[4632]: I0313 10:36:07.826240 4632 scope.go:117] "RemoveContainer" containerID="dd1843e80da062d2b847859e60f624eed6f5f23e9e94519edc79cfc924e74d60" Mar 13 10:36:12 crc kubenswrapper[4632]: I0313 10:36:12.045174 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:36:13 crc kubenswrapper[4632]: I0313 10:36:13.082224 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"2bb4e222f4f89a1d4e4bebc809fc60cc762d7ea9b6811f4bcc9cb78c179cd0bd"} Mar 13 10:36:14 crc kubenswrapper[4632]: I0313 10:36:14.035584 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-htnd9"] Mar 13 10:36:14 crc kubenswrapper[4632]: I0313 10:36:14.060267 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-htnd9"] Mar 13 10:36:16 crc kubenswrapper[4632]: I0313 10:36:16.056303 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e92afa62-9c75-4e0e-92f4-76e57328d7a0" path="/var/lib/kubelet/pods/e92afa62-9c75-4e0e-92f4-76e57328d7a0/volumes" Mar 13 10:36:24 crc kubenswrapper[4632]: I0313 10:36:24.072798 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-zdgpw"] Mar 13 10:36:24 crc kubenswrapper[4632]: I0313 10:36:24.085920 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-x8tq8"] Mar 13 10:36:24 crc kubenswrapper[4632]: I0313 10:36:24.096080 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-7fvlk"] Mar 13 10:36:24 crc kubenswrapper[4632]: I0313 10:36:24.109163 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-7fvlk"] Mar 13 10:36:24 crc kubenswrapper[4632]: I0313 10:36:24.119690 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-zdgpw"] Mar 13 10:36:24 crc kubenswrapper[4632]: I0313 10:36:24.129074 4632 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/keystone-bootstrap-x8tq8"] Mar 13 10:36:26 crc kubenswrapper[4632]: I0313 10:36:26.059414 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="418cb883-abd1-46b4-957f-0a40f3e62297" path="/var/lib/kubelet/pods/418cb883-abd1-46b4-957f-0a40f3e62297/volumes" Mar 13 10:36:26 crc kubenswrapper[4632]: I0313 10:36:26.060653 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d722ddd7-e65d-44f7-a02d-18ddf126ccf5" path="/var/lib/kubelet/pods/d722ddd7-e65d-44f7-a02d-18ddf126ccf5/volumes" Mar 13 10:36:26 crc kubenswrapper[4632]: I0313 10:36:26.064180 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8d0f662-d180-4137-8107-e465c5fb0621" path="/var/lib/kubelet/pods/d8d0f662-d180-4137-8107-e465c5fb0621/volumes" Mar 13 10:36:33 crc kubenswrapper[4632]: I0313 10:36:33.051775 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-kq8lc"] Mar 13 10:36:33 crc kubenswrapper[4632]: I0313 10:36:33.065411 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-kq8lc"] Mar 13 10:36:34 crc kubenswrapper[4632]: I0313 10:36:34.062897 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f916c05-f172-42b6-9b13-0c8d2058bfb1" path="/var/lib/kubelet/pods/8f916c05-f172-42b6-9b13-0c8d2058bfb1/volumes" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.399889 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cjx7m"] Mar 13 10:36:50 crc kubenswrapper[4632]: E0313 10:36:50.401011 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb201b3-b479-4877-a996-58045d0720c4" containerName="oc" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.401029 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb201b3-b479-4877-a996-58045d0720c4" containerName="oc" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.401289 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb201b3-b479-4877-a996-58045d0720c4" containerName="oc" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.403064 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.420310 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cjx7m"] Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.424617 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88a273fb-d2f3-477f-9c9b-807b65124f71-catalog-content\") pod \"certified-operators-cjx7m\" (UID: \"88a273fb-d2f3-477f-9c9b-807b65124f71\") " pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.424683 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw2k4\" (UniqueName: \"kubernetes.io/projected/88a273fb-d2f3-477f-9c9b-807b65124f71-kube-api-access-jw2k4\") pod \"certified-operators-cjx7m\" (UID: \"88a273fb-d2f3-477f-9c9b-807b65124f71\") " pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.424716 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88a273fb-d2f3-477f-9c9b-807b65124f71-utilities\") pod \"certified-operators-cjx7m\" (UID: \"88a273fb-d2f3-477f-9c9b-807b65124f71\") " pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.526230 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88a273fb-d2f3-477f-9c9b-807b65124f71-catalog-content\") pod \"certified-operators-cjx7m\" (UID: \"88a273fb-d2f3-477f-9c9b-807b65124f71\") " pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.526541 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw2k4\" (UniqueName: \"kubernetes.io/projected/88a273fb-d2f3-477f-9c9b-807b65124f71-kube-api-access-jw2k4\") pod \"certified-operators-cjx7m\" (UID: \"88a273fb-d2f3-477f-9c9b-807b65124f71\") " pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.526723 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88a273fb-d2f3-477f-9c9b-807b65124f71-utilities\") pod \"certified-operators-cjx7m\" (UID: \"88a273fb-d2f3-477f-9c9b-807b65124f71\") " pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.526802 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88a273fb-d2f3-477f-9c9b-807b65124f71-catalog-content\") pod \"certified-operators-cjx7m\" (UID: \"88a273fb-d2f3-477f-9c9b-807b65124f71\") " pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.527219 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88a273fb-d2f3-477f-9c9b-807b65124f71-utilities\") pod \"certified-operators-cjx7m\" (UID: \"88a273fb-d2f3-477f-9c9b-807b65124f71\") " pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.555877 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jw2k4\" (UniqueName: \"kubernetes.io/projected/88a273fb-d2f3-477f-9c9b-807b65124f71-kube-api-access-jw2k4\") pod \"certified-operators-cjx7m\" (UID: \"88a273fb-d2f3-477f-9c9b-807b65124f71\") " pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.595709 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rpx2w"] Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.598178 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.616723 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rpx2w"] Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.628490 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f03f5b-b62a-4142-9594-79c6ea30f9e2-utilities\") pod \"redhat-marketplace-rpx2w\" (UID: \"27f03f5b-b62a-4142-9594-79c6ea30f9e2\") " pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.628838 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ps6l\" (UniqueName: \"kubernetes.io/projected/27f03f5b-b62a-4142-9594-79c6ea30f9e2-kube-api-access-4ps6l\") pod \"redhat-marketplace-rpx2w\" (UID: \"27f03f5b-b62a-4142-9594-79c6ea30f9e2\") " pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.629016 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f03f5b-b62a-4142-9594-79c6ea30f9e2-catalog-content\") pod \"redhat-marketplace-rpx2w\" (UID: \"27f03f5b-b62a-4142-9594-79c6ea30f9e2\") " pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.730115 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.745308 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ps6l\" (UniqueName: \"kubernetes.io/projected/27f03f5b-b62a-4142-9594-79c6ea30f9e2-kube-api-access-4ps6l\") pod \"redhat-marketplace-rpx2w\" (UID: \"27f03f5b-b62a-4142-9594-79c6ea30f9e2\") " pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.745497 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f03f5b-b62a-4142-9594-79c6ea30f9e2-catalog-content\") pod \"redhat-marketplace-rpx2w\" (UID: \"27f03f5b-b62a-4142-9594-79c6ea30f9e2\") " pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.745605 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f03f5b-b62a-4142-9594-79c6ea30f9e2-utilities\") pod \"redhat-marketplace-rpx2w\" (UID: \"27f03f5b-b62a-4142-9594-79c6ea30f9e2\") " pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.746066 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f03f5b-b62a-4142-9594-79c6ea30f9e2-catalog-content\") pod \"redhat-marketplace-rpx2w\" (UID: \"27f03f5b-b62a-4142-9594-79c6ea30f9e2\") " pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.746214 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f03f5b-b62a-4142-9594-79c6ea30f9e2-utilities\") pod \"redhat-marketplace-rpx2w\" (UID: \"27f03f5b-b62a-4142-9594-79c6ea30f9e2\") " pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.780620 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ps6l\" (UniqueName: \"kubernetes.io/projected/27f03f5b-b62a-4142-9594-79c6ea30f9e2-kube-api-access-4ps6l\") pod \"redhat-marketplace-rpx2w\" (UID: \"27f03f5b-b62a-4142-9594-79c6ea30f9e2\") " pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:36:50 crc kubenswrapper[4632]: I0313 10:36:50.952120 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:36:51 crc kubenswrapper[4632]: I0313 10:36:51.237075 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cjx7m"] Mar 13 10:36:51 crc kubenswrapper[4632]: I0313 10:36:51.447760 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjx7m" event={"ID":"88a273fb-d2f3-477f-9c9b-807b65124f71","Type":"ContainerStarted","Data":"5ad059c559e099a30b0bde8f6cfb84fc66c8ba7e893d883c696d37b72a5f0e91"} Mar 13 10:36:51 crc kubenswrapper[4632]: I0313 10:36:51.487369 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rpx2w"] Mar 13 10:36:51 crc kubenswrapper[4632]: W0313 10:36:51.538828 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27f03f5b_b62a_4142_9594_79c6ea30f9e2.slice/crio-bd872c1ca4d35e0f9e248bc64b5943094d58c8ac175f58d64936220c31510187 WatchSource:0}: Error finding container bd872c1ca4d35e0f9e248bc64b5943094d58c8ac175f58d64936220c31510187: Status 404 returned error can't find the container with id bd872c1ca4d35e0f9e248bc64b5943094d58c8ac175f58d64936220c31510187 Mar 13 10:36:52 crc kubenswrapper[4632]: I0313 10:36:52.472576 4632 generic.go:334] "Generic (PLEG): container finished" podID="27f03f5b-b62a-4142-9594-79c6ea30f9e2" containerID="8c073b0c9c22380ad30c5fc7961ef51acdc644797f56ab38d2b96fb0fcea4cdf" exitCode=0 Mar 13 10:36:52 crc kubenswrapper[4632]: I0313 10:36:52.472903 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpx2w" event={"ID":"27f03f5b-b62a-4142-9594-79c6ea30f9e2","Type":"ContainerDied","Data":"8c073b0c9c22380ad30c5fc7961ef51acdc644797f56ab38d2b96fb0fcea4cdf"} Mar 13 10:36:52 crc kubenswrapper[4632]: I0313 10:36:52.472931 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpx2w" event={"ID":"27f03f5b-b62a-4142-9594-79c6ea30f9e2","Type":"ContainerStarted","Data":"bd872c1ca4d35e0f9e248bc64b5943094d58c8ac175f58d64936220c31510187"} Mar 13 10:36:52 crc kubenswrapper[4632]: I0313 10:36:52.476177 4632 generic.go:334] "Generic (PLEG): container finished" podID="88a273fb-d2f3-477f-9c9b-807b65124f71" containerID="7eb7750a4f5485b1996c42446b4311cb486ceb4d805041caa4edb556018bf415" exitCode=0 Mar 13 10:36:52 crc kubenswrapper[4632]: I0313 10:36:52.476218 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjx7m" event={"ID":"88a273fb-d2f3-477f-9c9b-807b65124f71","Type":"ContainerDied","Data":"7eb7750a4f5485b1996c42446b4311cb486ceb4d805041caa4edb556018bf415"} Mar 13 10:36:53 crc kubenswrapper[4632]: I0313 10:36:53.487836 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjx7m" event={"ID":"88a273fb-d2f3-477f-9c9b-807b65124f71","Type":"ContainerStarted","Data":"c0feac2c5cf2c2fe35bb95c87df88b965616af6218f4c079105180daf90200ee"} Mar 13 10:36:53 crc kubenswrapper[4632]: I0313 10:36:53.492563 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpx2w" event={"ID":"27f03f5b-b62a-4142-9594-79c6ea30f9e2","Type":"ContainerStarted","Data":"6db39b0a3dfdd27faaf913cbea3cd500e63cfb09859815521c37cd4347e70f19"} Mar 13 10:36:55 crc kubenswrapper[4632]: I0313 10:36:55.513408 4632 generic.go:334] "Generic (PLEG): container finished" 
podID="27f03f5b-b62a-4142-9594-79c6ea30f9e2" containerID="6db39b0a3dfdd27faaf913cbea3cd500e63cfb09859815521c37cd4347e70f19" exitCode=0 Mar 13 10:36:55 crc kubenswrapper[4632]: I0313 10:36:55.513532 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpx2w" event={"ID":"27f03f5b-b62a-4142-9594-79c6ea30f9e2","Type":"ContainerDied","Data":"6db39b0a3dfdd27faaf913cbea3cd500e63cfb09859815521c37cd4347e70f19"} Mar 13 10:36:56 crc kubenswrapper[4632]: I0313 10:36:56.525663 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpx2w" event={"ID":"27f03f5b-b62a-4142-9594-79c6ea30f9e2","Type":"ContainerStarted","Data":"e02267f2e4b4bd2ba62fd1e078a850349c8f601cd41e32ffd2eca3037d604627"} Mar 13 10:36:57 crc kubenswrapper[4632]: I0313 10:36:57.556310 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rpx2w" podStartSLOduration=3.907886684 podStartE2EDuration="7.556293815s" podCreationTimestamp="2026-03-13 10:36:50 +0000 UTC" firstStartedPulling="2026-03-13 10:36:52.474838153 +0000 UTC m=+1986.497368286" lastFinishedPulling="2026-03-13 10:36:56.123245284 +0000 UTC m=+1990.145775417" observedRunningTime="2026-03-13 10:36:57.550470872 +0000 UTC m=+1991.573001005" watchObservedRunningTime="2026-03-13 10:36:57.556293815 +0000 UTC m=+1991.578823948" Mar 13 10:36:58 crc kubenswrapper[4632]: I0313 10:36:58.548675 4632 generic.go:334] "Generic (PLEG): container finished" podID="88a273fb-d2f3-477f-9c9b-807b65124f71" containerID="c0feac2c5cf2c2fe35bb95c87df88b965616af6218f4c079105180daf90200ee" exitCode=0 Mar 13 10:36:58 crc kubenswrapper[4632]: I0313 10:36:58.548743 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjx7m" event={"ID":"88a273fb-d2f3-477f-9c9b-807b65124f71","Type":"ContainerDied","Data":"c0feac2c5cf2c2fe35bb95c87df88b965616af6218f4c079105180daf90200ee"} Mar 13 10:36:59 crc kubenswrapper[4632]: I0313 10:36:59.564703 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjx7m" event={"ID":"88a273fb-d2f3-477f-9c9b-807b65124f71","Type":"ContainerStarted","Data":"d70e1caa95eb14a2494c9054b381b3b987b202fd909d5550b73c7aa627b50a73"} Mar 13 10:36:59 crc kubenswrapper[4632]: I0313 10:36:59.606754 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cjx7m" podStartSLOduration=3.051961518 podStartE2EDuration="9.606713857s" podCreationTimestamp="2026-03-13 10:36:50 +0000 UTC" firstStartedPulling="2026-03-13 10:36:52.478239017 +0000 UTC m=+1986.500769150" lastFinishedPulling="2026-03-13 10:36:59.032991356 +0000 UTC m=+1993.055521489" observedRunningTime="2026-03-13 10:36:59.594524846 +0000 UTC m=+1993.617054979" watchObservedRunningTime="2026-03-13 10:36:59.606713857 +0000 UTC m=+1993.629243990" Mar 13 10:37:00 crc kubenswrapper[4632]: I0313 10:37:00.730514 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:37:00 crc kubenswrapper[4632]: I0313 10:37:00.730573 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:37:00 crc kubenswrapper[4632]: I0313 10:37:00.952953 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:37:00 crc kubenswrapper[4632]: 
I0313 10:37:00.953015 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:37:01 crc kubenswrapper[4632]: I0313 10:37:01.789197 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cjx7m" podUID="88a273fb-d2f3-477f-9c9b-807b65124f71" containerName="registry-server" probeResult="failure" output=< Mar 13 10:37:01 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:37:01 crc kubenswrapper[4632]: > Mar 13 10:37:02 crc kubenswrapper[4632]: I0313 10:37:02.014748 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rpx2w" podUID="27f03f5b-b62a-4142-9594-79c6ea30f9e2" containerName="registry-server" probeResult="failure" output=< Mar 13 10:37:02 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:37:02 crc kubenswrapper[4632]: > Mar 13 10:37:08 crc kubenswrapper[4632]: I0313 10:37:07.999655 4632 scope.go:117] "RemoveContainer" containerID="68a82ec143a93c9f66b6d5e73e70ead182bba11acadf06a0bc0700ee8971357d" Mar 13 10:37:08 crc kubenswrapper[4632]: I0313 10:37:08.056127 4632 scope.go:117] "RemoveContainer" containerID="90dfbecc999c31c0a51b0624874627a8f3c0659cb11e205820b8e9aab659a4a1" Mar 13 10:37:08 crc kubenswrapper[4632]: I0313 10:37:08.098732 4632 scope.go:117] "RemoveContainer" containerID="3b5385b113397b9418c59a941d2a27f232c7b0df4b245db65886e55380c57297" Mar 13 10:37:08 crc kubenswrapper[4632]: I0313 10:37:08.146262 4632 scope.go:117] "RemoveContainer" containerID="3672f721f5cc963fe48f19a0fe26275ae0f1cbd82fd44ed2d6b14dcbb240be1d" Mar 13 10:37:08 crc kubenswrapper[4632]: I0313 10:37:08.222322 4632 scope.go:117] "RemoveContainer" containerID="3ef3ce34ce4d2a0d8d000d31874aca20b10c953ddde87f68a0b04979e69b8bae" Mar 13 10:37:08 crc kubenswrapper[4632]: I0313 10:37:08.281309 4632 scope.go:117] "RemoveContainer" containerID="6d5ac5d7a6aab5517e4300c2e14808710d4f8cfa4977c9841f6552b262144012" Mar 13 10:37:08 crc kubenswrapper[4632]: I0313 10:37:08.332609 4632 scope.go:117] "RemoveContainer" containerID="1e8d2b5aecd08236cabb2c50425d69df7147e32b58dae758550f96994f27f434" Mar 13 10:37:10 crc kubenswrapper[4632]: I0313 10:37:10.786440 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:37:10 crc kubenswrapper[4632]: I0313 10:37:10.846535 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:37:11 crc kubenswrapper[4632]: I0313 10:37:11.003543 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:37:11 crc kubenswrapper[4632]: I0313 10:37:11.025400 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cjx7m"] Mar 13 10:37:11 crc kubenswrapper[4632]: I0313 10:37:11.060418 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:37:12 crc kubenswrapper[4632]: I0313 10:37:12.680021 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cjx7m" podUID="88a273fb-d2f3-477f-9c9b-807b65124f71" containerName="registry-server" containerID="cri-o://d70e1caa95eb14a2494c9054b381b3b987b202fd909d5550b73c7aa627b50a73" gracePeriod=2 Mar 13 
10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.174422 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.243609 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88a273fb-d2f3-477f-9c9b-807b65124f71-utilities\") pod \"88a273fb-d2f3-477f-9c9b-807b65124f71\" (UID: \"88a273fb-d2f3-477f-9c9b-807b65124f71\") " Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.243856 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88a273fb-d2f3-477f-9c9b-807b65124f71-catalog-content\") pod \"88a273fb-d2f3-477f-9c9b-807b65124f71\" (UID: \"88a273fb-d2f3-477f-9c9b-807b65124f71\") " Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.243934 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw2k4\" (UniqueName: \"kubernetes.io/projected/88a273fb-d2f3-477f-9c9b-807b65124f71-kube-api-access-jw2k4\") pod \"88a273fb-d2f3-477f-9c9b-807b65124f71\" (UID: \"88a273fb-d2f3-477f-9c9b-807b65124f71\") " Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.244730 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88a273fb-d2f3-477f-9c9b-807b65124f71-utilities" (OuterVolumeSpecName: "utilities") pod "88a273fb-d2f3-477f-9c9b-807b65124f71" (UID: "88a273fb-d2f3-477f-9c9b-807b65124f71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.251363 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88a273fb-d2f3-477f-9c9b-807b65124f71-kube-api-access-jw2k4" (OuterVolumeSpecName: "kube-api-access-jw2k4") pod "88a273fb-d2f3-477f-9c9b-807b65124f71" (UID: "88a273fb-d2f3-477f-9c9b-807b65124f71"). InnerVolumeSpecName "kube-api-access-jw2k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.303683 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88a273fb-d2f3-477f-9c9b-807b65124f71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88a273fb-d2f3-477f-9c9b-807b65124f71" (UID: "88a273fb-d2f3-477f-9c9b-807b65124f71"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.346401 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88a273fb-d2f3-477f-9c9b-807b65124f71-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.346648 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88a273fb-d2f3-477f-9c9b-807b65124f71-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.346744 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw2k4\" (UniqueName: \"kubernetes.io/projected/88a273fb-d2f3-477f-9c9b-807b65124f71-kube-api-access-jw2k4\") on node \"crc\" DevicePath \"\"" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.425234 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rpx2w"] Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.425810 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rpx2w" podUID="27f03f5b-b62a-4142-9594-79c6ea30f9e2" containerName="registry-server" containerID="cri-o://e02267f2e4b4bd2ba62fd1e078a850349c8f601cd41e32ffd2eca3037d604627" gracePeriod=2 Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.712427 4632 generic.go:334] "Generic (PLEG): container finished" podID="88a273fb-d2f3-477f-9c9b-807b65124f71" containerID="d70e1caa95eb14a2494c9054b381b3b987b202fd909d5550b73c7aa627b50a73" exitCode=0 Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.712470 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjx7m" event={"ID":"88a273fb-d2f3-477f-9c9b-807b65124f71","Type":"ContainerDied","Data":"d70e1caa95eb14a2494c9054b381b3b987b202fd909d5550b73c7aa627b50a73"} Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.712503 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cjx7m" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.712520 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjx7m" event={"ID":"88a273fb-d2f3-477f-9c9b-807b65124f71","Type":"ContainerDied","Data":"5ad059c559e099a30b0bde8f6cfb84fc66c8ba7e893d883c696d37b72a5f0e91"} Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.712543 4632 scope.go:117] "RemoveContainer" containerID="d70e1caa95eb14a2494c9054b381b3b987b202fd909d5550b73c7aa627b50a73" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.734196 4632 generic.go:334] "Generic (PLEG): container finished" podID="27f03f5b-b62a-4142-9594-79c6ea30f9e2" containerID="e02267f2e4b4bd2ba62fd1e078a850349c8f601cd41e32ffd2eca3037d604627" exitCode=0 Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.734236 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpx2w" event={"ID":"27f03f5b-b62a-4142-9594-79c6ea30f9e2","Type":"ContainerDied","Data":"e02267f2e4b4bd2ba62fd1e078a850349c8f601cd41e32ffd2eca3037d604627"} Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.770902 4632 scope.go:117] "RemoveContainer" containerID="c0feac2c5cf2c2fe35bb95c87df88b965616af6218f4c079105180daf90200ee" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.778203 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cjx7m"] Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.791761 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cjx7m"] Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.809370 4632 scope.go:117] "RemoveContainer" containerID="7eb7750a4f5485b1996c42446b4311cb486ceb4d805041caa4edb556018bf415" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.911575 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.921565 4632 scope.go:117] "RemoveContainer" containerID="d70e1caa95eb14a2494c9054b381b3b987b202fd909d5550b73c7aa627b50a73" Mar 13 10:37:13 crc kubenswrapper[4632]: E0313 10:37:13.922411 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d70e1caa95eb14a2494c9054b381b3b987b202fd909d5550b73c7aa627b50a73\": container with ID starting with d70e1caa95eb14a2494c9054b381b3b987b202fd909d5550b73c7aa627b50a73 not found: ID does not exist" containerID="d70e1caa95eb14a2494c9054b381b3b987b202fd909d5550b73c7aa627b50a73" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.922451 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d70e1caa95eb14a2494c9054b381b3b987b202fd909d5550b73c7aa627b50a73"} err="failed to get container status \"d70e1caa95eb14a2494c9054b381b3b987b202fd909d5550b73c7aa627b50a73\": rpc error: code = NotFound desc = could not find container \"d70e1caa95eb14a2494c9054b381b3b987b202fd909d5550b73c7aa627b50a73\": container with ID starting with d70e1caa95eb14a2494c9054b381b3b987b202fd909d5550b73c7aa627b50a73 not found: ID does not exist" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.922614 4632 scope.go:117] "RemoveContainer" containerID="c0feac2c5cf2c2fe35bb95c87df88b965616af6218f4c079105180daf90200ee" Mar 13 10:37:13 crc kubenswrapper[4632]: E0313 10:37:13.923138 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0feac2c5cf2c2fe35bb95c87df88b965616af6218f4c079105180daf90200ee\": container with ID starting with c0feac2c5cf2c2fe35bb95c87df88b965616af6218f4c079105180daf90200ee not found: ID does not exist" containerID="c0feac2c5cf2c2fe35bb95c87df88b965616af6218f4c079105180daf90200ee" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.923194 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0feac2c5cf2c2fe35bb95c87df88b965616af6218f4c079105180daf90200ee"} err="failed to get container status \"c0feac2c5cf2c2fe35bb95c87df88b965616af6218f4c079105180daf90200ee\": rpc error: code = NotFound desc = could not find container \"c0feac2c5cf2c2fe35bb95c87df88b965616af6218f4c079105180daf90200ee\": container with ID starting with c0feac2c5cf2c2fe35bb95c87df88b965616af6218f4c079105180daf90200ee not found: ID does not exist" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.923228 4632 scope.go:117] "RemoveContainer" containerID="7eb7750a4f5485b1996c42446b4311cb486ceb4d805041caa4edb556018bf415" Mar 13 10:37:13 crc kubenswrapper[4632]: E0313 10:37:13.923565 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eb7750a4f5485b1996c42446b4311cb486ceb4d805041caa4edb556018bf415\": container with ID starting with 7eb7750a4f5485b1996c42446b4311cb486ceb4d805041caa4edb556018bf415 not found: ID does not exist" containerID="7eb7750a4f5485b1996c42446b4311cb486ceb4d805041caa4edb556018bf415" Mar 13 10:37:13 crc kubenswrapper[4632]: I0313 10:37:13.923595 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb7750a4f5485b1996c42446b4311cb486ceb4d805041caa4edb556018bf415"} err="failed to get container status \"7eb7750a4f5485b1996c42446b4311cb486ceb4d805041caa4edb556018bf415\": rpc error: code = 
NotFound desc = could not find container \"7eb7750a4f5485b1996c42446b4311cb486ceb4d805041caa4edb556018bf415\": container with ID starting with 7eb7750a4f5485b1996c42446b4311cb486ceb4d805041caa4edb556018bf415 not found: ID does not exist" Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.053991 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88a273fb-d2f3-477f-9c9b-807b65124f71" path="/var/lib/kubelet/pods/88a273fb-d2f3-477f-9c9b-807b65124f71/volumes" Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.065508 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f03f5b-b62a-4142-9594-79c6ea30f9e2-catalog-content\") pod \"27f03f5b-b62a-4142-9594-79c6ea30f9e2\" (UID: \"27f03f5b-b62a-4142-9594-79c6ea30f9e2\") " Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.065576 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ps6l\" (UniqueName: \"kubernetes.io/projected/27f03f5b-b62a-4142-9594-79c6ea30f9e2-kube-api-access-4ps6l\") pod \"27f03f5b-b62a-4142-9594-79c6ea30f9e2\" (UID: \"27f03f5b-b62a-4142-9594-79c6ea30f9e2\") " Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.065810 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f03f5b-b62a-4142-9594-79c6ea30f9e2-utilities\") pod \"27f03f5b-b62a-4142-9594-79c6ea30f9e2\" (UID: \"27f03f5b-b62a-4142-9594-79c6ea30f9e2\") " Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.066811 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27f03f5b-b62a-4142-9594-79c6ea30f9e2-utilities" (OuterVolumeSpecName: "utilities") pod "27f03f5b-b62a-4142-9594-79c6ea30f9e2" (UID: "27f03f5b-b62a-4142-9594-79c6ea30f9e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.071899 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f03f5b-b62a-4142-9594-79c6ea30f9e2-kube-api-access-4ps6l" (OuterVolumeSpecName: "kube-api-access-4ps6l") pod "27f03f5b-b62a-4142-9594-79c6ea30f9e2" (UID: "27f03f5b-b62a-4142-9594-79c6ea30f9e2"). InnerVolumeSpecName "kube-api-access-4ps6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.094926 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27f03f5b-b62a-4142-9594-79c6ea30f9e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27f03f5b-b62a-4142-9594-79c6ea30f9e2" (UID: "27f03f5b-b62a-4142-9594-79c6ea30f9e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.167835 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f03f5b-b62a-4142-9594-79c6ea30f9e2-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.168131 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f03f5b-b62a-4142-9594-79c6ea30f9e2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.168205 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ps6l\" (UniqueName: \"kubernetes.io/projected/27f03f5b-b62a-4142-9594-79c6ea30f9e2-kube-api-access-4ps6l\") on node \"crc\" DevicePath \"\"" Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.745493 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rpx2w" Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.745506 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpx2w" event={"ID":"27f03f5b-b62a-4142-9594-79c6ea30f9e2","Type":"ContainerDied","Data":"bd872c1ca4d35e0f9e248bc64b5943094d58c8ac175f58d64936220c31510187"} Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.746782 4632 scope.go:117] "RemoveContainer" containerID="e02267f2e4b4bd2ba62fd1e078a850349c8f601cd41e32ffd2eca3037d604627" Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.782233 4632 scope.go:117] "RemoveContainer" containerID="6db39b0a3dfdd27faaf913cbea3cd500e63cfb09859815521c37cd4347e70f19" Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.784642 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rpx2w"] Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.802637 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rpx2w"] Mar 13 10:37:14 crc kubenswrapper[4632]: I0313 10:37:14.807488 4632 scope.go:117] "RemoveContainer" containerID="8c073b0c9c22380ad30c5fc7961ef51acdc644797f56ab38d2b96fb0fcea4cdf" Mar 13 10:37:16 crc kubenswrapper[4632]: I0313 10:37:16.058036 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27f03f5b-b62a-4142-9594-79c6ea30f9e2" path="/var/lib/kubelet/pods/27f03f5b-b62a-4142-9594-79c6ea30f9e2/volumes" Mar 13 10:37:28 crc kubenswrapper[4632]: I0313 10:37:28.059267 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-fshjb"] Mar 13 10:37:28 crc kubenswrapper[4632]: I0313 10:37:28.073996 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-kswhw"] Mar 13 10:37:28 crc kubenswrapper[4632]: I0313 10:37:28.082432 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-fshjb"] Mar 13 10:37:28 crc kubenswrapper[4632]: I0313 10:37:28.090961 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-kswhw"] Mar 13 10:37:30 crc kubenswrapper[4632]: I0313 10:37:30.038048 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-2f8c-account-create-update-g4b8g"] Mar 13 10:37:30 crc kubenswrapper[4632]: I0313 10:37:30.057282 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09bd98be-9d10-4a53-8ef6-c4718b05c3f6" 
path="/var/lib/kubelet/pods/09bd98be-9d10-4a53-8ef6-c4718b05c3f6/volumes" Mar 13 10:37:30 crc kubenswrapper[4632]: I0313 10:37:30.059280 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e0fb1fc-c94a-44f0-a269-e7211c6fcfba" path="/var/lib/kubelet/pods/8e0fb1fc-c94a-44f0-a269-e7211c6fcfba/volumes" Mar 13 10:37:30 crc kubenswrapper[4632]: I0313 10:37:30.060188 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-wgv42"] Mar 13 10:37:30 crc kubenswrapper[4632]: I0313 10:37:30.064615 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f3f1-account-create-update-29g8s"] Mar 13 10:37:30 crc kubenswrapper[4632]: I0313 10:37:30.070741 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-2f8c-account-create-update-g4b8g"] Mar 13 10:37:30 crc kubenswrapper[4632]: I0313 10:37:30.078485 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f3f1-account-create-update-29g8s"] Mar 13 10:37:30 crc kubenswrapper[4632]: I0313 10:37:30.085799 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-wgv42"] Mar 13 10:37:31 crc kubenswrapper[4632]: I0313 10:37:31.036759 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-86d4-account-create-update-5c7rj"] Mar 13 10:37:31 crc kubenswrapper[4632]: I0313 10:37:31.045981 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-86d4-account-create-update-5c7rj"] Mar 13 10:37:32 crc kubenswrapper[4632]: I0313 10:37:32.055656 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="234a900d-887b-448c-8336-010107726c1e" path="/var/lib/kubelet/pods/234a900d-887b-448c-8336-010107726c1e/volumes" Mar 13 10:37:32 crc kubenswrapper[4632]: I0313 10:37:32.057574 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8462be25-a577-476d-b54a-73790a8aa189" path="/var/lib/kubelet/pods/8462be25-a577-476d-b54a-73790a8aa189/volumes" Mar 13 10:37:32 crc kubenswrapper[4632]: I0313 10:37:32.058512 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbaf5a79-1c34-4518-afb9-19703fe6c45b" path="/var/lib/kubelet/pods/bbaf5a79-1c34-4518-afb9-19703fe6c45b/volumes" Mar 13 10:37:32 crc kubenswrapper[4632]: I0313 10:37:32.059321 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0c32ed5-c3b0-45ea-99de-87c45cb1ba77" path="/var/lib/kubelet/pods/f0c32ed5-c3b0-45ea-99de-87c45cb1ba77/volumes" Mar 13 10:37:33 crc kubenswrapper[4632]: I0313 10:37:33.920301 4632 generic.go:334] "Generic (PLEG): container finished" podID="0d75181a-4c91-485e-8bcd-02e2aedd4d45" containerID="bd09862a5fc80def82e97b44b7d539caee7c696bb410023ec19cde3384abb6ae" exitCode=0 Mar 13 10:37:33 crc kubenswrapper[4632]: I0313 10:37:33.920381 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" event={"ID":"0d75181a-4c91-485e-8bcd-02e2aedd4d45","Type":"ContainerDied","Data":"bd09862a5fc80def82e97b44b7d539caee7c696bb410023ec19cde3384abb6ae"} Mar 13 10:37:35 crc kubenswrapper[4632]: I0313 10:37:35.379388 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" Mar 13 10:37:35 crc kubenswrapper[4632]: I0313 10:37:35.567665 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d75181a-4c91-485e-8bcd-02e2aedd4d45-ssh-key-openstack-edpm-ipam\") pod \"0d75181a-4c91-485e-8bcd-02e2aedd4d45\" (UID: \"0d75181a-4c91-485e-8bcd-02e2aedd4d45\") " Mar 13 10:37:35 crc kubenswrapper[4632]: I0313 10:37:35.569066 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qctqc\" (UniqueName: \"kubernetes.io/projected/0d75181a-4c91-485e-8bcd-02e2aedd4d45-kube-api-access-qctqc\") pod \"0d75181a-4c91-485e-8bcd-02e2aedd4d45\" (UID: \"0d75181a-4c91-485e-8bcd-02e2aedd4d45\") " Mar 13 10:37:35 crc kubenswrapper[4632]: I0313 10:37:35.569232 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d75181a-4c91-485e-8bcd-02e2aedd4d45-inventory\") pod \"0d75181a-4c91-485e-8bcd-02e2aedd4d45\" (UID: \"0d75181a-4c91-485e-8bcd-02e2aedd4d45\") " Mar 13 10:37:35 crc kubenswrapper[4632]: I0313 10:37:35.576019 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d75181a-4c91-485e-8bcd-02e2aedd4d45-kube-api-access-qctqc" (OuterVolumeSpecName: "kube-api-access-qctqc") pod "0d75181a-4c91-485e-8bcd-02e2aedd4d45" (UID: "0d75181a-4c91-485e-8bcd-02e2aedd4d45"). InnerVolumeSpecName "kube-api-access-qctqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:37:35 crc kubenswrapper[4632]: I0313 10:37:35.601767 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d75181a-4c91-485e-8bcd-02e2aedd4d45-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0d75181a-4c91-485e-8bcd-02e2aedd4d45" (UID: "0d75181a-4c91-485e-8bcd-02e2aedd4d45"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:37:35 crc kubenswrapper[4632]: I0313 10:37:35.612627 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d75181a-4c91-485e-8bcd-02e2aedd4d45-inventory" (OuterVolumeSpecName: "inventory") pod "0d75181a-4c91-485e-8bcd-02e2aedd4d45" (UID: "0d75181a-4c91-485e-8bcd-02e2aedd4d45"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:37:35 crc kubenswrapper[4632]: I0313 10:37:35.671386 4632 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d75181a-4c91-485e-8bcd-02e2aedd4d45-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 10:37:35 crc kubenswrapper[4632]: I0313 10:37:35.671626 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d75181a-4c91-485e-8bcd-02e2aedd4d45-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:37:35 crc kubenswrapper[4632]: I0313 10:37:35.671709 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qctqc\" (UniqueName: \"kubernetes.io/projected/0d75181a-4c91-485e-8bcd-02e2aedd4d45-kube-api-access-qctqc\") on node \"crc\" DevicePath \"\"" Mar 13 10:37:35 crc kubenswrapper[4632]: I0313 10:37:35.942653 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" event={"ID":"0d75181a-4c91-485e-8bcd-02e2aedd4d45","Type":"ContainerDied","Data":"b2534ff7153cbd23578a95b816373371c87f8d95ec5b00bff35a4aeb9a12cb51"} Mar 13 10:37:35 crc kubenswrapper[4632]: I0313 10:37:35.942694 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2534ff7153cbd23578a95b816373371c87f8d95ec5b00bff35a4aeb9a12cb51" Mar 13 10:37:35 crc kubenswrapper[4632]: I0313 10:37:35.942700 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-754cp" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.042352 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84"] Mar 13 10:37:36 crc kubenswrapper[4632]: E0313 10:37:36.043056 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f03f5b-b62a-4142-9594-79c6ea30f9e2" containerName="extract-content" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.043074 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f03f5b-b62a-4142-9594-79c6ea30f9e2" containerName="extract-content" Mar 13 10:37:36 crc kubenswrapper[4632]: E0313 10:37:36.043094 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f03f5b-b62a-4142-9594-79c6ea30f9e2" containerName="extract-utilities" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.043101 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f03f5b-b62a-4142-9594-79c6ea30f9e2" containerName="extract-utilities" Mar 13 10:37:36 crc kubenswrapper[4632]: E0313 10:37:36.043112 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a273fb-d2f3-477f-9c9b-807b65124f71" containerName="extract-utilities" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.043119 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a273fb-d2f3-477f-9c9b-807b65124f71" containerName="extract-utilities" Mar 13 10:37:36 crc kubenswrapper[4632]: E0313 10:37:36.043137 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a273fb-d2f3-477f-9c9b-807b65124f71" containerName="registry-server" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.043143 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a273fb-d2f3-477f-9c9b-807b65124f71" containerName="registry-server" Mar 13 10:37:36 crc kubenswrapper[4632]: E0313 10:37:36.043159 4632 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="0d75181a-4c91-485e-8bcd-02e2aedd4d45" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.043166 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d75181a-4c91-485e-8bcd-02e2aedd4d45" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 13 10:37:36 crc kubenswrapper[4632]: E0313 10:37:36.043176 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a273fb-d2f3-477f-9c9b-807b65124f71" containerName="extract-content" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.043182 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a273fb-d2f3-477f-9c9b-807b65124f71" containerName="extract-content" Mar 13 10:37:36 crc kubenswrapper[4632]: E0313 10:37:36.043194 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f03f5b-b62a-4142-9594-79c6ea30f9e2" containerName="registry-server" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.043200 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f03f5b-b62a-4142-9594-79c6ea30f9e2" containerName="registry-server" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.043375 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a273fb-d2f3-477f-9c9b-807b65124f71" containerName="registry-server" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.043400 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f03f5b-b62a-4142-9594-79c6ea30f9e2" containerName="registry-server" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.043411 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d75181a-4c91-485e-8bcd-02e2aedd4d45" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.044182 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.047242 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.047528 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.047686 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.047828 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qrzsx" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.075794 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84"] Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.181439 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bcd0e6df-81c2-4541-b0b5-d5c539f03451-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tpk84\" (UID: \"bcd0e6df-81c2-4541-b0b5-d5c539f03451\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.181556 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6hxn\" (UniqueName: \"kubernetes.io/projected/bcd0e6df-81c2-4541-b0b5-d5c539f03451-kube-api-access-x6hxn\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tpk84\" (UID: \"bcd0e6df-81c2-4541-b0b5-d5c539f03451\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.181649 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcd0e6df-81c2-4541-b0b5-d5c539f03451-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tpk84\" (UID: \"bcd0e6df-81c2-4541-b0b5-d5c539f03451\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.282853 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6hxn\" (UniqueName: \"kubernetes.io/projected/bcd0e6df-81c2-4541-b0b5-d5c539f03451-kube-api-access-x6hxn\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tpk84\" (UID: \"bcd0e6df-81c2-4541-b0b5-d5c539f03451\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.282992 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcd0e6df-81c2-4541-b0b5-d5c539f03451-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tpk84\" (UID: \"bcd0e6df-81c2-4541-b0b5-d5c539f03451\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.283068 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/bcd0e6df-81c2-4541-b0b5-d5c539f03451-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tpk84\" (UID: \"bcd0e6df-81c2-4541-b0b5-d5c539f03451\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.288758 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bcd0e6df-81c2-4541-b0b5-d5c539f03451-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tpk84\" (UID: \"bcd0e6df-81c2-4541-b0b5-d5c539f03451\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.289554 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcd0e6df-81c2-4541-b0b5-d5c539f03451-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tpk84\" (UID: \"bcd0e6df-81c2-4541-b0b5-d5c539f03451\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.303163 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6hxn\" (UniqueName: \"kubernetes.io/projected/bcd0e6df-81c2-4541-b0b5-d5c539f03451-kube-api-access-x6hxn\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tpk84\" (UID: \"bcd0e6df-81c2-4541-b0b5-d5c539f03451\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.371791 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" Mar 13 10:37:36 crc kubenswrapper[4632]: I0313 10:37:36.978376 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84"] Mar 13 10:37:37 crc kubenswrapper[4632]: I0313 10:37:37.958831 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" event={"ID":"bcd0e6df-81c2-4541-b0b5-d5c539f03451","Type":"ContainerStarted","Data":"da621c5c44f364510de28883c64cc52b63ab77c54d882aedd7a7119edf3055a8"} Mar 13 10:37:37 crc kubenswrapper[4632]: I0313 10:37:37.959164 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" event={"ID":"bcd0e6df-81c2-4541-b0b5-d5c539f03451","Type":"ContainerStarted","Data":"d035c3103945937690807050ad841d4bce39b39f6d04cfc72ad0d28a843a4add"} Mar 13 10:37:37 crc kubenswrapper[4632]: I0313 10:37:37.976762 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" podStartSLOduration=1.553680848 podStartE2EDuration="1.976746375s" podCreationTimestamp="2026-03-13 10:37:36 +0000 UTC" firstStartedPulling="2026-03-13 10:37:36.992029105 +0000 UTC m=+2031.014559238" lastFinishedPulling="2026-03-13 10:37:37.415094632 +0000 UTC m=+2031.437624765" observedRunningTime="2026-03-13 10:37:37.973281499 +0000 UTC m=+2031.995811652" watchObservedRunningTime="2026-03-13 10:37:37.976746375 +0000 UTC m=+2031.999276508" Mar 13 10:38:00 crc kubenswrapper[4632]: I0313 10:38:00.150904 4632 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29556638-p7mdh"] Mar 13 10:38:00 crc kubenswrapper[4632]: I0313 10:38:00.156809 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556638-p7mdh" Mar 13 10:38:00 crc kubenswrapper[4632]: I0313 10:38:00.161984 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:38:00 crc kubenswrapper[4632]: I0313 10:38:00.162076 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:38:00 crc kubenswrapper[4632]: I0313 10:38:00.162254 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:38:00 crc kubenswrapper[4632]: I0313 10:38:00.180139 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556638-p7mdh"] Mar 13 10:38:00 crc kubenswrapper[4632]: I0313 10:38:00.345845 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzq8p\" (UniqueName: \"kubernetes.io/projected/346e767a-d9dd-40e1-9ab3-2e4ec9184667-kube-api-access-tzq8p\") pod \"auto-csr-approver-29556638-p7mdh\" (UID: \"346e767a-d9dd-40e1-9ab3-2e4ec9184667\") " pod="openshift-infra/auto-csr-approver-29556638-p7mdh" Mar 13 10:38:00 crc kubenswrapper[4632]: I0313 10:38:00.449302 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzq8p\" (UniqueName: \"kubernetes.io/projected/346e767a-d9dd-40e1-9ab3-2e4ec9184667-kube-api-access-tzq8p\") pod \"auto-csr-approver-29556638-p7mdh\" (UID: \"346e767a-d9dd-40e1-9ab3-2e4ec9184667\") " pod="openshift-infra/auto-csr-approver-29556638-p7mdh" Mar 13 10:38:00 crc kubenswrapper[4632]: I0313 10:38:00.489732 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzq8p\" (UniqueName: \"kubernetes.io/projected/346e767a-d9dd-40e1-9ab3-2e4ec9184667-kube-api-access-tzq8p\") pod \"auto-csr-approver-29556638-p7mdh\" (UID: \"346e767a-d9dd-40e1-9ab3-2e4ec9184667\") " pod="openshift-infra/auto-csr-approver-29556638-p7mdh" Mar 13 10:38:00 crc kubenswrapper[4632]: I0313 10:38:00.782907 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556638-p7mdh" Mar 13 10:38:01 crc kubenswrapper[4632]: I0313 10:38:01.272743 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556638-p7mdh"] Mar 13 10:38:02 crc kubenswrapper[4632]: I0313 10:38:02.206813 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556638-p7mdh" event={"ID":"346e767a-d9dd-40e1-9ab3-2e4ec9184667","Type":"ContainerStarted","Data":"d8c187bfe15bf5c2773923c01881f05cc533e70265c96c2fe50269b7b59d185c"} Mar 13 10:38:03 crc kubenswrapper[4632]: I0313 10:38:03.217597 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556638-p7mdh" event={"ID":"346e767a-d9dd-40e1-9ab3-2e4ec9184667","Type":"ContainerStarted","Data":"c166f0a830c16b65f03aba2171bb98a995fe4121f1b92036d629fce2afd52c26"} Mar 13 10:38:03 crc kubenswrapper[4632]: I0313 10:38:03.244140 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556638-p7mdh" podStartSLOduration=2.099335833 podStartE2EDuration="3.244092849s" podCreationTimestamp="2026-03-13 10:38:00 +0000 UTC" firstStartedPulling="2026-03-13 10:38:01.279095088 +0000 UTC m=+2055.301625221" lastFinishedPulling="2026-03-13 10:38:02.423852104 +0000 UTC m=+2056.446382237" observedRunningTime="2026-03-13 10:38:03.231781835 +0000 UTC m=+2057.254311968" watchObservedRunningTime="2026-03-13 10:38:03.244092849 +0000 UTC m=+2057.266622982" Mar 13 10:38:04 crc kubenswrapper[4632]: I0313 10:38:04.228658 4632 generic.go:334] "Generic (PLEG): container finished" podID="346e767a-d9dd-40e1-9ab3-2e4ec9184667" containerID="c166f0a830c16b65f03aba2171bb98a995fe4121f1b92036d629fce2afd52c26" exitCode=0 Mar 13 10:38:04 crc kubenswrapper[4632]: I0313 10:38:04.228713 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556638-p7mdh" event={"ID":"346e767a-d9dd-40e1-9ab3-2e4ec9184667","Type":"ContainerDied","Data":"c166f0a830c16b65f03aba2171bb98a995fe4121f1b92036d629fce2afd52c26"} Mar 13 10:38:05 crc kubenswrapper[4632]: I0313 10:38:05.552924 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556638-p7mdh" Mar 13 10:38:05 crc kubenswrapper[4632]: I0313 10:38:05.585325 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzq8p\" (UniqueName: \"kubernetes.io/projected/346e767a-d9dd-40e1-9ab3-2e4ec9184667-kube-api-access-tzq8p\") pod \"346e767a-d9dd-40e1-9ab3-2e4ec9184667\" (UID: \"346e767a-d9dd-40e1-9ab3-2e4ec9184667\") " Mar 13 10:38:05 crc kubenswrapper[4632]: I0313 10:38:05.598795 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346e767a-d9dd-40e1-9ab3-2e4ec9184667-kube-api-access-tzq8p" (OuterVolumeSpecName: "kube-api-access-tzq8p") pod "346e767a-d9dd-40e1-9ab3-2e4ec9184667" (UID: "346e767a-d9dd-40e1-9ab3-2e4ec9184667"). InnerVolumeSpecName "kube-api-access-tzq8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:38:05 crc kubenswrapper[4632]: I0313 10:38:05.687806 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzq8p\" (UniqueName: \"kubernetes.io/projected/346e767a-d9dd-40e1-9ab3-2e4ec9184667-kube-api-access-tzq8p\") on node \"crc\" DevicePath \"\"" Mar 13 10:38:06 crc kubenswrapper[4632]: I0313 10:38:06.255431 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556638-p7mdh" event={"ID":"346e767a-d9dd-40e1-9ab3-2e4ec9184667","Type":"ContainerDied","Data":"d8c187bfe15bf5c2773923c01881f05cc533e70265c96c2fe50269b7b59d185c"} Mar 13 10:38:06 crc kubenswrapper[4632]: I0313 10:38:06.255506 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8c187bfe15bf5c2773923c01881f05cc533e70265c96c2fe50269b7b59d185c" Mar 13 10:38:06 crc kubenswrapper[4632]: I0313 10:38:06.256164 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556638-p7mdh" Mar 13 10:38:06 crc kubenswrapper[4632]: I0313 10:38:06.319417 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556632-sr4l5"] Mar 13 10:38:06 crc kubenswrapper[4632]: I0313 10:38:06.328658 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556632-sr4l5"] Mar 13 10:38:08 crc kubenswrapper[4632]: I0313 10:38:08.059131 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="009f055c-d442-4b23-8f55-52a43362bbb2" path="/var/lib/kubelet/pods/009f055c-d442-4b23-8f55-52a43362bbb2/volumes" Mar 13 10:38:08 crc kubenswrapper[4632]: I0313 10:38:08.490202 4632 scope.go:117] "RemoveContainer" containerID="f531fb1c9798e5386771f799aeaf5ec81a37e70faa215029f1e44845844c0b7a" Mar 13 10:38:08 crc kubenswrapper[4632]: I0313 10:38:08.522183 4632 scope.go:117] "RemoveContainer" containerID="62c66b71b16f2cd37ff478080f4c30eed65f51b807f687725f8ec89f5dd9d0dc" Mar 13 10:38:08 crc kubenswrapper[4632]: I0313 10:38:08.562684 4632 scope.go:117] "RemoveContainer" containerID="d9f2ab5e1a5be1d4939b9fe05ba3a5cdbc725953ea1e78a027cf1f61d4444ba0" Mar 13 10:38:08 crc kubenswrapper[4632]: I0313 10:38:08.614954 4632 scope.go:117] "RemoveContainer" containerID="9cee7abc6c76d73494106b5582f85b871d225f179b8f40700ad2248a8daa7c60" Mar 13 10:38:08 crc kubenswrapper[4632]: I0313 10:38:08.659235 4632 scope.go:117] "RemoveContainer" containerID="4ccfb76824c418f1c761a43ca7732c6a7a69b7b1944ea2ee35bd45c569e7d7c6" Mar 13 10:38:08 crc kubenswrapper[4632]: I0313 10:38:08.731738 4632 scope.go:117] "RemoveContainer" containerID="baa73e1779483e615256cb324392bd7ff43cccd507e79b501108b7a61007ed58" Mar 13 10:38:08 crc kubenswrapper[4632]: I0313 10:38:08.767399 4632 scope.go:117] "RemoveContainer" containerID="a73d11226d1411728675707324588174ab20222ac0a86a31f153adf5c08496b7" Mar 13 10:38:40 crc kubenswrapper[4632]: I0313 10:38:40.460804 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:38:40 crc kubenswrapper[4632]: I0313 10:38:40.461513 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:38:41 crc kubenswrapper[4632]: I0313 10:38:41.064199 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5mlm2"] Mar 13 10:38:41 crc kubenswrapper[4632]: I0313 10:38:41.082577 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5mlm2"] Mar 13 10:38:42 crc kubenswrapper[4632]: I0313 10:38:42.056354 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5de81924-9bfc-484e-8276-0216f0bbf72c" path="/var/lib/kubelet/pods/5de81924-9bfc-484e-8276-0216f0bbf72c/volumes" Mar 13 10:38:54 crc kubenswrapper[4632]: I0313 10:38:54.005708 4632 generic.go:334] "Generic (PLEG): container finished" podID="bcd0e6df-81c2-4541-b0b5-d5c539f03451" containerID="da621c5c44f364510de28883c64cc52b63ab77c54d882aedd7a7119edf3055a8" exitCode=0 Mar 13 10:38:54 crc kubenswrapper[4632]: I0313 10:38:54.005786 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" event={"ID":"bcd0e6df-81c2-4541-b0b5-d5c539f03451","Type":"ContainerDied","Data":"da621c5c44f364510de28883c64cc52b63ab77c54d882aedd7a7119edf3055a8"} Mar 13 10:38:55 crc kubenswrapper[4632]: I0313 10:38:55.562177 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" Mar 13 10:38:55 crc kubenswrapper[4632]: I0313 10:38:55.728123 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bcd0e6df-81c2-4541-b0b5-d5c539f03451-ssh-key-openstack-edpm-ipam\") pod \"bcd0e6df-81c2-4541-b0b5-d5c539f03451\" (UID: \"bcd0e6df-81c2-4541-b0b5-d5c539f03451\") " Mar 13 10:38:55 crc kubenswrapper[4632]: I0313 10:38:55.728256 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6hxn\" (UniqueName: \"kubernetes.io/projected/bcd0e6df-81c2-4541-b0b5-d5c539f03451-kube-api-access-x6hxn\") pod \"bcd0e6df-81c2-4541-b0b5-d5c539f03451\" (UID: \"bcd0e6df-81c2-4541-b0b5-d5c539f03451\") " Mar 13 10:38:55 crc kubenswrapper[4632]: I0313 10:38:55.728431 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcd0e6df-81c2-4541-b0b5-d5c539f03451-inventory\") pod \"bcd0e6df-81c2-4541-b0b5-d5c539f03451\" (UID: \"bcd0e6df-81c2-4541-b0b5-d5c539f03451\") " Mar 13 10:38:55 crc kubenswrapper[4632]: I0313 10:38:55.764252 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcd0e6df-81c2-4541-b0b5-d5c539f03451-kube-api-access-x6hxn" (OuterVolumeSpecName: "kube-api-access-x6hxn") pod "bcd0e6df-81c2-4541-b0b5-d5c539f03451" (UID: "bcd0e6df-81c2-4541-b0b5-d5c539f03451"). InnerVolumeSpecName "kube-api-access-x6hxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:38:55 crc kubenswrapper[4632]: I0313 10:38:55.810170 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcd0e6df-81c2-4541-b0b5-d5c539f03451-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bcd0e6df-81c2-4541-b0b5-d5c539f03451" (UID: "bcd0e6df-81c2-4541-b0b5-d5c539f03451"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:38:55 crc kubenswrapper[4632]: I0313 10:38:55.817131 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcd0e6df-81c2-4541-b0b5-d5c539f03451-inventory" (OuterVolumeSpecName: "inventory") pod "bcd0e6df-81c2-4541-b0b5-d5c539f03451" (UID: "bcd0e6df-81c2-4541-b0b5-d5c539f03451"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:38:55 crc kubenswrapper[4632]: I0313 10:38:55.833224 4632 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcd0e6df-81c2-4541-b0b5-d5c539f03451-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 10:38:55 crc kubenswrapper[4632]: I0313 10:38:55.833575 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bcd0e6df-81c2-4541-b0b5-d5c539f03451-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:38:55 crc kubenswrapper[4632]: I0313 10:38:55.833587 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6hxn\" (UniqueName: \"kubernetes.io/projected/bcd0e6df-81c2-4541-b0b5-d5c539f03451-kube-api-access-x6hxn\") on node \"crc\" DevicePath \"\"" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.023456 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" event={"ID":"bcd0e6df-81c2-4541-b0b5-d5c539f03451","Type":"ContainerDied","Data":"d035c3103945937690807050ad841d4bce39b39f6d04cfc72ad0d28a843a4add"} Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.023521 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d035c3103945937690807050ad841d4bce39b39f6d04cfc72ad0d28a843a4add" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.023535 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tpk84" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.190851 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg"] Mar 13 10:38:56 crc kubenswrapper[4632]: E0313 10:38:56.192406 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcd0e6df-81c2-4541-b0b5-d5c539f03451" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.192455 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd0e6df-81c2-4541-b0b5-d5c539f03451" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 13 10:38:56 crc kubenswrapper[4632]: E0313 10:38:56.192508 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346e767a-d9dd-40e1-9ab3-2e4ec9184667" containerName="oc" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.192530 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="346e767a-d9dd-40e1-9ab3-2e4ec9184667" containerName="oc" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.193263 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcd0e6df-81c2-4541-b0b5-d5c539f03451" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.193357 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="346e767a-d9dd-40e1-9ab3-2e4ec9184667" containerName="oc" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.195836 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.203167 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.203300 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qrzsx" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.203182 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.204778 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.252070 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddrbk\" (UniqueName: \"kubernetes.io/projected/a1c30ff2-4a23-4fb1-b689-59318014bf57-kube-api-access-ddrbk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg\" (UID: \"a1c30ff2-4a23-4fb1-b689-59318014bf57\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.252561 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1c30ff2-4a23-4fb1-b689-59318014bf57-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg\" (UID: \"a1c30ff2-4a23-4fb1-b689-59318014bf57\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.252971 4632 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1c30ff2-4a23-4fb1-b689-59318014bf57-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg\" (UID: \"a1c30ff2-4a23-4fb1-b689-59318014bf57\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.263956 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg"] Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.355291 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddrbk\" (UniqueName: \"kubernetes.io/projected/a1c30ff2-4a23-4fb1-b689-59318014bf57-kube-api-access-ddrbk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg\" (UID: \"a1c30ff2-4a23-4fb1-b689-59318014bf57\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.355502 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1c30ff2-4a23-4fb1-b689-59318014bf57-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg\" (UID: \"a1c30ff2-4a23-4fb1-b689-59318014bf57\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.355621 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1c30ff2-4a23-4fb1-b689-59318014bf57-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg\" (UID: \"a1c30ff2-4a23-4fb1-b689-59318014bf57\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.361869 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1c30ff2-4a23-4fb1-b689-59318014bf57-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg\" (UID: \"a1c30ff2-4a23-4fb1-b689-59318014bf57\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.362390 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1c30ff2-4a23-4fb1-b689-59318014bf57-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg\" (UID: \"a1c30ff2-4a23-4fb1-b689-59318014bf57\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.386813 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddrbk\" (UniqueName: \"kubernetes.io/projected/a1c30ff2-4a23-4fb1-b689-59318014bf57-kube-api-access-ddrbk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg\" (UID: \"a1c30ff2-4a23-4fb1-b689-59318014bf57\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" Mar 13 10:38:56 crc kubenswrapper[4632]: I0313 10:38:56.566123 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" Mar 13 10:38:57 crc kubenswrapper[4632]: I0313 10:38:57.130098 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg"] Mar 13 10:38:58 crc kubenswrapper[4632]: I0313 10:38:58.076467 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" event={"ID":"a1c30ff2-4a23-4fb1-b689-59318014bf57","Type":"ContainerStarted","Data":"d7a973a4828687613ca13c1cd91ec5768f0e35db944e95cc7f52edd24c762464"} Mar 13 10:38:58 crc kubenswrapper[4632]: I0313 10:38:58.077122 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" event={"ID":"a1c30ff2-4a23-4fb1-b689-59318014bf57","Type":"ContainerStarted","Data":"f435c3660d1b5d90d5cebceffa800b6b01daace4f9fed586d1ee0eae3bfc0830"} Mar 13 10:38:58 crc kubenswrapper[4632]: I0313 10:38:58.092293 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" podStartSLOduration=1.601623713 podStartE2EDuration="2.092271709s" podCreationTimestamp="2026-03-13 10:38:56 +0000 UTC" firstStartedPulling="2026-03-13 10:38:57.156815408 +0000 UTC m=+2111.179345541" lastFinishedPulling="2026-03-13 10:38:57.647463404 +0000 UTC m=+2111.669993537" observedRunningTime="2026-03-13 10:38:58.086979809 +0000 UTC m=+2112.109509962" watchObservedRunningTime="2026-03-13 10:38:58.092271709 +0000 UTC m=+2112.114801852" Mar 13 10:39:03 crc kubenswrapper[4632]: I0313 10:39:03.096782 4632 generic.go:334] "Generic (PLEG): container finished" podID="a1c30ff2-4a23-4fb1-b689-59318014bf57" containerID="d7a973a4828687613ca13c1cd91ec5768f0e35db944e95cc7f52edd24c762464" exitCode=0 Mar 13 10:39:03 crc kubenswrapper[4632]: I0313 10:39:03.096844 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" event={"ID":"a1c30ff2-4a23-4fb1-b689-59318014bf57","Type":"ContainerDied","Data":"d7a973a4828687613ca13c1cd91ec5768f0e35db944e95cc7f52edd24c762464"} Mar 13 10:39:04 crc kubenswrapper[4632]: I0313 10:39:04.547337 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" Mar 13 10:39:04 crc kubenswrapper[4632]: I0313 10:39:04.634608 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddrbk\" (UniqueName: \"kubernetes.io/projected/a1c30ff2-4a23-4fb1-b689-59318014bf57-kube-api-access-ddrbk\") pod \"a1c30ff2-4a23-4fb1-b689-59318014bf57\" (UID: \"a1c30ff2-4a23-4fb1-b689-59318014bf57\") " Mar 13 10:39:04 crc kubenswrapper[4632]: I0313 10:39:04.635081 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1c30ff2-4a23-4fb1-b689-59318014bf57-inventory\") pod \"a1c30ff2-4a23-4fb1-b689-59318014bf57\" (UID: \"a1c30ff2-4a23-4fb1-b689-59318014bf57\") " Mar 13 10:39:04 crc kubenswrapper[4632]: I0313 10:39:04.635180 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1c30ff2-4a23-4fb1-b689-59318014bf57-ssh-key-openstack-edpm-ipam\") pod \"a1c30ff2-4a23-4fb1-b689-59318014bf57\" (UID: \"a1c30ff2-4a23-4fb1-b689-59318014bf57\") " Mar 13 10:39:04 crc kubenswrapper[4632]: I0313 10:39:04.647741 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c30ff2-4a23-4fb1-b689-59318014bf57-kube-api-access-ddrbk" (OuterVolumeSpecName: "kube-api-access-ddrbk") pod "a1c30ff2-4a23-4fb1-b689-59318014bf57" (UID: "a1c30ff2-4a23-4fb1-b689-59318014bf57"). InnerVolumeSpecName "kube-api-access-ddrbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:39:04 crc kubenswrapper[4632]: I0313 10:39:04.672612 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1c30ff2-4a23-4fb1-b689-59318014bf57-inventory" (OuterVolumeSpecName: "inventory") pod "a1c30ff2-4a23-4fb1-b689-59318014bf57" (UID: "a1c30ff2-4a23-4fb1-b689-59318014bf57"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:39:04 crc kubenswrapper[4632]: I0313 10:39:04.673115 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1c30ff2-4a23-4fb1-b689-59318014bf57-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a1c30ff2-4a23-4fb1-b689-59318014bf57" (UID: "a1c30ff2-4a23-4fb1-b689-59318014bf57"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:39:04 crc kubenswrapper[4632]: I0313 10:39:04.737651 4632 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1c30ff2-4a23-4fb1-b689-59318014bf57-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 10:39:04 crc kubenswrapper[4632]: I0313 10:39:04.737707 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1c30ff2-4a23-4fb1-b689-59318014bf57-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:39:04 crc kubenswrapper[4632]: I0313 10:39:04.737724 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddrbk\" (UniqueName: \"kubernetes.io/projected/a1c30ff2-4a23-4fb1-b689-59318014bf57-kube-api-access-ddrbk\") on node \"crc\" DevicePath \"\"" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.121627 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" event={"ID":"a1c30ff2-4a23-4fb1-b689-59318014bf57","Type":"ContainerDied","Data":"f435c3660d1b5d90d5cebceffa800b6b01daace4f9fed586d1ee0eae3bfc0830"} Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.122204 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f435c3660d1b5d90d5cebceffa800b6b01daace4f9fed586d1ee0eae3bfc0830" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.121704 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.214618 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg"] Mar 13 10:39:05 crc kubenswrapper[4632]: E0313 10:39:05.215517 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c30ff2-4a23-4fb1-b689-59318014bf57" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.215677 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c30ff2-4a23-4fb1-b689-59318014bf57" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.216055 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c30ff2-4a23-4fb1-b689-59318014bf57" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.216962 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.221891 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.222663 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qrzsx" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.223225 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.229599 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.242803 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg"] Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.350180 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfd27\" (UniqueName: \"kubernetes.io/projected/78de7f45-2a11-4cbe-84bf-46c4307a1459-kube-api-access-qfd27\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-srdvg\" (UID: \"78de7f45-2a11-4cbe-84bf-46c4307a1459\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.350257 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78de7f45-2a11-4cbe-84bf-46c4307a1459-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-srdvg\" (UID: \"78de7f45-2a11-4cbe-84bf-46c4307a1459\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.350309 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78de7f45-2a11-4cbe-84bf-46c4307a1459-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-srdvg\" (UID: \"78de7f45-2a11-4cbe-84bf-46c4307a1459\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.452333 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfd27\" (UniqueName: \"kubernetes.io/projected/78de7f45-2a11-4cbe-84bf-46c4307a1459-kube-api-access-qfd27\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-srdvg\" (UID: \"78de7f45-2a11-4cbe-84bf-46c4307a1459\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.452778 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78de7f45-2a11-4cbe-84bf-46c4307a1459-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-srdvg\" (UID: \"78de7f45-2a11-4cbe-84bf-46c4307a1459\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.452988 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78de7f45-2a11-4cbe-84bf-46c4307a1459-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-srdvg\" (UID: \"78de7f45-2a11-4cbe-84bf-46c4307a1459\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.457903 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78de7f45-2a11-4cbe-84bf-46c4307a1459-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-srdvg\" (UID: \"78de7f45-2a11-4cbe-84bf-46c4307a1459\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.460414 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78de7f45-2a11-4cbe-84bf-46c4307a1459-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-srdvg\" (UID: \"78de7f45-2a11-4cbe-84bf-46c4307a1459\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.472542 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfd27\" (UniqueName: \"kubernetes.io/projected/78de7f45-2a11-4cbe-84bf-46c4307a1459-kube-api-access-qfd27\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-srdvg\" (UID: \"78de7f45-2a11-4cbe-84bf-46c4307a1459\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" Mar 13 10:39:05 crc kubenswrapper[4632]: I0313 10:39:05.554721 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" Mar 13 10:39:06 crc kubenswrapper[4632]: I0313 10:39:06.126732 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg"] Mar 13 10:39:07 crc kubenswrapper[4632]: I0313 10:39:07.145031 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" event={"ID":"78de7f45-2a11-4cbe-84bf-46c4307a1459","Type":"ContainerStarted","Data":"273653ee710557decd4c33917e21a560bb09b0670eb74d27ef3d7ffc75810416"} Mar 13 10:39:07 crc kubenswrapper[4632]: I0313 10:39:07.145657 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" event={"ID":"78de7f45-2a11-4cbe-84bf-46c4307a1459","Type":"ContainerStarted","Data":"3b3cfd0adf831ab35b815f5b2519338f27aa52d69c55fcd9c9ad26553b698b72"} Mar 13 10:39:07 crc kubenswrapper[4632]: I0313 10:39:07.178496 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" podStartSLOduration=1.749609712 podStartE2EDuration="2.178473064s" podCreationTimestamp="2026-03-13 10:39:05 +0000 UTC" firstStartedPulling="2026-03-13 10:39:06.14189683 +0000 UTC m=+2120.164426963" lastFinishedPulling="2026-03-13 10:39:06.570760182 +0000 UTC m=+2120.593290315" observedRunningTime="2026-03-13 10:39:07.168235612 +0000 UTC m=+2121.190765745" watchObservedRunningTime="2026-03-13 10:39:07.178473064 +0000 UTC m=+2121.201003197" Mar 13 10:39:08 crc kubenswrapper[4632]: I0313 10:39:08.986290 4632 scope.go:117] "RemoveContainer" containerID="afb05bb00debb2ea4a81d169362ff2bd38d824053184e249dbe02cc1cb10e945" Mar 13 10:39:10 crc kubenswrapper[4632]: I0313 10:39:10.460737 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon 
Mar 13 10:39:10 crc kubenswrapper[4632]: I0313 10:39:10.461157 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 10:39:18 crc kubenswrapper[4632]: I0313 10:39:18.073169 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-gwj5n"]
Mar 13 10:39:18 crc kubenswrapper[4632]: I0313 10:39:18.084206 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-gwj5n"]
Mar 13 10:39:19 crc kubenswrapper[4632]: I0313 10:39:19.033023 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9n7gj"]
Mar 13 10:39:19 crc kubenswrapper[4632]: I0313 10:39:19.041551 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9n7gj"]
Mar 13 10:39:20 crc kubenswrapper[4632]: I0313 10:39:20.058233 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcce9343-52a3-4e6d-98fd-8e66390020ac" path="/var/lib/kubelet/pods/bcce9343-52a3-4e6d-98fd-8e66390020ac/volumes"
Mar 13 10:39:20 crc kubenswrapper[4632]: I0313 10:39:20.060997 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf19672e-3284-49bc-a460-f2e629881d9b" path="/var/lib/kubelet/pods/cf19672e-3284-49bc-a460-f2e629881d9b/volumes"
Mar 13 10:39:40 crc kubenswrapper[4632]: I0313 10:39:40.460798 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 10:39:40 crc kubenswrapper[4632]: I0313 10:39:40.461478 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 10:39:40 crc kubenswrapper[4632]: I0313 10:39:40.461535 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb"
Mar 13 10:39:40 crc kubenswrapper[4632]: I0313 10:39:40.462456 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2bb4e222f4f89a1d4e4bebc809fc60cc762d7ea9b6811f4bcc9cb78c179cd0bd"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 13 10:39:40 crc kubenswrapper[4632]: I0313 10:39:40.462514 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://2bb4e222f4f89a1d4e4bebc809fc60cc762d7ea9b6811f4bcc9cb78c179cd0bd" gracePeriod=600
Mar 13 10:39:41 crc
kubenswrapper[4632]: I0313 10:39:41.462741 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="2bb4e222f4f89a1d4e4bebc809fc60cc762d7ea9b6811f4bcc9cb78c179cd0bd" exitCode=0 Mar 13 10:39:41 crc kubenswrapper[4632]: I0313 10:39:41.462841 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"2bb4e222f4f89a1d4e4bebc809fc60cc762d7ea9b6811f4bcc9cb78c179cd0bd"} Mar 13 10:39:41 crc kubenswrapper[4632]: I0313 10:39:41.463199 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20"} Mar 13 10:39:41 crc kubenswrapper[4632]: I0313 10:39:41.463232 4632 scope.go:117] "RemoveContainer" containerID="8c0ae371a519eb9db1c6eb843b2bd1981f031101e4086e8bec3fc57f1e905a6f" Mar 13 10:39:48 crc kubenswrapper[4632]: I0313 10:39:48.527801 4632 generic.go:334] "Generic (PLEG): container finished" podID="78de7f45-2a11-4cbe-84bf-46c4307a1459" containerID="273653ee710557decd4c33917e21a560bb09b0670eb74d27ef3d7ffc75810416" exitCode=0 Mar 13 10:39:48 crc kubenswrapper[4632]: I0313 10:39:48.527908 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" event={"ID":"78de7f45-2a11-4cbe-84bf-46c4307a1459","Type":"ContainerDied","Data":"273653ee710557decd4c33917e21a560bb09b0670eb74d27ef3d7ffc75810416"} Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.228161 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.310362 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfd27\" (UniqueName: \"kubernetes.io/projected/78de7f45-2a11-4cbe-84bf-46c4307a1459-kube-api-access-qfd27\") pod \"78de7f45-2a11-4cbe-84bf-46c4307a1459\" (UID: \"78de7f45-2a11-4cbe-84bf-46c4307a1459\") " Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.310432 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78de7f45-2a11-4cbe-84bf-46c4307a1459-ssh-key-openstack-edpm-ipam\") pod \"78de7f45-2a11-4cbe-84bf-46c4307a1459\" (UID: \"78de7f45-2a11-4cbe-84bf-46c4307a1459\") " Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.310672 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78de7f45-2a11-4cbe-84bf-46c4307a1459-inventory\") pod \"78de7f45-2a11-4cbe-84bf-46c4307a1459\" (UID: \"78de7f45-2a11-4cbe-84bf-46c4307a1459\") " Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.317163 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78de7f45-2a11-4cbe-84bf-46c4307a1459-kube-api-access-qfd27" (OuterVolumeSpecName: "kube-api-access-qfd27") pod "78de7f45-2a11-4cbe-84bf-46c4307a1459" (UID: "78de7f45-2a11-4cbe-84bf-46c4307a1459"). InnerVolumeSpecName "kube-api-access-qfd27". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.342827 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78de7f45-2a11-4cbe-84bf-46c4307a1459-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "78de7f45-2a11-4cbe-84bf-46c4307a1459" (UID: "78de7f45-2a11-4cbe-84bf-46c4307a1459"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.343068 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78de7f45-2a11-4cbe-84bf-46c4307a1459-inventory" (OuterVolumeSpecName: "inventory") pod "78de7f45-2a11-4cbe-84bf-46c4307a1459" (UID: "78de7f45-2a11-4cbe-84bf-46c4307a1459"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.414650 4632 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78de7f45-2a11-4cbe-84bf-46c4307a1459-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.414680 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfd27\" (UniqueName: \"kubernetes.io/projected/78de7f45-2a11-4cbe-84bf-46c4307a1459-kube-api-access-qfd27\") on node \"crc\" DevicePath \"\"" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.414693 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78de7f45-2a11-4cbe-84bf-46c4307a1459-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.550401 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" event={"ID":"78de7f45-2a11-4cbe-84bf-46c4307a1459","Type":"ContainerDied","Data":"3b3cfd0adf831ab35b815f5b2519338f27aa52d69c55fcd9c9ad26553b698b72"} Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.550448 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-srdvg" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.550461 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b3cfd0adf831ab35b815f5b2519338f27aa52d69c55fcd9c9ad26553b698b72" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.664085 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw"] Mar 13 10:39:50 crc kubenswrapper[4632]: E0313 10:39:50.664595 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78de7f45-2a11-4cbe-84bf-46c4307a1459" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.664624 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="78de7f45-2a11-4cbe-84bf-46c4307a1459" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.664958 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="78de7f45-2a11-4cbe-84bf-46c4307a1459" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.665970 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.669648 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.670087 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qrzsx" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.670292 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.670308 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.681255 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw"] Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.822363 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4931647b-bba4-489f-b5c1-cbe714834388-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw\" (UID: \"4931647b-bba4-489f-b5c1-cbe714834388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.822882 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4931647b-bba4-489f-b5c1-cbe714834388-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw\" (UID: \"4931647b-bba4-489f-b5c1-cbe714834388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.822985 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr759\" (UniqueName: \"kubernetes.io/projected/4931647b-bba4-489f-b5c1-cbe714834388-kube-api-access-hr759\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw\" (UID: \"4931647b-bba4-489f-b5c1-cbe714834388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.925781 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4931647b-bba4-489f-b5c1-cbe714834388-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw\" (UID: \"4931647b-bba4-489f-b5c1-cbe714834388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.925876 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4931647b-bba4-489f-b5c1-cbe714834388-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw\" (UID: \"4931647b-bba4-489f-b5c1-cbe714834388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.925956 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr759\" (UniqueName: 
\"kubernetes.io/projected/4931647b-bba4-489f-b5c1-cbe714834388-kube-api-access-hr759\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw\" (UID: \"4931647b-bba4-489f-b5c1-cbe714834388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.931225 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4931647b-bba4-489f-b5c1-cbe714834388-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw\" (UID: \"4931647b-bba4-489f-b5c1-cbe714834388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.931592 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4931647b-bba4-489f-b5c1-cbe714834388-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw\" (UID: \"4931647b-bba4-489f-b5c1-cbe714834388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.944879 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr759\" (UniqueName: \"kubernetes.io/projected/4931647b-bba4-489f-b5c1-cbe714834388-kube-api-access-hr759\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw\" (UID: \"4931647b-bba4-489f-b5c1-cbe714834388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" Mar 13 10:39:50 crc kubenswrapper[4632]: I0313 10:39:50.990881 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" Mar 13 10:39:51 crc kubenswrapper[4632]: I0313 10:39:51.582520 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw"] Mar 13 10:39:52 crc kubenswrapper[4632]: I0313 10:39:52.573173 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" event={"ID":"4931647b-bba4-489f-b5c1-cbe714834388","Type":"ContainerStarted","Data":"66ae927115cf223ae80a7f1174de3d22e1d8235d0f7cb5cfd88ba65e0b69c69d"} Mar 13 10:39:52 crc kubenswrapper[4632]: I0313 10:39:52.573510 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" event={"ID":"4931647b-bba4-489f-b5c1-cbe714834388","Type":"ContainerStarted","Data":"23b2acab45805714175c6485dd856ee0e45e8f30118161e4579b9b2fc662cd4d"} Mar 13 10:39:52 crc kubenswrapper[4632]: I0313 10:39:52.595583 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" podStartSLOduration=2.197014011 podStartE2EDuration="2.595567647s" podCreationTimestamp="2026-03-13 10:39:50 +0000 UTC" firstStartedPulling="2026-03-13 10:39:51.602812703 +0000 UTC m=+2165.625342836" lastFinishedPulling="2026-03-13 10:39:52.001366339 +0000 UTC m=+2166.023896472" observedRunningTime="2026-03-13 10:39:52.594582353 +0000 UTC m=+2166.617112496" watchObservedRunningTime="2026-03-13 10:39:52.595567647 +0000 UTC m=+2166.618097780" Mar 13 10:40:00 crc kubenswrapper[4632]: I0313 10:40:00.056134 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-ngzsx"] Mar 13 10:40:00 crc kubenswrapper[4632]: I0313 
10:40:00.064680 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-ngzsx"] Mar 13 10:40:00 crc kubenswrapper[4632]: I0313 10:40:00.149698 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556640-zlhsq"] Mar 13 10:40:00 crc kubenswrapper[4632]: I0313 10:40:00.151260 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556640-zlhsq" Mar 13 10:40:00 crc kubenswrapper[4632]: I0313 10:40:00.153625 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:40:00 crc kubenswrapper[4632]: I0313 10:40:00.154198 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:40:00 crc kubenswrapper[4632]: I0313 10:40:00.154262 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:40:00 crc kubenswrapper[4632]: I0313 10:40:00.169596 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556640-zlhsq"] Mar 13 10:40:00 crc kubenswrapper[4632]: I0313 10:40:00.213985 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj8j9\" (UniqueName: \"kubernetes.io/projected/4bcf0de2-27ca-4278-80a3-080ce237e6df-kube-api-access-rj8j9\") pod \"auto-csr-approver-29556640-zlhsq\" (UID: \"4bcf0de2-27ca-4278-80a3-080ce237e6df\") " pod="openshift-infra/auto-csr-approver-29556640-zlhsq" Mar 13 10:40:00 crc kubenswrapper[4632]: I0313 10:40:00.316071 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj8j9\" (UniqueName: \"kubernetes.io/projected/4bcf0de2-27ca-4278-80a3-080ce237e6df-kube-api-access-rj8j9\") pod \"auto-csr-approver-29556640-zlhsq\" (UID: \"4bcf0de2-27ca-4278-80a3-080ce237e6df\") " pod="openshift-infra/auto-csr-approver-29556640-zlhsq" Mar 13 10:40:00 crc kubenswrapper[4632]: I0313 10:40:00.335609 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj8j9\" (UniqueName: \"kubernetes.io/projected/4bcf0de2-27ca-4278-80a3-080ce237e6df-kube-api-access-rj8j9\") pod \"auto-csr-approver-29556640-zlhsq\" (UID: \"4bcf0de2-27ca-4278-80a3-080ce237e6df\") " pod="openshift-infra/auto-csr-approver-29556640-zlhsq" Mar 13 10:40:00 crc kubenswrapper[4632]: I0313 10:40:00.480617 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556640-zlhsq" Mar 13 10:40:01 crc kubenswrapper[4632]: I0313 10:40:01.018730 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556640-zlhsq"] Mar 13 10:40:01 crc kubenswrapper[4632]: I0313 10:40:01.668920 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556640-zlhsq" event={"ID":"4bcf0de2-27ca-4278-80a3-080ce237e6df","Type":"ContainerStarted","Data":"dc6b97651a5d32b9be163f9c3152748b4ed8b19f6748652486195b679826838d"} Mar 13 10:40:02 crc kubenswrapper[4632]: I0313 10:40:02.057739 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="601f3615-5015-486a-bbb5-04c683da6990" path="/var/lib/kubelet/pods/601f3615-5015-486a-bbb5-04c683da6990/volumes" Mar 13 10:40:02 crc kubenswrapper[4632]: I0313 10:40:02.678524 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556640-zlhsq" event={"ID":"4bcf0de2-27ca-4278-80a3-080ce237e6df","Type":"ContainerStarted","Data":"84030d1b6c9dd12b070ed748955e52fe36ed2cac9f9bdddb744ca14dc6fbfa0a"} Mar 13 10:40:02 crc kubenswrapper[4632]: I0313 10:40:02.696642 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556640-zlhsq" podStartSLOduration=1.48696992 podStartE2EDuration="2.696617s" podCreationTimestamp="2026-03-13 10:40:00 +0000 UTC" firstStartedPulling="2026-03-13 10:40:01.032449784 +0000 UTC m=+2175.054979917" lastFinishedPulling="2026-03-13 10:40:02.242096864 +0000 UTC m=+2176.264626997" observedRunningTime="2026-03-13 10:40:02.692860477 +0000 UTC m=+2176.715390610" watchObservedRunningTime="2026-03-13 10:40:02.696617 +0000 UTC m=+2176.719147133" Mar 13 10:40:03 crc kubenswrapper[4632]: I0313 10:40:03.690022 4632 generic.go:334] "Generic (PLEG): container finished" podID="4bcf0de2-27ca-4278-80a3-080ce237e6df" containerID="84030d1b6c9dd12b070ed748955e52fe36ed2cac9f9bdddb744ca14dc6fbfa0a" exitCode=0 Mar 13 10:40:03 crc kubenswrapper[4632]: I0313 10:40:03.690334 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556640-zlhsq" event={"ID":"4bcf0de2-27ca-4278-80a3-080ce237e6df","Type":"ContainerDied","Data":"84030d1b6c9dd12b070ed748955e52fe36ed2cac9f9bdddb744ca14dc6fbfa0a"} Mar 13 10:40:05 crc kubenswrapper[4632]: I0313 10:40:05.053755 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556640-zlhsq" Mar 13 10:40:05 crc kubenswrapper[4632]: I0313 10:40:05.106837 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj8j9\" (UniqueName: \"kubernetes.io/projected/4bcf0de2-27ca-4278-80a3-080ce237e6df-kube-api-access-rj8j9\") pod \"4bcf0de2-27ca-4278-80a3-080ce237e6df\" (UID: \"4bcf0de2-27ca-4278-80a3-080ce237e6df\") " Mar 13 10:40:05 crc kubenswrapper[4632]: I0313 10:40:05.114308 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bcf0de2-27ca-4278-80a3-080ce237e6df-kube-api-access-rj8j9" (OuterVolumeSpecName: "kube-api-access-rj8j9") pod "4bcf0de2-27ca-4278-80a3-080ce237e6df" (UID: "4bcf0de2-27ca-4278-80a3-080ce237e6df"). InnerVolumeSpecName "kube-api-access-rj8j9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:40:05 crc kubenswrapper[4632]: I0313 10:40:05.211522 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj8j9\" (UniqueName: \"kubernetes.io/projected/4bcf0de2-27ca-4278-80a3-080ce237e6df-kube-api-access-rj8j9\") on node \"crc\" DevicePath \"\"" Mar 13 10:40:05 crc kubenswrapper[4632]: I0313 10:40:05.710209 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556640-zlhsq" event={"ID":"4bcf0de2-27ca-4278-80a3-080ce237e6df","Type":"ContainerDied","Data":"dc6b97651a5d32b9be163f9c3152748b4ed8b19f6748652486195b679826838d"} Mar 13 10:40:05 crc kubenswrapper[4632]: I0313 10:40:05.710248 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc6b97651a5d32b9be163f9c3152748b4ed8b19f6748652486195b679826838d" Mar 13 10:40:05 crc kubenswrapper[4632]: I0313 10:40:05.710489 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556640-zlhsq" Mar 13 10:40:05 crc kubenswrapper[4632]: I0313 10:40:05.762839 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556634-6n98g"] Mar 13 10:40:05 crc kubenswrapper[4632]: I0313 10:40:05.769870 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556634-6n98g"] Mar 13 10:40:06 crc kubenswrapper[4632]: I0313 10:40:06.058504 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="155ba738-4ba0-424a-a1d7-067786728969" path="/var/lib/kubelet/pods/155ba738-4ba0-424a-a1d7-067786728969/volumes" Mar 13 10:40:09 crc kubenswrapper[4632]: I0313 10:40:09.042268 4632 scope.go:117] "RemoveContainer" containerID="a8f98d9cfd7da7677c0fe463edd081d6aa2858ecb1027917673862b2700f1545" Mar 13 10:40:09 crc kubenswrapper[4632]: I0313 10:40:09.084227 4632 scope.go:117] "RemoveContainer" containerID="f95b291d052d44a477db7fca5558efb7e90f20270d66ae208043b37111d582be" Mar 13 10:40:09 crc kubenswrapper[4632]: I0313 10:40:09.160262 4632 scope.go:117] "RemoveContainer" containerID="379356ecac878a5f4776d015be267e8c7eec62c977ce924abd53ff44455ce8e4" Mar 13 10:40:09 crc kubenswrapper[4632]: I0313 10:40:09.205542 4632 scope.go:117] "RemoveContainer" containerID="e181311595cfc3a50154df8d12fbc0793d907a3185d962d8a64fc357e0b6ee4f" Mar 13 10:40:45 crc kubenswrapper[4632]: I0313 10:40:45.065022 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" event={"ID":"4931647b-bba4-489f-b5c1-cbe714834388","Type":"ContainerDied","Data":"66ae927115cf223ae80a7f1174de3d22e1d8235d0f7cb5cfd88ba65e0b69c69d"} Mar 13 10:40:45 crc kubenswrapper[4632]: I0313 10:40:45.065033 4632 generic.go:334] "Generic (PLEG): container finished" podID="4931647b-bba4-489f-b5c1-cbe714834388" containerID="66ae927115cf223ae80a7f1174de3d22e1d8235d0f7cb5cfd88ba65e0b69c69d" exitCode=0 Mar 13 10:40:46 crc kubenswrapper[4632]: I0313 10:40:46.631240 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" Mar 13 10:40:46 crc kubenswrapper[4632]: I0313 10:40:46.730680 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr759\" (UniqueName: \"kubernetes.io/projected/4931647b-bba4-489f-b5c1-cbe714834388-kube-api-access-hr759\") pod \"4931647b-bba4-489f-b5c1-cbe714834388\" (UID: \"4931647b-bba4-489f-b5c1-cbe714834388\") " Mar 13 10:40:46 crc kubenswrapper[4632]: I0313 10:40:46.731310 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4931647b-bba4-489f-b5c1-cbe714834388-ssh-key-openstack-edpm-ipam\") pod \"4931647b-bba4-489f-b5c1-cbe714834388\" (UID: \"4931647b-bba4-489f-b5c1-cbe714834388\") " Mar 13 10:40:46 crc kubenswrapper[4632]: I0313 10:40:46.731461 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4931647b-bba4-489f-b5c1-cbe714834388-inventory\") pod \"4931647b-bba4-489f-b5c1-cbe714834388\" (UID: \"4931647b-bba4-489f-b5c1-cbe714834388\") " Mar 13 10:40:46 crc kubenswrapper[4632]: I0313 10:40:46.753696 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4931647b-bba4-489f-b5c1-cbe714834388-kube-api-access-hr759" (OuterVolumeSpecName: "kube-api-access-hr759") pod "4931647b-bba4-489f-b5c1-cbe714834388" (UID: "4931647b-bba4-489f-b5c1-cbe714834388"). InnerVolumeSpecName "kube-api-access-hr759". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:40:46 crc kubenswrapper[4632]: I0313 10:40:46.770485 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4931647b-bba4-489f-b5c1-cbe714834388-inventory" (OuterVolumeSpecName: "inventory") pod "4931647b-bba4-489f-b5c1-cbe714834388" (UID: "4931647b-bba4-489f-b5c1-cbe714834388"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:40:46 crc kubenswrapper[4632]: I0313 10:40:46.778521 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4931647b-bba4-489f-b5c1-cbe714834388-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4931647b-bba4-489f-b5c1-cbe714834388" (UID: "4931647b-bba4-489f-b5c1-cbe714834388"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:40:46 crc kubenswrapper[4632]: I0313 10:40:46.836011 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr759\" (UniqueName: \"kubernetes.io/projected/4931647b-bba4-489f-b5c1-cbe714834388-kube-api-access-hr759\") on node \"crc\" DevicePath \"\"" Mar 13 10:40:46 crc kubenswrapper[4632]: I0313 10:40:46.836061 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4931647b-bba4-489f-b5c1-cbe714834388-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:40:46 crc kubenswrapper[4632]: I0313 10:40:46.836078 4632 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4931647b-bba4-489f-b5c1-cbe714834388-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.090230 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" event={"ID":"4931647b-bba4-489f-b5c1-cbe714834388","Type":"ContainerDied","Data":"23b2acab45805714175c6485dd856ee0e45e8f30118161e4579b9b2fc662cd4d"} Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.090283 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23b2acab45805714175c6485dd856ee0e45e8f30118161e4579b9b2fc662cd4d" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.090370 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.166666 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-55n7g"] Mar 13 10:40:47 crc kubenswrapper[4632]: E0313 10:40:47.167068 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4931647b-bba4-489f-b5c1-cbe714834388" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.167088 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="4931647b-bba4-489f-b5c1-cbe714834388" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 13 10:40:47 crc kubenswrapper[4632]: E0313 10:40:47.167110 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bcf0de2-27ca-4278-80a3-080ce237e6df" containerName="oc" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.167116 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bcf0de2-27ca-4278-80a3-080ce237e6df" containerName="oc" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.167319 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="4931647b-bba4-489f-b5c1-cbe714834388" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.167353 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bcf0de2-27ca-4278-80a3-080ce237e6df" containerName="oc" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.167973 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.170090 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.170561 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qrzsx" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.170802 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.171058 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.189925 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-55n7g"] Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.345134 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-55n7g\" (UID: \"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1\") " pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.345291 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nzt5\" (UniqueName: \"kubernetes.io/projected/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-kube-api-access-7nzt5\") pod \"ssh-known-hosts-edpm-deployment-55n7g\" (UID: \"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1\") " pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.345385 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-55n7g\" (UID: \"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1\") " pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.447738 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nzt5\" (UniqueName: \"kubernetes.io/projected/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-kube-api-access-7nzt5\") pod \"ssh-known-hosts-edpm-deployment-55n7g\" (UID: \"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1\") " pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.447878 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-55n7g\" (UID: \"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1\") " pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.447976 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-55n7g\" (UID: \"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1\") " pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" Mar 13 10:40:47 crc 
kubenswrapper[4632]: I0313 10:40:47.452617 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-55n7g\" (UID: \"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1\") " pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.471645 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-55n7g\" (UID: \"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1\") " pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.482140 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nzt5\" (UniqueName: \"kubernetes.io/projected/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-kube-api-access-7nzt5\") pod \"ssh-known-hosts-edpm-deployment-55n7g\" (UID: \"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1\") " pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" Mar 13 10:40:47 crc kubenswrapper[4632]: I0313 10:40:47.485028 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" Mar 13 10:40:48 crc kubenswrapper[4632]: I0313 10:40:48.357562 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-55n7g"] Mar 13 10:40:48 crc kubenswrapper[4632]: I0313 10:40:48.376148 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 10:40:48 crc kubenswrapper[4632]: I0313 10:40:48.833195 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:40:49 crc kubenswrapper[4632]: I0313 10:40:49.112085 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" event={"ID":"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1","Type":"ContainerStarted","Data":"67d1c660a605cb7f487da3c88f2f7b771a995c235aa1d21c9c9772f3c660b828"} Mar 13 10:40:49 crc kubenswrapper[4632]: I0313 10:40:49.113077 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" event={"ID":"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1","Type":"ContainerStarted","Data":"8cb8154bbb80936b80e55c8289b4e5686e6030c27b5b59809debc612b023074a"} Mar 13 10:40:49 crc kubenswrapper[4632]: I0313 10:40:49.141102 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" podStartSLOduration=1.685400061 podStartE2EDuration="2.141072403s" podCreationTimestamp="2026-03-13 10:40:47 +0000 UTC" firstStartedPulling="2026-03-13 10:40:48.375116381 +0000 UTC m=+2222.397646514" lastFinishedPulling="2026-03-13 10:40:48.830788723 +0000 UTC m=+2222.853318856" observedRunningTime="2026-03-13 10:40:49.131739482 +0000 UTC m=+2223.154269625" watchObservedRunningTime="2026-03-13 10:40:49.141072403 +0000 UTC m=+2223.163602536" Mar 13 10:40:53 crc kubenswrapper[4632]: I0313 10:40:53.862993 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c6vkp"] Mar 13 10:40:53 crc kubenswrapper[4632]: I0313 10:40:53.865853 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:40:53 crc kubenswrapper[4632]: I0313 10:40:53.894311 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c6vkp"] Mar 13 10:40:54 crc kubenswrapper[4632]: I0313 10:40:54.031862 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94fd5d57-5fb3-4b34-a545-6bdc3f219354-catalog-content\") pod \"redhat-operators-c6vkp\" (UID: \"94fd5d57-5fb3-4b34-a545-6bdc3f219354\") " pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:40:54 crc kubenswrapper[4632]: I0313 10:40:54.032079 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94fd5d57-5fb3-4b34-a545-6bdc3f219354-utilities\") pod \"redhat-operators-c6vkp\" (UID: \"94fd5d57-5fb3-4b34-a545-6bdc3f219354\") " pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:40:54 crc kubenswrapper[4632]: I0313 10:40:54.032272 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srngn\" (UniqueName: \"kubernetes.io/projected/94fd5d57-5fb3-4b34-a545-6bdc3f219354-kube-api-access-srngn\") pod \"redhat-operators-c6vkp\" (UID: \"94fd5d57-5fb3-4b34-a545-6bdc3f219354\") " pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:40:54 crc kubenswrapper[4632]: I0313 10:40:54.134396 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94fd5d57-5fb3-4b34-a545-6bdc3f219354-catalog-content\") pod \"redhat-operators-c6vkp\" (UID: \"94fd5d57-5fb3-4b34-a545-6bdc3f219354\") " pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:40:54 crc kubenswrapper[4632]: I0313 10:40:54.134504 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94fd5d57-5fb3-4b34-a545-6bdc3f219354-utilities\") pod \"redhat-operators-c6vkp\" (UID: \"94fd5d57-5fb3-4b34-a545-6bdc3f219354\") " pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:40:54 crc kubenswrapper[4632]: I0313 10:40:54.134581 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srngn\" (UniqueName: \"kubernetes.io/projected/94fd5d57-5fb3-4b34-a545-6bdc3f219354-kube-api-access-srngn\") pod \"redhat-operators-c6vkp\" (UID: \"94fd5d57-5fb3-4b34-a545-6bdc3f219354\") " pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:40:54 crc kubenswrapper[4632]: I0313 10:40:54.135276 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94fd5d57-5fb3-4b34-a545-6bdc3f219354-utilities\") pod \"redhat-operators-c6vkp\" (UID: \"94fd5d57-5fb3-4b34-a545-6bdc3f219354\") " pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:40:54 crc kubenswrapper[4632]: I0313 10:40:54.135360 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94fd5d57-5fb3-4b34-a545-6bdc3f219354-catalog-content\") pod \"redhat-operators-c6vkp\" (UID: \"94fd5d57-5fb3-4b34-a545-6bdc3f219354\") " pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:40:54 crc kubenswrapper[4632]: I0313 10:40:54.157840 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-srngn\" (UniqueName: \"kubernetes.io/projected/94fd5d57-5fb3-4b34-a545-6bdc3f219354-kube-api-access-srngn\") pod \"redhat-operators-c6vkp\" (UID: \"94fd5d57-5fb3-4b34-a545-6bdc3f219354\") " pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:40:54 crc kubenswrapper[4632]: I0313 10:40:54.197777 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:40:54 crc kubenswrapper[4632]: I0313 10:40:54.732670 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c6vkp"] Mar 13 10:40:55 crc kubenswrapper[4632]: I0313 10:40:55.179825 4632 generic.go:334] "Generic (PLEG): container finished" podID="94fd5d57-5fb3-4b34-a545-6bdc3f219354" containerID="d6c5924afdfbc6243bd936794f31e2092a18f09ea985a6206e818699a4eab9bc" exitCode=0 Mar 13 10:40:55 crc kubenswrapper[4632]: I0313 10:40:55.180233 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6vkp" event={"ID":"94fd5d57-5fb3-4b34-a545-6bdc3f219354","Type":"ContainerDied","Data":"d6c5924afdfbc6243bd936794f31e2092a18f09ea985a6206e818699a4eab9bc"} Mar 13 10:40:55 crc kubenswrapper[4632]: I0313 10:40:55.180262 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6vkp" event={"ID":"94fd5d57-5fb3-4b34-a545-6bdc3f219354","Type":"ContainerStarted","Data":"cfe1bf74852782b72ab02590cc17b2a400ee246e92a62ec6365c71796dd98bad"} Mar 13 10:40:57 crc kubenswrapper[4632]: I0313 10:40:57.219356 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6vkp" event={"ID":"94fd5d57-5fb3-4b34-a545-6bdc3f219354","Type":"ContainerStarted","Data":"e5b275bdebfc8c0e84bc1511d86355567607af8b44b7d9fd25e952a957898f8e"} Mar 13 10:40:58 crc kubenswrapper[4632]: I0313 10:40:58.229154 4632 generic.go:334] "Generic (PLEG): container finished" podID="9ff4122d-b9f1-4dd0-80dc-deb9d84760e1" containerID="67d1c660a605cb7f487da3c88f2f7b771a995c235aa1d21c9c9772f3c660b828" exitCode=0 Mar 13 10:40:58 crc kubenswrapper[4632]: I0313 10:40:58.229237 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" event={"ID":"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1","Type":"ContainerDied","Data":"67d1c660a605cb7f487da3c88f2f7b771a995c235aa1d21c9c9772f3c660b828"} Mar 13 10:40:59 crc kubenswrapper[4632]: I0313 10:40:59.834548 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" Mar 13 10:40:59 crc kubenswrapper[4632]: I0313 10:40:59.975466 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nzt5\" (UniqueName: \"kubernetes.io/projected/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-kube-api-access-7nzt5\") pod \"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1\" (UID: \"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1\") " Mar 13 10:40:59 crc kubenswrapper[4632]: I0313 10:40:59.975851 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-ssh-key-openstack-edpm-ipam\") pod \"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1\" (UID: \"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1\") " Mar 13 10:40:59 crc kubenswrapper[4632]: I0313 10:40:59.975985 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-inventory-0\") pod \"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1\" (UID: \"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1\") " Mar 13 10:40:59 crc kubenswrapper[4632]: I0313 10:40:59.994660 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-kube-api-access-7nzt5" (OuterVolumeSpecName: "kube-api-access-7nzt5") pod "9ff4122d-b9f1-4dd0-80dc-deb9d84760e1" (UID: "9ff4122d-b9f1-4dd0-80dc-deb9d84760e1"). InnerVolumeSpecName "kube-api-access-7nzt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.015107 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9ff4122d-b9f1-4dd0-80dc-deb9d84760e1" (UID: "9ff4122d-b9f1-4dd0-80dc-deb9d84760e1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.020278 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "9ff4122d-b9f1-4dd0-80dc-deb9d84760e1" (UID: "9ff4122d-b9f1-4dd0-80dc-deb9d84760e1"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.078591 4632 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-inventory-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.078629 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nzt5\" (UniqueName: \"kubernetes.io/projected/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-kube-api-access-7nzt5\") on node \"crc\" DevicePath \"\"" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.078661 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ff4122d-b9f1-4dd0-80dc-deb9d84760e1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.248826 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" event={"ID":"9ff4122d-b9f1-4dd0-80dc-deb9d84760e1","Type":"ContainerDied","Data":"8cb8154bbb80936b80e55c8289b4e5686e6030c27b5b59809debc612b023074a"} Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.248868 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-55n7g" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.248869 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cb8154bbb80936b80e55c8289b4e5686e6030c27b5b59809debc612b023074a" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.336719 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk"] Mar 13 10:41:00 crc kubenswrapper[4632]: E0313 10:41:00.337751 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff4122d-b9f1-4dd0-80dc-deb9d84760e1" containerName="ssh-known-hosts-edpm-deployment" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.337781 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff4122d-b9f1-4dd0-80dc-deb9d84760e1" containerName="ssh-known-hosts-edpm-deployment" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.338343 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ff4122d-b9f1-4dd0-80dc-deb9d84760e1" containerName="ssh-known-hosts-edpm-deployment" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.343626 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.348451 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.360660 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.360844 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qrzsx" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.361184 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.407929 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk"] Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.488316 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f69a3b21-eb1c-4300-91dc-55766900da95-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9rbk\" (UID: \"f69a3b21-eb1c-4300-91dc-55766900da95\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.488411 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f69a3b21-eb1c-4300-91dc-55766900da95-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9rbk\" (UID: \"f69a3b21-eb1c-4300-91dc-55766900da95\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.488483 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d8hr\" (UniqueName: \"kubernetes.io/projected/f69a3b21-eb1c-4300-91dc-55766900da95-kube-api-access-6d8hr\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9rbk\" (UID: \"f69a3b21-eb1c-4300-91dc-55766900da95\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.590328 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f69a3b21-eb1c-4300-91dc-55766900da95-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9rbk\" (UID: \"f69a3b21-eb1c-4300-91dc-55766900da95\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.590421 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f69a3b21-eb1c-4300-91dc-55766900da95-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9rbk\" (UID: \"f69a3b21-eb1c-4300-91dc-55766900da95\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.590491 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d8hr\" (UniqueName: \"kubernetes.io/projected/f69a3b21-eb1c-4300-91dc-55766900da95-kube-api-access-6d8hr\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-f9rbk\" (UID: \"f69a3b21-eb1c-4300-91dc-55766900da95\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.596219 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f69a3b21-eb1c-4300-91dc-55766900da95-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9rbk\" (UID: \"f69a3b21-eb1c-4300-91dc-55766900da95\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.597204 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f69a3b21-eb1c-4300-91dc-55766900da95-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9rbk\" (UID: \"f69a3b21-eb1c-4300-91dc-55766900da95\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.617258 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d8hr\" (UniqueName: \"kubernetes.io/projected/f69a3b21-eb1c-4300-91dc-55766900da95-kube-api-access-6d8hr\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9rbk\" (UID: \"f69a3b21-eb1c-4300-91dc-55766900da95\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" Mar 13 10:41:00 crc kubenswrapper[4632]: I0313 10:41:00.688915 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" Mar 13 10:41:01 crc kubenswrapper[4632]: I0313 10:41:01.316696 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk"] Mar 13 10:41:01 crc kubenswrapper[4632]: W0313 10:41:01.369160 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf69a3b21_eb1c_4300_91dc_55766900da95.slice/crio-c652193575851bc88eb9652edd9cc1fa8aa0f642f4ed0ff9f52d97763733bab8 WatchSource:0}: Error finding container c652193575851bc88eb9652edd9cc1fa8aa0f642f4ed0ff9f52d97763733bab8: Status 404 returned error can't find the container with id c652193575851bc88eb9652edd9cc1fa8aa0f642f4ed0ff9f52d97763733bab8 Mar 13 10:41:02 crc kubenswrapper[4632]: I0313 10:41:02.270722 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" event={"ID":"f69a3b21-eb1c-4300-91dc-55766900da95","Type":"ContainerStarted","Data":"d84c74e44c6bd120cba0edb788c43ce395addd117643e58c370d3d9a16355c26"} Mar 13 10:41:02 crc kubenswrapper[4632]: I0313 10:41:02.271055 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" event={"ID":"f69a3b21-eb1c-4300-91dc-55766900da95","Type":"ContainerStarted","Data":"c652193575851bc88eb9652edd9cc1fa8aa0f642f4ed0ff9f52d97763733bab8"} Mar 13 10:41:02 crc kubenswrapper[4632]: I0313 10:41:02.273205 4632 generic.go:334] "Generic (PLEG): container finished" podID="94fd5d57-5fb3-4b34-a545-6bdc3f219354" containerID="e5b275bdebfc8c0e84bc1511d86355567607af8b44b7d9fd25e952a957898f8e" exitCode=0 Mar 13 10:41:02 crc kubenswrapper[4632]: I0313 10:41:02.273236 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6vkp" 
event={"ID":"94fd5d57-5fb3-4b34-a545-6bdc3f219354","Type":"ContainerDied","Data":"e5b275bdebfc8c0e84bc1511d86355567607af8b44b7d9fd25e952a957898f8e"} Mar 13 10:41:02 crc kubenswrapper[4632]: I0313 10:41:02.323957 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" podStartSLOduration=1.84543192 podStartE2EDuration="2.323925517s" podCreationTimestamp="2026-03-13 10:41:00 +0000 UTC" firstStartedPulling="2026-03-13 10:41:01.372558564 +0000 UTC m=+2235.395088697" lastFinishedPulling="2026-03-13 10:41:01.851052161 +0000 UTC m=+2235.873582294" observedRunningTime="2026-03-13 10:41:02.296086602 +0000 UTC m=+2236.318616735" watchObservedRunningTime="2026-03-13 10:41:02.323925517 +0000 UTC m=+2236.346455650" Mar 13 10:41:03 crc kubenswrapper[4632]: I0313 10:41:03.283645 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6vkp" event={"ID":"94fd5d57-5fb3-4b34-a545-6bdc3f219354","Type":"ContainerStarted","Data":"c7a9c2f67c2bc9ec6015b2a50c3cfdd53b590fc7fda70e8b949fe66cb4e0aff2"} Mar 13 10:41:03 crc kubenswrapper[4632]: I0313 10:41:03.306726 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c6vkp" podStartSLOduration=2.655884976 podStartE2EDuration="10.306708495s" podCreationTimestamp="2026-03-13 10:40:53 +0000 UTC" firstStartedPulling="2026-03-13 10:40:55.182312252 +0000 UTC m=+2229.204842385" lastFinishedPulling="2026-03-13 10:41:02.833135771 +0000 UTC m=+2236.855665904" observedRunningTime="2026-03-13 10:41:03.302905842 +0000 UTC m=+2237.325435985" watchObservedRunningTime="2026-03-13 10:41:03.306708495 +0000 UTC m=+2237.329238628" Mar 13 10:41:04 crc kubenswrapper[4632]: I0313 10:41:04.198453 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:41:04 crc kubenswrapper[4632]: I0313 10:41:04.199960 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:41:05 crc kubenswrapper[4632]: I0313 10:41:05.252110 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c6vkp" podUID="94fd5d57-5fb3-4b34-a545-6bdc3f219354" containerName="registry-server" probeResult="failure" output=< Mar 13 10:41:05 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:41:05 crc kubenswrapper[4632]: > Mar 13 10:41:12 crc kubenswrapper[4632]: I0313 10:41:12.374133 4632 generic.go:334] "Generic (PLEG): container finished" podID="f69a3b21-eb1c-4300-91dc-55766900da95" containerID="d84c74e44c6bd120cba0edb788c43ce395addd117643e58c370d3d9a16355c26" exitCode=0 Mar 13 10:41:12 crc kubenswrapper[4632]: I0313 10:41:12.374221 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" event={"ID":"f69a3b21-eb1c-4300-91dc-55766900da95","Type":"ContainerDied","Data":"d84c74e44c6bd120cba0edb788c43ce395addd117643e58c370d3d9a16355c26"} Mar 13 10:41:13 crc kubenswrapper[4632]: I0313 10:41:13.861363 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" Mar 13 10:41:13 crc kubenswrapper[4632]: I0313 10:41:13.960970 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f69a3b21-eb1c-4300-91dc-55766900da95-inventory\") pod \"f69a3b21-eb1c-4300-91dc-55766900da95\" (UID: \"f69a3b21-eb1c-4300-91dc-55766900da95\") " Mar 13 10:41:13 crc kubenswrapper[4632]: I0313 10:41:13.961123 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f69a3b21-eb1c-4300-91dc-55766900da95-ssh-key-openstack-edpm-ipam\") pod \"f69a3b21-eb1c-4300-91dc-55766900da95\" (UID: \"f69a3b21-eb1c-4300-91dc-55766900da95\") " Mar 13 10:41:13 crc kubenswrapper[4632]: I0313 10:41:13.961993 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d8hr\" (UniqueName: \"kubernetes.io/projected/f69a3b21-eb1c-4300-91dc-55766900da95-kube-api-access-6d8hr\") pod \"f69a3b21-eb1c-4300-91dc-55766900da95\" (UID: \"f69a3b21-eb1c-4300-91dc-55766900da95\") " Mar 13 10:41:13 crc kubenswrapper[4632]: I0313 10:41:13.967256 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69a3b21-eb1c-4300-91dc-55766900da95-kube-api-access-6d8hr" (OuterVolumeSpecName: "kube-api-access-6d8hr") pod "f69a3b21-eb1c-4300-91dc-55766900da95" (UID: "f69a3b21-eb1c-4300-91dc-55766900da95"). InnerVolumeSpecName "kube-api-access-6d8hr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:41:13 crc kubenswrapper[4632]: E0313 10:41:13.990015 4632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f69a3b21-eb1c-4300-91dc-55766900da95-inventory podName:f69a3b21-eb1c-4300-91dc-55766900da95 nodeName:}" failed. No retries permitted until 2026-03-13 10:41:14.489435037 +0000 UTC m=+2248.511965170 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/f69a3b21-eb1c-4300-91dc-55766900da95-inventory") pod "f69a3b21-eb1c-4300-91dc-55766900da95" (UID: "f69a3b21-eb1c-4300-91dc-55766900da95") : error deleting /var/lib/kubelet/pods/f69a3b21-eb1c-4300-91dc-55766900da95/volume-subpaths: remove /var/lib/kubelet/pods/f69a3b21-eb1c-4300-91dc-55766900da95/volume-subpaths: no such file or directory Mar 13 10:41:13 crc kubenswrapper[4632]: I0313 10:41:13.992048 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69a3b21-eb1c-4300-91dc-55766900da95-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f69a3b21-eb1c-4300-91dc-55766900da95" (UID: "f69a3b21-eb1c-4300-91dc-55766900da95"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.067155 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f69a3b21-eb1c-4300-91dc-55766900da95-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.067528 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d8hr\" (UniqueName: \"kubernetes.io/projected/f69a3b21-eb1c-4300-91dc-55766900da95-kube-api-access-6d8hr\") on node \"crc\" DevicePath \"\"" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.393007 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" event={"ID":"f69a3b21-eb1c-4300-91dc-55766900da95","Type":"ContainerDied","Data":"c652193575851bc88eb9652edd9cc1fa8aa0f642f4ed0ff9f52d97763733bab8"} Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.393055 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c652193575851bc88eb9652edd9cc1fa8aa0f642f4ed0ff9f52d97763733bab8" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.393085 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9rbk" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.471617 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz"] Mar 13 10:41:14 crc kubenswrapper[4632]: E0313 10:41:14.472025 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69a3b21-eb1c-4300-91dc-55766900da95" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.472042 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69a3b21-eb1c-4300-91dc-55766900da95" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.472255 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f69a3b21-eb1c-4300-91dc-55766900da95" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.472974 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.494303 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz"] Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.576424 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f69a3b21-eb1c-4300-91dc-55766900da95-inventory\") pod \"f69a3b21-eb1c-4300-91dc-55766900da95\" (UID: \"f69a3b21-eb1c-4300-91dc-55766900da95\") " Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.576962 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cztz\" (UniqueName: \"kubernetes.io/projected/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-kube-api-access-6cztz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz\" (UID: \"4eaeef27-fa4c-41d9-a197-a780a6a6cebd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.577020 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz\" (UID: \"4eaeef27-fa4c-41d9-a197-a780a6a6cebd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.577090 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz\" (UID: \"4eaeef27-fa4c-41d9-a197-a780a6a6cebd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.592632 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69a3b21-eb1c-4300-91dc-55766900da95-inventory" (OuterVolumeSpecName: "inventory") pod "f69a3b21-eb1c-4300-91dc-55766900da95" (UID: "f69a3b21-eb1c-4300-91dc-55766900da95"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.678925 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz\" (UID: \"4eaeef27-fa4c-41d9-a197-a780a6a6cebd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.679052 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz\" (UID: \"4eaeef27-fa4c-41d9-a197-a780a6a6cebd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.679175 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cztz\" (UniqueName: \"kubernetes.io/projected/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-kube-api-access-6cztz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz\" (UID: \"4eaeef27-fa4c-41d9-a197-a780a6a6cebd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.679225 4632 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f69a3b21-eb1c-4300-91dc-55766900da95-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.682750 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz\" (UID: \"4eaeef27-fa4c-41d9-a197-a780a6a6cebd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.683038 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz\" (UID: \"4eaeef27-fa4c-41d9-a197-a780a6a6cebd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.696109 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cztz\" (UniqueName: \"kubernetes.io/projected/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-kube-api-access-6cztz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz\" (UID: \"4eaeef27-fa4c-41d9-a197-a780a6a6cebd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" Mar 13 10:41:14 crc kubenswrapper[4632]: I0313 10:41:14.794879 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" Mar 13 10:41:15 crc kubenswrapper[4632]: I0313 10:41:15.245605 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c6vkp" podUID="94fd5d57-5fb3-4b34-a545-6bdc3f219354" containerName="registry-server" probeResult="failure" output=< Mar 13 10:41:15 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:41:15 crc kubenswrapper[4632]: > Mar 13 10:41:15 crc kubenswrapper[4632]: I0313 10:41:15.383736 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz"] Mar 13 10:41:15 crc kubenswrapper[4632]: I0313 10:41:15.406821 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" event={"ID":"4eaeef27-fa4c-41d9-a197-a780a6a6cebd","Type":"ContainerStarted","Data":"ced96d05e4eec27d78e68d54f820e0f8c837211b9f85da7592e625e2f0f9fcaa"} Mar 13 10:41:16 crc kubenswrapper[4632]: I0313 10:41:16.519062 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" event={"ID":"4eaeef27-fa4c-41d9-a197-a780a6a6cebd","Type":"ContainerStarted","Data":"cd71d4c7541e14dbca5f760d64290b31a098a75a4bb28c7734560d76d0b91bde"} Mar 13 10:41:16 crc kubenswrapper[4632]: I0313 10:41:16.537764 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" podStartSLOduration=2.060925844 podStartE2EDuration="2.537742448s" podCreationTimestamp="2026-03-13 10:41:14 +0000 UTC" firstStartedPulling="2026-03-13 10:41:15.383346301 +0000 UTC m=+2249.405876444" lastFinishedPulling="2026-03-13 10:41:15.860162915 +0000 UTC m=+2249.882693048" observedRunningTime="2026-03-13 10:41:16.535239856 +0000 UTC m=+2250.557769989" watchObservedRunningTime="2026-03-13 10:41:16.537742448 +0000 UTC m=+2250.560272601" Mar 13 10:41:25 crc kubenswrapper[4632]: I0313 10:41:25.247957 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c6vkp" podUID="94fd5d57-5fb3-4b34-a545-6bdc3f219354" containerName="registry-server" probeResult="failure" output=< Mar 13 10:41:25 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:41:25 crc kubenswrapper[4632]: > Mar 13 10:41:27 crc kubenswrapper[4632]: I0313 10:41:27.621164 4632 generic.go:334] "Generic (PLEG): container finished" podID="4eaeef27-fa4c-41d9-a197-a780a6a6cebd" containerID="cd71d4c7541e14dbca5f760d64290b31a098a75a4bb28c7734560d76d0b91bde" exitCode=0 Mar 13 10:41:27 crc kubenswrapper[4632]: I0313 10:41:27.621233 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" event={"ID":"4eaeef27-fa4c-41d9-a197-a780a6a6cebd","Type":"ContainerDied","Data":"cd71d4c7541e14dbca5f760d64290b31a098a75a4bb28c7734560d76d0b91bde"} Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.076676 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.111421 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cztz\" (UniqueName: \"kubernetes.io/projected/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-kube-api-access-6cztz\") pod \"4eaeef27-fa4c-41d9-a197-a780a6a6cebd\" (UID: \"4eaeef27-fa4c-41d9-a197-a780a6a6cebd\") " Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.111589 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-ssh-key-openstack-edpm-ipam\") pod \"4eaeef27-fa4c-41d9-a197-a780a6a6cebd\" (UID: \"4eaeef27-fa4c-41d9-a197-a780a6a6cebd\") " Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.111618 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-inventory\") pod \"4eaeef27-fa4c-41d9-a197-a780a6a6cebd\" (UID: \"4eaeef27-fa4c-41d9-a197-a780a6a6cebd\") " Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.118389 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-kube-api-access-6cztz" (OuterVolumeSpecName: "kube-api-access-6cztz") pod "4eaeef27-fa4c-41d9-a197-a780a6a6cebd" (UID: "4eaeef27-fa4c-41d9-a197-a780a6a6cebd"). InnerVolumeSpecName "kube-api-access-6cztz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.146229 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-inventory" (OuterVolumeSpecName: "inventory") pod "4eaeef27-fa4c-41d9-a197-a780a6a6cebd" (UID: "4eaeef27-fa4c-41d9-a197-a780a6a6cebd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.148873 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4eaeef27-fa4c-41d9-a197-a780a6a6cebd" (UID: "4eaeef27-fa4c-41d9-a197-a780a6a6cebd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.213870 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cztz\" (UniqueName: \"kubernetes.io/projected/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-kube-api-access-6cztz\") on node \"crc\" DevicePath \"\"" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.213912 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.213954 4632 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eaeef27-fa4c-41d9-a197-a780a6a6cebd-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.639549 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" event={"ID":"4eaeef27-fa4c-41d9-a197-a780a6a6cebd","Type":"ContainerDied","Data":"ced96d05e4eec27d78e68d54f820e0f8c837211b9f85da7592e625e2f0f9fcaa"} Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.639590 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ced96d05e4eec27d78e68d54f820e0f8c837211b9f85da7592e625e2f0f9fcaa" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.639641 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.741735 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn"] Mar 13 10:41:29 crc kubenswrapper[4632]: E0313 10:41:29.742259 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eaeef27-fa4c-41d9-a197-a780a6a6cebd" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.742287 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eaeef27-fa4c-41d9-a197-a780a6a6cebd" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.742569 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eaeef27-fa4c-41d9-a197-a780a6a6cebd" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.743732 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.746012 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.746105 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.746756 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.748347 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.748850 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.750959 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qrzsx" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.751148 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.751460 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.781554 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn"] Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.825445 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.825503 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzlkb\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-kube-api-access-rzlkb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.825538 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.825561 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.825587 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.825609 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.825645 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.825683 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.825747 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.825766 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.825799 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: 
\"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.825862 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.825907 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.825949 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.927549 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.927616 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzlkb\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-kube-api-access-rzlkb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.927655 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.927686 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.927753 4632 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.927786 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.927834 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.927884 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.927996 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.928024 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.928067 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.928144 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.928197 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.928236 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.935749 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.938235 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.939281 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.940043 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.940749 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc 
kubenswrapper[4632]: I0313 10:41:29.942701 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.943150 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.945364 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.948704 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.949870 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.950860 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzlkb\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-kube-api-access-rzlkb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.951198 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.952056 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: 
\"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:29 crc kubenswrapper[4632]: I0313 10:41:29.960322 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:30 crc kubenswrapper[4632]: I0313 10:41:30.066635 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:41:30 crc kubenswrapper[4632]: I0313 10:41:30.602432 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn"] Mar 13 10:41:30 crc kubenswrapper[4632]: I0313 10:41:30.650957 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" event={"ID":"41861d23-3e34-4f91-bafc-1b7eeee125db","Type":"ContainerStarted","Data":"aa943b891e21fa2ddf52c1882c6b609ca996747cf346a718083b0fc64cf76d58"} Mar 13 10:41:31 crc kubenswrapper[4632]: I0313 10:41:31.667811 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" event={"ID":"41861d23-3e34-4f91-bafc-1b7eeee125db","Type":"ContainerStarted","Data":"8bcf160168d0bd44e7bdc4cd090dfd9a1207f38af53c4c136868bdc7acd63fcf"} Mar 13 10:41:31 crc kubenswrapper[4632]: I0313 10:41:31.690027 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" podStartSLOduration=2.279689938 podStartE2EDuration="2.690008032s" podCreationTimestamp="2026-03-13 10:41:29 +0000 UTC" firstStartedPulling="2026-03-13 10:41:30.611613949 +0000 UTC m=+2264.634144072" lastFinishedPulling="2026-03-13 10:41:31.021932043 +0000 UTC m=+2265.044462166" observedRunningTime="2026-03-13 10:41:31.687954142 +0000 UTC m=+2265.710484275" watchObservedRunningTime="2026-03-13 10:41:31.690008032 +0000 UTC m=+2265.712538165" Mar 13 10:41:35 crc kubenswrapper[4632]: I0313 10:41:35.243029 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c6vkp" podUID="94fd5d57-5fb3-4b34-a545-6bdc3f219354" containerName="registry-server" probeResult="failure" output=< Mar 13 10:41:35 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:41:35 crc kubenswrapper[4632]: > Mar 13 10:41:40 crc kubenswrapper[4632]: I0313 10:41:40.460856 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:41:40 crc kubenswrapper[4632]: I0313 10:41:40.461374 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:41:44 crc 
kubenswrapper[4632]: I0313 10:41:44.248638 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:41:44 crc kubenswrapper[4632]: I0313 10:41:44.302160 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:41:44 crc kubenswrapper[4632]: I0313 10:41:44.491991 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c6vkp"] Mar 13 10:41:45 crc kubenswrapper[4632]: I0313 10:41:45.786511 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c6vkp" podUID="94fd5d57-5fb3-4b34-a545-6bdc3f219354" containerName="registry-server" containerID="cri-o://c7a9c2f67c2bc9ec6015b2a50c3cfdd53b590fc7fda70e8b949fe66cb4e0aff2" gracePeriod=2 Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.384483 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.480684 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94fd5d57-5fb3-4b34-a545-6bdc3f219354-utilities\") pod \"94fd5d57-5fb3-4b34-a545-6bdc3f219354\" (UID: \"94fd5d57-5fb3-4b34-a545-6bdc3f219354\") " Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.480721 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94fd5d57-5fb3-4b34-a545-6bdc3f219354-catalog-content\") pod \"94fd5d57-5fb3-4b34-a545-6bdc3f219354\" (UID: \"94fd5d57-5fb3-4b34-a545-6bdc3f219354\") " Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.480934 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srngn\" (UniqueName: \"kubernetes.io/projected/94fd5d57-5fb3-4b34-a545-6bdc3f219354-kube-api-access-srngn\") pod \"94fd5d57-5fb3-4b34-a545-6bdc3f219354\" (UID: \"94fd5d57-5fb3-4b34-a545-6bdc3f219354\") " Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.490460 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94fd5d57-5fb3-4b34-a545-6bdc3f219354-kube-api-access-srngn" (OuterVolumeSpecName: "kube-api-access-srngn") pod "94fd5d57-5fb3-4b34-a545-6bdc3f219354" (UID: "94fd5d57-5fb3-4b34-a545-6bdc3f219354"). InnerVolumeSpecName "kube-api-access-srngn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.498714 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94fd5d57-5fb3-4b34-a545-6bdc3f219354-utilities" (OuterVolumeSpecName: "utilities") pod "94fd5d57-5fb3-4b34-a545-6bdc3f219354" (UID: "94fd5d57-5fb3-4b34-a545-6bdc3f219354"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.583563 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94fd5d57-5fb3-4b34-a545-6bdc3f219354-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.583600 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srngn\" (UniqueName: \"kubernetes.io/projected/94fd5d57-5fb3-4b34-a545-6bdc3f219354-kube-api-access-srngn\") on node \"crc\" DevicePath \"\"" Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.669140 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94fd5d57-5fb3-4b34-a545-6bdc3f219354-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94fd5d57-5fb3-4b34-a545-6bdc3f219354" (UID: "94fd5d57-5fb3-4b34-a545-6bdc3f219354"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.685175 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94fd5d57-5fb3-4b34-a545-6bdc3f219354-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.800720 4632 generic.go:334] "Generic (PLEG): container finished" podID="94fd5d57-5fb3-4b34-a545-6bdc3f219354" containerID="c7a9c2f67c2bc9ec6015b2a50c3cfdd53b590fc7fda70e8b949fe66cb4e0aff2" exitCode=0 Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.800798 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6vkp" event={"ID":"94fd5d57-5fb3-4b34-a545-6bdc3f219354","Type":"ContainerDied","Data":"c7a9c2f67c2bc9ec6015b2a50c3cfdd53b590fc7fda70e8b949fe66cb4e0aff2"} Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.800846 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6vkp" event={"ID":"94fd5d57-5fb3-4b34-a545-6bdc3f219354","Type":"ContainerDied","Data":"cfe1bf74852782b72ab02590cc17b2a400ee246e92a62ec6365c71796dd98bad"} Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.800849 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c6vkp" Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.800872 4632 scope.go:117] "RemoveContainer" containerID="c7a9c2f67c2bc9ec6015b2a50c3cfdd53b590fc7fda70e8b949fe66cb4e0aff2" Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.853406 4632 scope.go:117] "RemoveContainer" containerID="e5b275bdebfc8c0e84bc1511d86355567607af8b44b7d9fd25e952a957898f8e" Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.854064 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c6vkp"] Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.863143 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c6vkp"] Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.892927 4632 scope.go:117] "RemoveContainer" containerID="d6c5924afdfbc6243bd936794f31e2092a18f09ea985a6206e818699a4eab9bc" Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.930784 4632 scope.go:117] "RemoveContainer" containerID="c7a9c2f67c2bc9ec6015b2a50c3cfdd53b590fc7fda70e8b949fe66cb4e0aff2" Mar 13 10:41:46 crc kubenswrapper[4632]: E0313 10:41:46.931440 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7a9c2f67c2bc9ec6015b2a50c3cfdd53b590fc7fda70e8b949fe66cb4e0aff2\": container with ID starting with c7a9c2f67c2bc9ec6015b2a50c3cfdd53b590fc7fda70e8b949fe66cb4e0aff2 not found: ID does not exist" containerID="c7a9c2f67c2bc9ec6015b2a50c3cfdd53b590fc7fda70e8b949fe66cb4e0aff2" Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.931489 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7a9c2f67c2bc9ec6015b2a50c3cfdd53b590fc7fda70e8b949fe66cb4e0aff2"} err="failed to get container status \"c7a9c2f67c2bc9ec6015b2a50c3cfdd53b590fc7fda70e8b949fe66cb4e0aff2\": rpc error: code = NotFound desc = could not find container \"c7a9c2f67c2bc9ec6015b2a50c3cfdd53b590fc7fda70e8b949fe66cb4e0aff2\": container with ID starting with c7a9c2f67c2bc9ec6015b2a50c3cfdd53b590fc7fda70e8b949fe66cb4e0aff2 not found: ID does not exist" Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.931518 4632 scope.go:117] "RemoveContainer" containerID="e5b275bdebfc8c0e84bc1511d86355567607af8b44b7d9fd25e952a957898f8e" Mar 13 10:41:46 crc kubenswrapper[4632]: E0313 10:41:46.931806 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5b275bdebfc8c0e84bc1511d86355567607af8b44b7d9fd25e952a957898f8e\": container with ID starting with e5b275bdebfc8c0e84bc1511d86355567607af8b44b7d9fd25e952a957898f8e not found: ID does not exist" containerID="e5b275bdebfc8c0e84bc1511d86355567607af8b44b7d9fd25e952a957898f8e" Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.931830 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5b275bdebfc8c0e84bc1511d86355567607af8b44b7d9fd25e952a957898f8e"} err="failed to get container status \"e5b275bdebfc8c0e84bc1511d86355567607af8b44b7d9fd25e952a957898f8e\": rpc error: code = NotFound desc = could not find container \"e5b275bdebfc8c0e84bc1511d86355567607af8b44b7d9fd25e952a957898f8e\": container with ID starting with e5b275bdebfc8c0e84bc1511d86355567607af8b44b7d9fd25e952a957898f8e not found: ID does not exist" Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.931844 4632 scope.go:117] "RemoveContainer" 
containerID="d6c5924afdfbc6243bd936794f31e2092a18f09ea985a6206e818699a4eab9bc" Mar 13 10:41:46 crc kubenswrapper[4632]: E0313 10:41:46.932154 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6c5924afdfbc6243bd936794f31e2092a18f09ea985a6206e818699a4eab9bc\": container with ID starting with d6c5924afdfbc6243bd936794f31e2092a18f09ea985a6206e818699a4eab9bc not found: ID does not exist" containerID="d6c5924afdfbc6243bd936794f31e2092a18f09ea985a6206e818699a4eab9bc" Mar 13 10:41:46 crc kubenswrapper[4632]: I0313 10:41:46.932175 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6c5924afdfbc6243bd936794f31e2092a18f09ea985a6206e818699a4eab9bc"} err="failed to get container status \"d6c5924afdfbc6243bd936794f31e2092a18f09ea985a6206e818699a4eab9bc\": rpc error: code = NotFound desc = could not find container \"d6c5924afdfbc6243bd936794f31e2092a18f09ea985a6206e818699a4eab9bc\": container with ID starting with d6c5924afdfbc6243bd936794f31e2092a18f09ea985a6206e818699a4eab9bc not found: ID does not exist" Mar 13 10:41:48 crc kubenswrapper[4632]: I0313 10:41:48.055729 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94fd5d57-5fb3-4b34-a545-6bdc3f219354" path="/var/lib/kubelet/pods/94fd5d57-5fb3-4b34-a545-6bdc3f219354/volumes" Mar 13 10:42:00 crc kubenswrapper[4632]: I0313 10:42:00.148178 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556642-f6lzb"] Mar 13 10:42:00 crc kubenswrapper[4632]: E0313 10:42:00.150135 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94fd5d57-5fb3-4b34-a545-6bdc3f219354" containerName="extract-utilities" Mar 13 10:42:00 crc kubenswrapper[4632]: I0313 10:42:00.150245 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="94fd5d57-5fb3-4b34-a545-6bdc3f219354" containerName="extract-utilities" Mar 13 10:42:00 crc kubenswrapper[4632]: E0313 10:42:00.150339 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94fd5d57-5fb3-4b34-a545-6bdc3f219354" containerName="extract-content" Mar 13 10:42:00 crc kubenswrapper[4632]: I0313 10:42:00.150405 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="94fd5d57-5fb3-4b34-a545-6bdc3f219354" containerName="extract-content" Mar 13 10:42:00 crc kubenswrapper[4632]: E0313 10:42:00.150473 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94fd5d57-5fb3-4b34-a545-6bdc3f219354" containerName="registry-server" Mar 13 10:42:00 crc kubenswrapper[4632]: I0313 10:42:00.150536 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="94fd5d57-5fb3-4b34-a545-6bdc3f219354" containerName="registry-server" Mar 13 10:42:00 crc kubenswrapper[4632]: I0313 10:42:00.150813 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="94fd5d57-5fb3-4b34-a545-6bdc3f219354" containerName="registry-server" Mar 13 10:42:00 crc kubenswrapper[4632]: I0313 10:42:00.152693 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556642-f6lzb" Mar 13 10:42:00 crc kubenswrapper[4632]: I0313 10:42:00.154592 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:42:00 crc kubenswrapper[4632]: I0313 10:42:00.155472 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:42:00 crc kubenswrapper[4632]: I0313 10:42:00.155782 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:42:00 crc kubenswrapper[4632]: I0313 10:42:00.161094 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556642-f6lzb"] Mar 13 10:42:00 crc kubenswrapper[4632]: I0313 10:42:00.279236 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lqks\" (UniqueName: \"kubernetes.io/projected/a6df4a28-3b7b-4904-aa41-62caa26889a8-kube-api-access-8lqks\") pod \"auto-csr-approver-29556642-f6lzb\" (UID: \"a6df4a28-3b7b-4904-aa41-62caa26889a8\") " pod="openshift-infra/auto-csr-approver-29556642-f6lzb" Mar 13 10:42:00 crc kubenswrapper[4632]: I0313 10:42:00.381073 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lqks\" (UniqueName: \"kubernetes.io/projected/a6df4a28-3b7b-4904-aa41-62caa26889a8-kube-api-access-8lqks\") pod \"auto-csr-approver-29556642-f6lzb\" (UID: \"a6df4a28-3b7b-4904-aa41-62caa26889a8\") " pod="openshift-infra/auto-csr-approver-29556642-f6lzb" Mar 13 10:42:00 crc kubenswrapper[4632]: I0313 10:42:00.405638 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lqks\" (UniqueName: \"kubernetes.io/projected/a6df4a28-3b7b-4904-aa41-62caa26889a8-kube-api-access-8lqks\") pod \"auto-csr-approver-29556642-f6lzb\" (UID: \"a6df4a28-3b7b-4904-aa41-62caa26889a8\") " pod="openshift-infra/auto-csr-approver-29556642-f6lzb" Mar 13 10:42:00 crc kubenswrapper[4632]: I0313 10:42:00.476633 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556642-f6lzb" Mar 13 10:42:00 crc kubenswrapper[4632]: I0313 10:42:00.977322 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556642-f6lzb"] Mar 13 10:42:01 crc kubenswrapper[4632]: I0313 10:42:01.941129 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556642-f6lzb" event={"ID":"a6df4a28-3b7b-4904-aa41-62caa26889a8","Type":"ContainerStarted","Data":"f9e840b0d2d0ed339be1bb29f4a60a1df28737f0945b06042560ed3f762bf5b1"} Mar 13 10:42:02 crc kubenswrapper[4632]: I0313 10:42:02.952693 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556642-f6lzb" event={"ID":"a6df4a28-3b7b-4904-aa41-62caa26889a8","Type":"ContainerStarted","Data":"2360a4309504ac747bcde26fcceae28cb04d811f34c5f4e463b65c45b06c70f5"} Mar 13 10:42:02 crc kubenswrapper[4632]: I0313 10:42:02.975079 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556642-f6lzb" podStartSLOduration=1.969066635 podStartE2EDuration="2.975052425s" podCreationTimestamp="2026-03-13 10:42:00 +0000 UTC" firstStartedPulling="2026-03-13 10:42:00.998567251 +0000 UTC m=+2295.021097374" lastFinishedPulling="2026-03-13 10:42:02.004553031 +0000 UTC m=+2296.027083164" observedRunningTime="2026-03-13 10:42:02.965693555 +0000 UTC m=+2296.988223688" watchObservedRunningTime="2026-03-13 10:42:02.975052425 +0000 UTC m=+2296.997582558" Mar 13 10:42:03 crc kubenswrapper[4632]: I0313 10:42:03.965851 4632 generic.go:334] "Generic (PLEG): container finished" podID="a6df4a28-3b7b-4904-aa41-62caa26889a8" containerID="2360a4309504ac747bcde26fcceae28cb04d811f34c5f4e463b65c45b06c70f5" exitCode=0 Mar 13 10:42:03 crc kubenswrapper[4632]: I0313 10:42:03.965897 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556642-f6lzb" event={"ID":"a6df4a28-3b7b-4904-aa41-62caa26889a8","Type":"ContainerDied","Data":"2360a4309504ac747bcde26fcceae28cb04d811f34c5f4e463b65c45b06c70f5"} Mar 13 10:42:05 crc kubenswrapper[4632]: I0313 10:42:05.412003 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556642-f6lzb" Mar 13 10:42:05 crc kubenswrapper[4632]: I0313 10:42:05.492163 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lqks\" (UniqueName: \"kubernetes.io/projected/a6df4a28-3b7b-4904-aa41-62caa26889a8-kube-api-access-8lqks\") pod \"a6df4a28-3b7b-4904-aa41-62caa26889a8\" (UID: \"a6df4a28-3b7b-4904-aa41-62caa26889a8\") " Mar 13 10:42:05 crc kubenswrapper[4632]: I0313 10:42:05.499574 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6df4a28-3b7b-4904-aa41-62caa26889a8-kube-api-access-8lqks" (OuterVolumeSpecName: "kube-api-access-8lqks") pod "a6df4a28-3b7b-4904-aa41-62caa26889a8" (UID: "a6df4a28-3b7b-4904-aa41-62caa26889a8"). InnerVolumeSpecName "kube-api-access-8lqks". 
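
The pod_startup_latency_tracker line above encodes a small identity worth spelling out: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), since the startup SLI excludes pull time. A sketch reproducing the arithmetic from the logged timestamps; the result differs from the logged SLO value in the last couple of digits (about 10ns), presumably from how the tracker samples its clocks:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	parse := func(s string) time.Time {
    		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	created := parse("2026-03-13 10:42:00 +0000 UTC")
    	firstPull := parse("2026-03-13 10:42:00.998567251 +0000 UTC")
    	lastPull := parse("2026-03-13 10:42:02.004553031 +0000 UTC")
    	observed := parse("2026-03-13 10:42:02.975052425 +0000 UTC")

    	e2e := observed.Sub(created)    // podStartE2EDuration: 2.975052425s
    	pull := lastPull.Sub(firstPull) // image-pull window: 1.00598578s
    	slo := e2e - pull               // podStartSLOduration: 1.969066645s
    	fmt.Println(e2e, pull, slo)
    }
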
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:42:05 crc kubenswrapper[4632]: I0313 10:42:05.595522 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lqks\" (UniqueName: \"kubernetes.io/projected/a6df4a28-3b7b-4904-aa41-62caa26889a8-kube-api-access-8lqks\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:05 crc kubenswrapper[4632]: I0313 10:42:05.982519 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556642-f6lzb" event={"ID":"a6df4a28-3b7b-4904-aa41-62caa26889a8","Type":"ContainerDied","Data":"f9e840b0d2d0ed339be1bb29f4a60a1df28737f0945b06042560ed3f762bf5b1"} Mar 13 10:42:05 crc kubenswrapper[4632]: I0313 10:42:05.982556 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9e840b0d2d0ed339be1bb29f4a60a1df28737f0945b06042560ed3f762bf5b1" Mar 13 10:42:05 crc kubenswrapper[4632]: I0313 10:42:05.982562 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556642-f6lzb" Mar 13 10:42:06 crc kubenswrapper[4632]: I0313 10:42:06.093709 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556636-zncpw"] Mar 13 10:42:06 crc kubenswrapper[4632]: I0313 10:42:06.112311 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556636-zncpw"] Mar 13 10:42:08 crc kubenswrapper[4632]: I0313 10:42:08.064318 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb201b3-b479-4877-a996-58045d0720c4" path="/var/lib/kubelet/pods/7cb201b3-b479-4877-a996-58045d0720c4/volumes" Mar 13 10:42:09 crc kubenswrapper[4632]: I0313 10:42:09.008996 4632 generic.go:334] "Generic (PLEG): container finished" podID="41861d23-3e34-4f91-bafc-1b7eeee125db" containerID="8bcf160168d0bd44e7bdc4cd090dfd9a1207f38af53c4c136868bdc7acd63fcf" exitCode=0 Mar 13 10:42:09 crc kubenswrapper[4632]: I0313 10:42:09.009119 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" event={"ID":"41861d23-3e34-4f91-bafc-1b7eeee125db","Type":"ContainerDied","Data":"8bcf160168d0bd44e7bdc4cd090dfd9a1207f38af53c4c136868bdc7acd63fcf"} Mar 13 10:42:09 crc kubenswrapper[4632]: I0313 10:42:09.379651 4632 scope.go:117] "RemoveContainer" containerID="1d5789598fed395c0d259939fb11bb98aa8eec3b7168c00349a4a3635d4bd5ce" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.433385 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.464635 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.464686 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.519967 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-telemetry-combined-ca-bundle\") pod \"41861d23-3e34-4f91-bafc-1b7eeee125db\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.520051 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"41861d23-3e34-4f91-bafc-1b7eeee125db\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.520121 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-inventory\") pod \"41861d23-3e34-4f91-bafc-1b7eeee125db\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.520157 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzlkb\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-kube-api-access-rzlkb\") pod \"41861d23-3e34-4f91-bafc-1b7eeee125db\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.520192 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-neutron-metadata-combined-ca-bundle\") pod \"41861d23-3e34-4f91-bafc-1b7eeee125db\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.520332 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"41861d23-3e34-4f91-bafc-1b7eeee125db\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.520372 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-repo-setup-combined-ca-bundle\") pod \"41861d23-3e34-4f91-bafc-1b7eeee125db\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " Mar 13 
10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.520434 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-libvirt-combined-ca-bundle\") pod \"41861d23-3e34-4f91-bafc-1b7eeee125db\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.520470 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-ovn-combined-ca-bundle\") pod \"41861d23-3e34-4f91-bafc-1b7eeee125db\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.520539 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-bootstrap-combined-ca-bundle\") pod \"41861d23-3e34-4f91-bafc-1b7eeee125db\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.520578 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-ovn-default-certs-0\") pod \"41861d23-3e34-4f91-bafc-1b7eeee125db\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.520634 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-ssh-key-openstack-edpm-ipam\") pod \"41861d23-3e34-4f91-bafc-1b7eeee125db\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.520664 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"41861d23-3e34-4f91-bafc-1b7eeee125db\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.520696 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-nova-combined-ca-bundle\") pod \"41861d23-3e34-4f91-bafc-1b7eeee125db\" (UID: \"41861d23-3e34-4f91-bafc-1b7eeee125db\") " Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.527679 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "41861d23-3e34-4f91-bafc-1b7eeee125db" (UID: "41861d23-3e34-4f91-bafc-1b7eeee125db"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.529107 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-kube-api-access-rzlkb" (OuterVolumeSpecName: "kube-api-access-rzlkb") pod "41861d23-3e34-4f91-bafc-1b7eeee125db" (UID: "41861d23-3e34-4f91-bafc-1b7eeee125db"). 
InnerVolumeSpecName "kube-api-access-rzlkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.530201 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "41861d23-3e34-4f91-bafc-1b7eeee125db" (UID: "41861d23-3e34-4f91-bafc-1b7eeee125db"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.530571 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "41861d23-3e34-4f91-bafc-1b7eeee125db" (UID: "41861d23-3e34-4f91-bafc-1b7eeee125db"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.531065 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "41861d23-3e34-4f91-bafc-1b7eeee125db" (UID: "41861d23-3e34-4f91-bafc-1b7eeee125db"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.532337 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "41861d23-3e34-4f91-bafc-1b7eeee125db" (UID: "41861d23-3e34-4f91-bafc-1b7eeee125db"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.532810 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "41861d23-3e34-4f91-bafc-1b7eeee125db" (UID: "41861d23-3e34-4f91-bafc-1b7eeee125db"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.533597 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "41861d23-3e34-4f91-bafc-1b7eeee125db" (UID: "41861d23-3e34-4f91-bafc-1b7eeee125db"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.534096 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "41861d23-3e34-4f91-bafc-1b7eeee125db" (UID: "41861d23-3e34-4f91-bafc-1b7eeee125db"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.535222 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "41861d23-3e34-4f91-bafc-1b7eeee125db" (UID: "41861d23-3e34-4f91-bafc-1b7eeee125db"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.549111 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "41861d23-3e34-4f91-bafc-1b7eeee125db" (UID: "41861d23-3e34-4f91-bafc-1b7eeee125db"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.549663 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "41861d23-3e34-4f91-bafc-1b7eeee125db" (UID: "41861d23-3e34-4f91-bafc-1b7eeee125db"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.561607 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-inventory" (OuterVolumeSpecName: "inventory") pod "41861d23-3e34-4f91-bafc-1b7eeee125db" (UID: "41861d23-3e34-4f91-bafc-1b7eeee125db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.567014 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "41861d23-3e34-4f91-bafc-1b7eeee125db" (UID: "41861d23-3e34-4f91-bafc-1b7eeee125db"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.624856 4632 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.625096 4632 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.625162 4632 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.625265 4632 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.625360 4632 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.625479 4632 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.625617 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.625698 4632 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.625770 4632 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.625834 4632 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.625898 4632 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.625976 4632 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.626093 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzlkb\" (UniqueName: \"kubernetes.io/projected/41861d23-3e34-4f91-bafc-1b7eeee125db-kube-api-access-rzlkb\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:10 crc kubenswrapper[4632]: I0313 10:42:10.626166 4632 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41861d23-3e34-4f91-bafc-1b7eeee125db-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.030108 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" event={"ID":"41861d23-3e34-4f91-bafc-1b7eeee125db","Type":"ContainerDied","Data":"aa943b891e21fa2ddf52c1882c6b609ca996747cf346a718083b0fc64cf76d58"} Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.030153 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa943b891e21fa2ddf52c1882c6b609ca996747cf346a718083b0fc64cf76d58" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.030235 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.124424 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6"] Mar 13 10:42:11 crc kubenswrapper[4632]: E0313 10:42:11.124871 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6df4a28-3b7b-4904-aa41-62caa26889a8" containerName="oc" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.124896 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6df4a28-3b7b-4904-aa41-62caa26889a8" containerName="oc" Mar 13 10:42:11 crc kubenswrapper[4632]: E0313 10:42:11.124916 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41861d23-3e34-4f91-bafc-1b7eeee125db" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.124927 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="41861d23-3e34-4f91-bafc-1b7eeee125db" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.125180 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6df4a28-3b7b-4904-aa41-62caa26889a8" containerName="oc" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.125225 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="41861d23-3e34-4f91-bafc-1b7eeee125db" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.125992 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.128708 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qrzsx" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.138055 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.138133 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.138365 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.144591 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.150789 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6"] Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.237771 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6t9b6\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.238218 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6t9b6\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.239237 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/96ca1247-6625-4b08-b155-34c56f02ec04-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6t9b6\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.239439 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdp7x\" (UniqueName: \"kubernetes.io/projected/96ca1247-6625-4b08-b155-34c56f02ec04-kube-api-access-wdp7x\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6t9b6\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.240014 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6t9b6\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.342077 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6t9b6\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.342222 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6t9b6\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.342258 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6t9b6\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.342338 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/96ca1247-6625-4b08-b155-34c56f02ec04-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6t9b6\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.342369 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdp7x\" (UniqueName: \"kubernetes.io/projected/96ca1247-6625-4b08-b155-34c56f02ec04-kube-api-access-wdp7x\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6t9b6\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.344043 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/96ca1247-6625-4b08-b155-34c56f02ec04-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6t9b6\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.348557 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6t9b6\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.348742 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6t9b6\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.358764 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6t9b6\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.373845 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdp7x\" (UniqueName: \"kubernetes.io/projected/96ca1247-6625-4b08-b155-34c56f02ec04-kube-api-access-wdp7x\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6t9b6\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.453874 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:42:11 crc kubenswrapper[4632]: I0313 10:42:11.997356 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6"] Mar 13 10:42:12 crc kubenswrapper[4632]: I0313 10:42:12.040471 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" event={"ID":"96ca1247-6625-4b08-b155-34c56f02ec04","Type":"ContainerStarted","Data":"7cff807ce30996fc61ff5a53f3c16703d17afe3357255159f2daf0f8d5320802"} Mar 13 10:42:13 crc kubenswrapper[4632]: I0313 10:42:13.055054 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" event={"ID":"96ca1247-6625-4b08-b155-34c56f02ec04","Type":"ContainerStarted","Data":"b9c8440031ae4a9073037d4c844efc1d48630f961db4b97500123a9880df8925"} Mar 13 10:42:13 crc kubenswrapper[4632]: I0313 10:42:13.084079 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" podStartSLOduration=1.352222222 podStartE2EDuration="2.084059384s" podCreationTimestamp="2026-03-13 10:42:11 +0000 UTC" firstStartedPulling="2026-03-13 10:42:12.002032139 +0000 UTC m=+2306.024562272" lastFinishedPulling="2026-03-13 10:42:12.733869301 +0000 UTC m=+2306.756399434" observedRunningTime="2026-03-13 10:42:13.073523275 +0000 UTC m=+2307.096053438" watchObservedRunningTime="2026-03-13 10:42:13.084059384 +0000 UTC m=+2307.106589517" Mar 13 10:42:34 crc kubenswrapper[4632]: I0313 10:42:34.246827 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gnw2p"] Mar 13 10:42:34 crc kubenswrapper[4632]: I0313 10:42:34.249207 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:34 crc kubenswrapper[4632]: I0313 10:42:34.266446 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gnw2p"] Mar 13 10:42:34 crc kubenswrapper[4632]: I0313 10:42:34.277585 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-utilities\") pod \"community-operators-gnw2p\" (UID: \"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d\") " pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:34 crc kubenswrapper[4632]: I0313 10:42:34.277735 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-catalog-content\") pod \"community-operators-gnw2p\" (UID: \"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d\") " pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:34 crc kubenswrapper[4632]: I0313 10:42:34.277827 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlqlv\" (UniqueName: \"kubernetes.io/projected/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-kube-api-access-vlqlv\") pod \"community-operators-gnw2p\" (UID: \"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d\") " pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:34 crc kubenswrapper[4632]: I0313 10:42:34.379778 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-utilities\") pod \"community-operators-gnw2p\" (UID: \"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d\") " pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:34 crc kubenswrapper[4632]: I0313 10:42:34.379852 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-catalog-content\") pod \"community-operators-gnw2p\" (UID: \"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d\") " pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:34 crc kubenswrapper[4632]: I0313 10:42:34.379929 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlqlv\" (UniqueName: \"kubernetes.io/projected/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-kube-api-access-vlqlv\") pod \"community-operators-gnw2p\" (UID: \"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d\") " pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:34 crc kubenswrapper[4632]: I0313 10:42:34.380407 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-utilities\") pod \"community-operators-gnw2p\" (UID: \"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d\") " pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:34 crc kubenswrapper[4632]: I0313 10:42:34.380432 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-catalog-content\") pod \"community-operators-gnw2p\" (UID: \"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d\") " pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:34 crc kubenswrapper[4632]: I0313 10:42:34.402390 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vlqlv\" (UniqueName: \"kubernetes.io/projected/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-kube-api-access-vlqlv\") pod \"community-operators-gnw2p\" (UID: \"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d\") " pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:34 crc kubenswrapper[4632]: I0313 10:42:34.576733 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:35 crc kubenswrapper[4632]: I0313 10:42:35.217025 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gnw2p"] Mar 13 10:42:35 crc kubenswrapper[4632]: I0313 10:42:35.294568 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnw2p" event={"ID":"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d","Type":"ContainerStarted","Data":"5d8fa1de309b7087e5b0807d70e759ae4655eeeff831ca9798400cbf8bccd783"} Mar 13 10:42:36 crc kubenswrapper[4632]: I0313 10:42:36.306926 4632 generic.go:334] "Generic (PLEG): container finished" podID="cf7c7e9e-1b37-4be4-916f-a5a90a6db26d" containerID="3dd84647257d4d428f925670105498da793e6d9758ec9cb89c02c2e2a54e558f" exitCode=0 Mar 13 10:42:36 crc kubenswrapper[4632]: I0313 10:42:36.307139 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnw2p" event={"ID":"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d","Type":"ContainerDied","Data":"3dd84647257d4d428f925670105498da793e6d9758ec9cb89c02c2e2a54e558f"} Mar 13 10:42:37 crc kubenswrapper[4632]: I0313 10:42:37.318302 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnw2p" event={"ID":"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d","Type":"ContainerStarted","Data":"8702aa0a52c8778ee09c744c907f2b88b34171b80b2c62359a090363a7ecc8ac"} Mar 13 10:42:38 crc kubenswrapper[4632]: I0313 10:42:38.329981 4632 generic.go:334] "Generic (PLEG): container finished" podID="cf7c7e9e-1b37-4be4-916f-a5a90a6db26d" containerID="8702aa0a52c8778ee09c744c907f2b88b34171b80b2c62359a090363a7ecc8ac" exitCode=0 Mar 13 10:42:38 crc kubenswrapper[4632]: I0313 10:42:38.330341 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnw2p" event={"ID":"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d","Type":"ContainerDied","Data":"8702aa0a52c8778ee09c744c907f2b88b34171b80b2c62359a090363a7ecc8ac"} Mar 13 10:42:39 crc kubenswrapper[4632]: I0313 10:42:39.342426 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnw2p" event={"ID":"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d","Type":"ContainerStarted","Data":"81c2fc126faa38cd627c0814225a882d113c0439776ae9c08105c81325a2f0d5"} Mar 13 10:42:39 crc kubenswrapper[4632]: I0313 10:42:39.375910 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gnw2p" podStartSLOduration=2.8662370839999998 podStartE2EDuration="5.375892s" podCreationTimestamp="2026-03-13 10:42:34 +0000 UTC" firstStartedPulling="2026-03-13 10:42:36.311667874 +0000 UTC m=+2330.334198007" lastFinishedPulling="2026-03-13 10:42:38.82132279 +0000 UTC m=+2332.843852923" observedRunningTime="2026-03-13 10:42:39.365892244 +0000 UTC m=+2333.388422387" watchObservedRunningTime="2026-03-13 10:42:39.375892 +0000 UTC m=+2333.398422133" Mar 13 10:42:40 crc kubenswrapper[4632]: I0313 10:42:40.461431 4632 patch_prober.go:28] interesting 
pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:42:40 crc kubenswrapper[4632]: I0313 10:42:40.461757 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:42:40 crc kubenswrapper[4632]: I0313 10:42:40.461818 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:42:40 crc kubenswrapper[4632]: I0313 10:42:40.462926 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 10:42:40 crc kubenswrapper[4632]: I0313 10:42:40.463023 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" gracePeriod=600 Mar 13 10:42:40 crc kubenswrapper[4632]: E0313 10:42:40.593310 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:42:41 crc kubenswrapper[4632]: I0313 10:42:41.360184 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" exitCode=0 Mar 13 10:42:41 crc kubenswrapper[4632]: I0313 10:42:41.360710 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20"} Mar 13 10:42:41 crc kubenswrapper[4632]: I0313 10:42:41.360760 4632 scope.go:117] "RemoveContainer" containerID="2bb4e222f4f89a1d4e4bebc809fc60cc762d7ea9b6811f4bcc9cb78c179cd0bd" Mar 13 10:42:41 crc kubenswrapper[4632]: I0313 10:42:41.361441 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:42:41 crc kubenswrapper[4632]: E0313 10:42:41.361744 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:42:44 crc kubenswrapper[4632]: I0313 10:42:44.578052 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:44 crc kubenswrapper[4632]: I0313 10:42:44.578393 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:44 crc kubenswrapper[4632]: I0313 10:42:44.630381 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:45 crc kubenswrapper[4632]: I0313 10:42:45.463552 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:45 crc kubenswrapper[4632]: I0313 10:42:45.515359 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gnw2p"] Mar 13 10:42:47 crc kubenswrapper[4632]: I0313 10:42:47.432027 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gnw2p" podUID="cf7c7e9e-1b37-4be4-916f-a5a90a6db26d" containerName="registry-server" containerID="cri-o://81c2fc126faa38cd627c0814225a882d113c0439776ae9c08105c81325a2f0d5" gracePeriod=2 Mar 13 10:42:47 crc kubenswrapper[4632]: I0313 10:42:47.896907 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.059728 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlqlv\" (UniqueName: \"kubernetes.io/projected/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-kube-api-access-vlqlv\") pod \"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d\" (UID: \"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d\") " Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.059833 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-catalog-content\") pod \"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d\" (UID: \"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d\") " Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.059924 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-utilities\") pod \"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d\" (UID: \"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d\") " Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.061606 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-utilities" (OuterVolumeSpecName: "utilities") pod "cf7c7e9e-1b37-4be4-916f-a5a90a6db26d" (UID: "cf7c7e9e-1b37-4be4-916f-a5a90a6db26d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.084732 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-kube-api-access-vlqlv" (OuterVolumeSpecName: "kube-api-access-vlqlv") pod "cf7c7e9e-1b37-4be4-916f-a5a90a6db26d" (UID: "cf7c7e9e-1b37-4be4-916f-a5a90a6db26d"). InnerVolumeSpecName "kube-api-access-vlqlv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.133396 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf7c7e9e-1b37-4be4-916f-a5a90a6db26d" (UID: "cf7c7e9e-1b37-4be4-916f-a5a90a6db26d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.162163 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlqlv\" (UniqueName: \"kubernetes.io/projected/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-kube-api-access-vlqlv\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.162471 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.162483 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.447289 4632 generic.go:334] "Generic (PLEG): container finished" podID="cf7c7e9e-1b37-4be4-916f-a5a90a6db26d" containerID="81c2fc126faa38cd627c0814225a882d113c0439776ae9c08105c81325a2f0d5" exitCode=0 Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.447345 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnw2p" event={"ID":"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d","Type":"ContainerDied","Data":"81c2fc126faa38cd627c0814225a882d113c0439776ae9c08105c81325a2f0d5"} Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.447381 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnw2p" event={"ID":"cf7c7e9e-1b37-4be4-916f-a5a90a6db26d","Type":"ContainerDied","Data":"5d8fa1de309b7087e5b0807d70e759ae4655eeeff831ca9798400cbf8bccd783"} Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.447404 4632 scope.go:117] "RemoveContainer" containerID="81c2fc126faa38cd627c0814225a882d113c0439776ae9c08105c81325a2f0d5" Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.447431 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gnw2p" Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.482061 4632 scope.go:117] "RemoveContainer" containerID="8702aa0a52c8778ee09c744c907f2b88b34171b80b2c62359a090363a7ecc8ac" Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.490749 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gnw2p"] Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.499134 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gnw2p"] Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.519105 4632 scope.go:117] "RemoveContainer" containerID="3dd84647257d4d428f925670105498da793e6d9758ec9cb89c02c2e2a54e558f" Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.562214 4632 scope.go:117] "RemoveContainer" containerID="81c2fc126faa38cd627c0814225a882d113c0439776ae9c08105c81325a2f0d5" Mar 13 10:42:48 crc kubenswrapper[4632]: E0313 10:42:48.562642 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81c2fc126faa38cd627c0814225a882d113c0439776ae9c08105c81325a2f0d5\": container with ID starting with 81c2fc126faa38cd627c0814225a882d113c0439776ae9c08105c81325a2f0d5 not found: ID does not exist" containerID="81c2fc126faa38cd627c0814225a882d113c0439776ae9c08105c81325a2f0d5" Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.562674 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81c2fc126faa38cd627c0814225a882d113c0439776ae9c08105c81325a2f0d5"} err="failed to get container status \"81c2fc126faa38cd627c0814225a882d113c0439776ae9c08105c81325a2f0d5\": rpc error: code = NotFound desc = could not find container \"81c2fc126faa38cd627c0814225a882d113c0439776ae9c08105c81325a2f0d5\": container with ID starting with 81c2fc126faa38cd627c0814225a882d113c0439776ae9c08105c81325a2f0d5 not found: ID does not exist" Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.562696 4632 scope.go:117] "RemoveContainer" containerID="8702aa0a52c8778ee09c744c907f2b88b34171b80b2c62359a090363a7ecc8ac" Mar 13 10:42:48 crc kubenswrapper[4632]: E0313 10:42:48.563105 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8702aa0a52c8778ee09c744c907f2b88b34171b80b2c62359a090363a7ecc8ac\": container with ID starting with 8702aa0a52c8778ee09c744c907f2b88b34171b80b2c62359a090363a7ecc8ac not found: ID does not exist" containerID="8702aa0a52c8778ee09c744c907f2b88b34171b80b2c62359a090363a7ecc8ac" Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.563131 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8702aa0a52c8778ee09c744c907f2b88b34171b80b2c62359a090363a7ecc8ac"} err="failed to get container status \"8702aa0a52c8778ee09c744c907f2b88b34171b80b2c62359a090363a7ecc8ac\": rpc error: code = NotFound desc = could not find container \"8702aa0a52c8778ee09c744c907f2b88b34171b80b2c62359a090363a7ecc8ac\": container with ID starting with 8702aa0a52c8778ee09c744c907f2b88b34171b80b2c62359a090363a7ecc8ac not found: ID does not exist" Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.563145 4632 scope.go:117] "RemoveContainer" containerID="3dd84647257d4d428f925670105498da793e6d9758ec9cb89c02c2e2a54e558f" Mar 13 10:42:48 crc kubenswrapper[4632]: E0313 10:42:48.563417 4632 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3dd84647257d4d428f925670105498da793e6d9758ec9cb89c02c2e2a54e558f\": container with ID starting with 3dd84647257d4d428f925670105498da793e6d9758ec9cb89c02c2e2a54e558f not found: ID does not exist" containerID="3dd84647257d4d428f925670105498da793e6d9758ec9cb89c02c2e2a54e558f" Mar 13 10:42:48 crc kubenswrapper[4632]: I0313 10:42:48.563439 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd84647257d4d428f925670105498da793e6d9758ec9cb89c02c2e2a54e558f"} err="failed to get container status \"3dd84647257d4d428f925670105498da793e6d9758ec9cb89c02c2e2a54e558f\": rpc error: code = NotFound desc = could not find container \"3dd84647257d4d428f925670105498da793e6d9758ec9cb89c02c2e2a54e558f\": container with ID starting with 3dd84647257d4d428f925670105498da793e6d9758ec9cb89c02c2e2a54e558f not found: ID does not exist" Mar 13 10:42:50 crc kubenswrapper[4632]: I0313 10:42:50.054655 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf7c7e9e-1b37-4be4-916f-a5a90a6db26d" path="/var/lib/kubelet/pods/cf7c7e9e-1b37-4be4-916f-a5a90a6db26d/volumes" Mar 13 10:42:54 crc kubenswrapper[4632]: I0313 10:42:54.043897 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:42:54 crc kubenswrapper[4632]: E0313 10:42:54.044773 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:43:08 crc kubenswrapper[4632]: I0313 10:43:08.059518 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:43:08 crc kubenswrapper[4632]: E0313 10:43:08.060405 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:43:19 crc kubenswrapper[4632]: I0313 10:43:19.043754 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:43:19 crc kubenswrapper[4632]: E0313 10:43:19.044642 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:43:19 crc kubenswrapper[4632]: I0313 10:43:19.720817 4632 generic.go:334] "Generic (PLEG): container finished" podID="96ca1247-6625-4b08-b155-34c56f02ec04" containerID="b9c8440031ae4a9073037d4c844efc1d48630f961db4b97500123a9880df8925" exitCode=0 Mar 13 10:43:19 crc kubenswrapper[4632]: I0313 10:43:19.720866 4632 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" event={"ID":"96ca1247-6625-4b08-b155-34c56f02ec04","Type":"ContainerDied","Data":"b9c8440031ae4a9073037d4c844efc1d48630f961db4b97500123a9880df8925"} Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.293900 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.401577 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdp7x\" (UniqueName: \"kubernetes.io/projected/96ca1247-6625-4b08-b155-34c56f02ec04-kube-api-access-wdp7x\") pod \"96ca1247-6625-4b08-b155-34c56f02ec04\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.402186 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/96ca1247-6625-4b08-b155-34c56f02ec04-ovncontroller-config-0\") pod \"96ca1247-6625-4b08-b155-34c56f02ec04\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.402222 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-ovn-combined-ca-bundle\") pod \"96ca1247-6625-4b08-b155-34c56f02ec04\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.402476 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-inventory\") pod \"96ca1247-6625-4b08-b155-34c56f02ec04\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.402512 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-ssh-key-openstack-edpm-ipam\") pod \"96ca1247-6625-4b08-b155-34c56f02ec04\" (UID: \"96ca1247-6625-4b08-b155-34c56f02ec04\") " Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.410272 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "96ca1247-6625-4b08-b155-34c56f02ec04" (UID: "96ca1247-6625-4b08-b155-34c56f02ec04"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.410369 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96ca1247-6625-4b08-b155-34c56f02ec04-kube-api-access-wdp7x" (OuterVolumeSpecName: "kube-api-access-wdp7x") pod "96ca1247-6625-4b08-b155-34c56f02ec04" (UID: "96ca1247-6625-4b08-b155-34c56f02ec04"). InnerVolumeSpecName "kube-api-access-wdp7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.434223 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-inventory" (OuterVolumeSpecName: "inventory") pod "96ca1247-6625-4b08-b155-34c56f02ec04" (UID: "96ca1247-6625-4b08-b155-34c56f02ec04"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.436503 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96ca1247-6625-4b08-b155-34c56f02ec04" (UID: "96ca1247-6625-4b08-b155-34c56f02ec04"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.441232 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96ca1247-6625-4b08-b155-34c56f02ec04-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "96ca1247-6625-4b08-b155-34c56f02ec04" (UID: "96ca1247-6625-4b08-b155-34c56f02ec04"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.505026 4632 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.505060 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.505074 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdp7x\" (UniqueName: \"kubernetes.io/projected/96ca1247-6625-4b08-b155-34c56f02ec04-kube-api-access-wdp7x\") on node \"crc\" DevicePath \"\"" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.505083 4632 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/96ca1247-6625-4b08-b155-34c56f02ec04-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.505093 4632 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ca1247-6625-4b08-b155-34c56f02ec04-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.744093 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" event={"ID":"96ca1247-6625-4b08-b155-34c56f02ec04","Type":"ContainerDied","Data":"7cff807ce30996fc61ff5a53f3c16703d17afe3357255159f2daf0f8d5320802"} Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.744368 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cff807ce30996fc61ff5a53f3c16703d17afe3357255159f2daf0f8d5320802" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.744238 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6t9b6" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.831700 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq"] Mar 13 10:43:21 crc kubenswrapper[4632]: E0313 10:43:21.832121 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf7c7e9e-1b37-4be4-916f-a5a90a6db26d" containerName="extract-utilities" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.832134 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf7c7e9e-1b37-4be4-916f-a5a90a6db26d" containerName="extract-utilities" Mar 13 10:43:21 crc kubenswrapper[4632]: E0313 10:43:21.832147 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf7c7e9e-1b37-4be4-916f-a5a90a6db26d" containerName="extract-content" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.832155 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf7c7e9e-1b37-4be4-916f-a5a90a6db26d" containerName="extract-content" Mar 13 10:43:21 crc kubenswrapper[4632]: E0313 10:43:21.832163 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ca1247-6625-4b08-b155-34c56f02ec04" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.832170 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ca1247-6625-4b08-b155-34c56f02ec04" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 13 10:43:21 crc kubenswrapper[4632]: E0313 10:43:21.832192 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf7c7e9e-1b37-4be4-916f-a5a90a6db26d" containerName="registry-server" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.832198 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf7c7e9e-1b37-4be4-916f-a5a90a6db26d" containerName="registry-server" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.832385 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf7c7e9e-1b37-4be4-916f-a5a90a6db26d" containerName="registry-server" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.832400 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ca1247-6625-4b08-b155-34c56f02ec04" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.833001 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.838522 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.838603 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.838659 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qrzsx" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.838712 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.839428 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.839518 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.856060 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq"] Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.912550 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktxnr\" (UniqueName: \"kubernetes.io/projected/96e4ce1c-8f09-4563-864f-da1f95bdd500-kube-api-access-ktxnr\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.912608 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.912725 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.912815 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.912860 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:21 crc kubenswrapper[4632]: I0313 10:43:21.912962 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:22 crc kubenswrapper[4632]: I0313 10:43:22.015237 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktxnr\" (UniqueName: \"kubernetes.io/projected/96e4ce1c-8f09-4563-864f-da1f95bdd500-kube-api-access-ktxnr\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:22 crc kubenswrapper[4632]: I0313 10:43:22.015298 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:22 crc kubenswrapper[4632]: I0313 10:43:22.015362 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:22 crc kubenswrapper[4632]: I0313 10:43:22.015419 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:22 crc kubenswrapper[4632]: I0313 10:43:22.015458 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:22 crc kubenswrapper[4632]: I0313 10:43:22.015526 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: 
\"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:22 crc kubenswrapper[4632]: I0313 10:43:22.020696 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:22 crc kubenswrapper[4632]: I0313 10:43:22.020916 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:22 crc kubenswrapper[4632]: I0313 10:43:22.021403 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:22 crc kubenswrapper[4632]: I0313 10:43:22.028868 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:22 crc kubenswrapper[4632]: I0313 10:43:22.035074 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:22 crc kubenswrapper[4632]: I0313 10:43:22.039516 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktxnr\" (UniqueName: \"kubernetes.io/projected/96e4ce1c-8f09-4563-864f-da1f95bdd500-kube-api-access-ktxnr\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:22 crc kubenswrapper[4632]: I0313 10:43:22.153671 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:43:22 crc kubenswrapper[4632]: I0313 10:43:22.716104 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq"] Mar 13 10:43:22 crc kubenswrapper[4632]: I0313 10:43:22.753841 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" event={"ID":"96e4ce1c-8f09-4563-864f-da1f95bdd500","Type":"ContainerStarted","Data":"63699ebe9acf1d6d33dc9f40302fd0b79959f65b39fbb9fa8c035bf1300ba29c"} Mar 13 10:43:23 crc kubenswrapper[4632]: I0313 10:43:23.772225 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" event={"ID":"96e4ce1c-8f09-4563-864f-da1f95bdd500","Type":"ContainerStarted","Data":"85251a26d676b79e5e052771251527a54fb8c42ed8c25a05daa652dca5b3df9e"} Mar 13 10:43:23 crc kubenswrapper[4632]: I0313 10:43:23.797549 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" podStartSLOduration=2.360757036 podStartE2EDuration="2.797525478s" podCreationTimestamp="2026-03-13 10:43:21 +0000 UTC" firstStartedPulling="2026-03-13 10:43:22.728535836 +0000 UTC m=+2376.751065969" lastFinishedPulling="2026-03-13 10:43:23.165304278 +0000 UTC m=+2377.187834411" observedRunningTime="2026-03-13 10:43:23.786457066 +0000 UTC m=+2377.808987229" watchObservedRunningTime="2026-03-13 10:43:23.797525478 +0000 UTC m=+2377.820055631" Mar 13 10:43:34 crc kubenswrapper[4632]: I0313 10:43:34.044498 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:43:34 crc kubenswrapper[4632]: E0313 10:43:34.045308 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:43:45 crc kubenswrapper[4632]: I0313 10:43:45.044466 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:43:45 crc kubenswrapper[4632]: E0313 10:43:45.045355 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:44:00 crc kubenswrapper[4632]: I0313 10:44:00.044433 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:44:00 crc kubenswrapper[4632]: E0313 10:44:00.045187 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:44:00 crc kubenswrapper[4632]: I0313 10:44:00.147676 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556644-dq2jd"] Mar 13 10:44:00 crc kubenswrapper[4632]: I0313 10:44:00.149494 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556644-dq2jd" Mar 13 10:44:00 crc kubenswrapper[4632]: I0313 10:44:00.152850 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:44:00 crc kubenswrapper[4632]: I0313 10:44:00.153116 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:44:00 crc kubenswrapper[4632]: I0313 10:44:00.155572 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:44:00 crc kubenswrapper[4632]: I0313 10:44:00.163171 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556644-dq2jd"] Mar 13 10:44:00 crc kubenswrapper[4632]: I0313 10:44:00.263779 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvzv7\" (UniqueName: \"kubernetes.io/projected/2462765e-6333-4e22-b4d7-ee2b2c6aa538-kube-api-access-kvzv7\") pod \"auto-csr-approver-29556644-dq2jd\" (UID: \"2462765e-6333-4e22-b4d7-ee2b2c6aa538\") " pod="openshift-infra/auto-csr-approver-29556644-dq2jd" Mar 13 10:44:00 crc kubenswrapper[4632]: I0313 10:44:00.366369 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvzv7\" (UniqueName: \"kubernetes.io/projected/2462765e-6333-4e22-b4d7-ee2b2c6aa538-kube-api-access-kvzv7\") pod \"auto-csr-approver-29556644-dq2jd\" (UID: \"2462765e-6333-4e22-b4d7-ee2b2c6aa538\") " pod="openshift-infra/auto-csr-approver-29556644-dq2jd" Mar 13 10:44:00 crc kubenswrapper[4632]: I0313 10:44:00.389404 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvzv7\" (UniqueName: \"kubernetes.io/projected/2462765e-6333-4e22-b4d7-ee2b2c6aa538-kube-api-access-kvzv7\") pod \"auto-csr-approver-29556644-dq2jd\" (UID: \"2462765e-6333-4e22-b4d7-ee2b2c6aa538\") " pod="openshift-infra/auto-csr-approver-29556644-dq2jd" Mar 13 10:44:00 crc kubenswrapper[4632]: I0313 10:44:00.475304 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556644-dq2jd" Mar 13 10:44:01 crc kubenswrapper[4632]: I0313 10:44:01.063624 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556644-dq2jd"] Mar 13 10:44:01 crc kubenswrapper[4632]: I0313 10:44:01.106579 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556644-dq2jd" event={"ID":"2462765e-6333-4e22-b4d7-ee2b2c6aa538","Type":"ContainerStarted","Data":"f56f0f67e1577faee7fca58378e1afee27b69eb2b7b50ae2443bfca5b6158d20"} Mar 13 10:44:03 crc kubenswrapper[4632]: I0313 10:44:03.130459 4632 generic.go:334] "Generic (PLEG): container finished" podID="2462765e-6333-4e22-b4d7-ee2b2c6aa538" containerID="07b2fe4a97569c9089b7972685eb914fd04195d02c9e7b239121095e54e42352" exitCode=0 Mar 13 10:44:03 crc kubenswrapper[4632]: I0313 10:44:03.131866 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556644-dq2jd" event={"ID":"2462765e-6333-4e22-b4d7-ee2b2c6aa538","Type":"ContainerDied","Data":"07b2fe4a97569c9089b7972685eb914fd04195d02c9e7b239121095e54e42352"} Mar 13 10:44:04 crc kubenswrapper[4632]: I0313 10:44:04.500878 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556644-dq2jd" Mar 13 10:44:04 crc kubenswrapper[4632]: I0313 10:44:04.658796 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvzv7\" (UniqueName: \"kubernetes.io/projected/2462765e-6333-4e22-b4d7-ee2b2c6aa538-kube-api-access-kvzv7\") pod \"2462765e-6333-4e22-b4d7-ee2b2c6aa538\" (UID: \"2462765e-6333-4e22-b4d7-ee2b2c6aa538\") " Mar 13 10:44:04 crc kubenswrapper[4632]: I0313 10:44:04.664414 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2462765e-6333-4e22-b4d7-ee2b2c6aa538-kube-api-access-kvzv7" (OuterVolumeSpecName: "kube-api-access-kvzv7") pod "2462765e-6333-4e22-b4d7-ee2b2c6aa538" (UID: "2462765e-6333-4e22-b4d7-ee2b2c6aa538"). InnerVolumeSpecName "kube-api-access-kvzv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:44:04 crc kubenswrapper[4632]: I0313 10:44:04.760984 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvzv7\" (UniqueName: \"kubernetes.io/projected/2462765e-6333-4e22-b4d7-ee2b2c6aa538-kube-api-access-kvzv7\") on node \"crc\" DevicePath \"\"" Mar 13 10:44:05 crc kubenswrapper[4632]: I0313 10:44:05.151972 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556644-dq2jd" event={"ID":"2462765e-6333-4e22-b4d7-ee2b2c6aa538","Type":"ContainerDied","Data":"f56f0f67e1577faee7fca58378e1afee27b69eb2b7b50ae2443bfca5b6158d20"} Mar 13 10:44:05 crc kubenswrapper[4632]: I0313 10:44:05.153415 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f56f0f67e1577faee7fca58378e1afee27b69eb2b7b50ae2443bfca5b6158d20" Mar 13 10:44:05 crc kubenswrapper[4632]: I0313 10:44:05.152415 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556644-dq2jd" Mar 13 10:44:05 crc kubenswrapper[4632]: I0313 10:44:05.577628 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556638-p7mdh"] Mar 13 10:44:05 crc kubenswrapper[4632]: I0313 10:44:05.587726 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556638-p7mdh"] Mar 13 10:44:06 crc kubenswrapper[4632]: I0313 10:44:06.056536 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="346e767a-d9dd-40e1-9ab3-2e4ec9184667" path="/var/lib/kubelet/pods/346e767a-d9dd-40e1-9ab3-2e4ec9184667/volumes" Mar 13 10:44:09 crc kubenswrapper[4632]: I0313 10:44:09.504441 4632 scope.go:117] "RemoveContainer" containerID="c166f0a830c16b65f03aba2171bb98a995fe4121f1b92036d629fce2afd52c26" Mar 13 10:44:12 crc kubenswrapper[4632]: I0313 10:44:12.044627 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:44:12 crc kubenswrapper[4632]: E0313 10:44:12.046356 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:44:14 crc kubenswrapper[4632]: I0313 10:44:14.229233 4632 generic.go:334] "Generic (PLEG): container finished" podID="96e4ce1c-8f09-4563-864f-da1f95bdd500" containerID="85251a26d676b79e5e052771251527a54fb8c42ed8c25a05daa652dca5b3df9e" exitCode=0 Mar 13 10:44:14 crc kubenswrapper[4632]: I0313 10:44:14.229327 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" event={"ID":"96e4ce1c-8f09-4563-864f-da1f95bdd500","Type":"ContainerDied","Data":"85251a26d676b79e5e052771251527a54fb8c42ed8c25a05daa652dca5b3df9e"} Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.692152 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.788903 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktxnr\" (UniqueName: \"kubernetes.io/projected/96e4ce1c-8f09-4563-864f-da1f95bdd500-kube-api-access-ktxnr\") pod \"96e4ce1c-8f09-4563-864f-da1f95bdd500\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.790101 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-ssh-key-openstack-edpm-ipam\") pod \"96e4ce1c-8f09-4563-864f-da1f95bdd500\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.790166 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-inventory\") pod \"96e4ce1c-8f09-4563-864f-da1f95bdd500\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.790212 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-nova-metadata-neutron-config-0\") pod \"96e4ce1c-8f09-4563-864f-da1f95bdd500\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.790258 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-neutron-ovn-metadata-agent-neutron-config-0\") pod \"96e4ce1c-8f09-4563-864f-da1f95bdd500\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.790416 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-neutron-metadata-combined-ca-bundle\") pod \"96e4ce1c-8f09-4563-864f-da1f95bdd500\" (UID: \"96e4ce1c-8f09-4563-864f-da1f95bdd500\") " Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.796393 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e4ce1c-8f09-4563-864f-da1f95bdd500-kube-api-access-ktxnr" (OuterVolumeSpecName: "kube-api-access-ktxnr") pod "96e4ce1c-8f09-4563-864f-da1f95bdd500" (UID: "96e4ce1c-8f09-4563-864f-da1f95bdd500"). InnerVolumeSpecName "kube-api-access-ktxnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.798607 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "96e4ce1c-8f09-4563-864f-da1f95bdd500" (UID: "96e4ce1c-8f09-4563-864f-da1f95bdd500"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.818081 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "96e4ce1c-8f09-4563-864f-da1f95bdd500" (UID: "96e4ce1c-8f09-4563-864f-da1f95bdd500"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.822407 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "96e4ce1c-8f09-4563-864f-da1f95bdd500" (UID: "96e4ce1c-8f09-4563-864f-da1f95bdd500"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.825475 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-inventory" (OuterVolumeSpecName: "inventory") pod "96e4ce1c-8f09-4563-864f-da1f95bdd500" (UID: "96e4ce1c-8f09-4563-864f-da1f95bdd500"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.828899 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96e4ce1c-8f09-4563-864f-da1f95bdd500" (UID: "96e4ce1c-8f09-4563-864f-da1f95bdd500"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.893666 4632 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.893715 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktxnr\" (UniqueName: \"kubernetes.io/projected/96e4ce1c-8f09-4563-864f-da1f95bdd500-kube-api-access-ktxnr\") on node \"crc\" DevicePath \"\"" Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.893727 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.893738 4632 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.893749 4632 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:44:15 crc kubenswrapper[4632]: I0313 10:44:15.893768 4632 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e4ce1c-8f09-4563-864f-da1f95bdd500-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.247472 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" event={"ID":"96e4ce1c-8f09-4563-864f-da1f95bdd500","Type":"ContainerDied","Data":"63699ebe9acf1d6d33dc9f40302fd0b79959f65b39fbb9fa8c035bf1300ba29c"} Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.247713 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63699ebe9acf1d6d33dc9f40302fd0b79959f65b39fbb9fa8c035bf1300ba29c" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.247518 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.395031 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh"] Mar 13 10:44:16 crc kubenswrapper[4632]: E0313 10:44:16.395517 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e4ce1c-8f09-4563-864f-da1f95bdd500" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.395543 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e4ce1c-8f09-4563-864f-da1f95bdd500" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 13 10:44:16 crc kubenswrapper[4632]: E0313 10:44:16.395589 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2462765e-6333-4e22-b4d7-ee2b2c6aa538" containerName="oc" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.395601 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="2462765e-6333-4e22-b4d7-ee2b2c6aa538" containerName="oc" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.395829 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="2462765e-6333-4e22-b4d7-ee2b2c6aa538" containerName="oc" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.395867 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e4ce1c-8f09-4563-864f-da1f95bdd500" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.396689 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.399062 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.399828 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.399996 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.400215 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.401274 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qrzsx" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.405586 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh"] Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.507414 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-skjrh\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.507471 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh7rq\" (UniqueName: \"kubernetes.io/projected/ed1a2c50-a476-43ca-9764-e0ebffb14134-kube-api-access-xh7rq\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-skjrh\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.507522 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-skjrh\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.507585 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-skjrh\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.507694 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-skjrh\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.610025 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-skjrh\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.610180 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-skjrh\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.610324 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-skjrh\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.610354 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh7rq\" (UniqueName: \"kubernetes.io/projected/ed1a2c50-a476-43ca-9764-e0ebffb14134-kube-api-access-xh7rq\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-skjrh\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.610379 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-skjrh\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.613811 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-skjrh\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.614322 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-skjrh\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.614780 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-skjrh\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.624076 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-skjrh\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.630708 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh7rq\" (UniqueName: \"kubernetes.io/projected/ed1a2c50-a476-43ca-9764-e0ebffb14134-kube-api-access-xh7rq\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-skjrh\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:16 crc kubenswrapper[4632]: I0313 10:44:16.712546 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:44:17 crc kubenswrapper[4632]: I0313 10:44:17.273522 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh"] Mar 13 10:44:18 crc kubenswrapper[4632]: I0313 10:44:18.276000 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" event={"ID":"ed1a2c50-a476-43ca-9764-e0ebffb14134","Type":"ContainerStarted","Data":"e94f04b40e30694707c6fd5089936e853269c95465d0464597019d512ad17ad4"} Mar 13 10:44:18 crc kubenswrapper[4632]: I0313 10:44:18.276314 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" event={"ID":"ed1a2c50-a476-43ca-9764-e0ebffb14134","Type":"ContainerStarted","Data":"251492aaae45a40b4ef377f82a35e7e430a26d0b96081ac467780ad353a89a5b"} Mar 13 10:44:18 crc kubenswrapper[4632]: I0313 10:44:18.312046 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" podStartSLOduration=1.68438618 podStartE2EDuration="2.312021216s" podCreationTimestamp="2026-03-13 10:44:16 +0000 UTC" firstStartedPulling="2026-03-13 10:44:17.27578921 +0000 UTC m=+2431.298319343" lastFinishedPulling="2026-03-13 10:44:17.903424256 +0000 UTC m=+2431.925954379" observedRunningTime="2026-03-13 10:44:18.303675811 +0000 UTC m=+2432.326205944" watchObservedRunningTime="2026-03-13 10:44:18.312021216 +0000 UTC m=+2432.334551349" Mar 13 10:44:27 crc kubenswrapper[4632]: I0313 10:44:27.044518 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:44:27 crc kubenswrapper[4632]: E0313 10:44:27.046143 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:44:41 crc kubenswrapper[4632]: I0313 10:44:41.046274 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:44:41 crc kubenswrapper[4632]: E0313 10:44:41.047114 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:44:56 crc kubenswrapper[4632]: I0313 10:44:56.045448 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:44:56 crc kubenswrapper[4632]: E0313 10:44:56.046205 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:45:00 crc kubenswrapper[4632]: I0313 10:45:00.156129 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8"] Mar 13 10:45:00 crc kubenswrapper[4632]: I0313 10:45:00.158107 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" Mar 13 10:45:00 crc kubenswrapper[4632]: I0313 10:45:00.160509 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 13 10:45:00 crc kubenswrapper[4632]: I0313 10:45:00.162511 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 13 10:45:00 crc kubenswrapper[4632]: I0313 10:45:00.221610 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8"] Mar 13 10:45:00 crc kubenswrapper[4632]: I0313 10:45:00.295920 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cf52265-21b3-40f0-a2f5-d379c03cc045-config-volume\") pod \"collect-profiles-29556645-4btb8\" (UID: \"8cf52265-21b3-40f0-a2f5-d379c03cc045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" Mar 13 10:45:00 crc kubenswrapper[4632]: I0313 10:45:00.296088 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cf52265-21b3-40f0-a2f5-d379c03cc045-secret-volume\") pod \"collect-profiles-29556645-4btb8\" (UID: \"8cf52265-21b3-40f0-a2f5-d379c03cc045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" Mar 13 10:45:00 crc kubenswrapper[4632]: I0313 10:45:00.296259 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8dpp\" (UniqueName: \"kubernetes.io/projected/8cf52265-21b3-40f0-a2f5-d379c03cc045-kube-api-access-h8dpp\") pod \"collect-profiles-29556645-4btb8\" (UID: \"8cf52265-21b3-40f0-a2f5-d379c03cc045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" Mar 13 10:45:00 crc kubenswrapper[4632]: I0313 10:45:00.398071 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cf52265-21b3-40f0-a2f5-d379c03cc045-secret-volume\") pod \"collect-profiles-29556645-4btb8\" (UID: \"8cf52265-21b3-40f0-a2f5-d379c03cc045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" Mar 13 10:45:00 crc kubenswrapper[4632]: I0313 10:45:00.398580 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8dpp\" (UniqueName: \"kubernetes.io/projected/8cf52265-21b3-40f0-a2f5-d379c03cc045-kube-api-access-h8dpp\") pod \"collect-profiles-29556645-4btb8\" (UID: \"8cf52265-21b3-40f0-a2f5-d379c03cc045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" Mar 13 10:45:00 crc kubenswrapper[4632]: I0313 10:45:00.398648 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cf52265-21b3-40f0-a2f5-d379c03cc045-config-volume\") 
pod \"collect-profiles-29556645-4btb8\" (UID: \"8cf52265-21b3-40f0-a2f5-d379c03cc045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" Mar 13 10:45:00 crc kubenswrapper[4632]: I0313 10:45:00.399555 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cf52265-21b3-40f0-a2f5-d379c03cc045-config-volume\") pod \"collect-profiles-29556645-4btb8\" (UID: \"8cf52265-21b3-40f0-a2f5-d379c03cc045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" Mar 13 10:45:00 crc kubenswrapper[4632]: I0313 10:45:00.414785 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cf52265-21b3-40f0-a2f5-d379c03cc045-secret-volume\") pod \"collect-profiles-29556645-4btb8\" (UID: \"8cf52265-21b3-40f0-a2f5-d379c03cc045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" Mar 13 10:45:00 crc kubenswrapper[4632]: I0313 10:45:00.421251 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8dpp\" (UniqueName: \"kubernetes.io/projected/8cf52265-21b3-40f0-a2f5-d379c03cc045-kube-api-access-h8dpp\") pod \"collect-profiles-29556645-4btb8\" (UID: \"8cf52265-21b3-40f0-a2f5-d379c03cc045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" Mar 13 10:45:00 crc kubenswrapper[4632]: I0313 10:45:00.512683 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" Mar 13 10:45:01 crc kubenswrapper[4632]: I0313 10:45:01.057691 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8"] Mar 13 10:45:01 crc kubenswrapper[4632]: I0313 10:45:01.702465 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" event={"ID":"8cf52265-21b3-40f0-a2f5-d379c03cc045","Type":"ContainerStarted","Data":"8e20db958c001216e89a657171c617c2e4d78b297bcd654a9af9c2d8d32242ac"} Mar 13 10:45:01 crc kubenswrapper[4632]: I0313 10:45:01.704111 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" event={"ID":"8cf52265-21b3-40f0-a2f5-d379c03cc045","Type":"ContainerStarted","Data":"8d57aa4aed604811ff55c6ede5103947b8bd4fba767956c296f00e30b4b5ac65"} Mar 13 10:45:01 crc kubenswrapper[4632]: I0313 10:45:01.728271 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" podStartSLOduration=1.728248449 podStartE2EDuration="1.728248449s" podCreationTimestamp="2026-03-13 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:45:01.723911482 +0000 UTC m=+2475.746441615" watchObservedRunningTime="2026-03-13 10:45:01.728248449 +0000 UTC m=+2475.750778582" Mar 13 10:45:02 crc kubenswrapper[4632]: I0313 10:45:02.712828 4632 generic.go:334] "Generic (PLEG): container finished" podID="8cf52265-21b3-40f0-a2f5-d379c03cc045" containerID="8e20db958c001216e89a657171c617c2e4d78b297bcd654a9af9c2d8d32242ac" exitCode=0 Mar 13 10:45:02 crc kubenswrapper[4632]: I0313 10:45:02.712884 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" event={"ID":"8cf52265-21b3-40f0-a2f5-d379c03cc045","Type":"ContainerDied","Data":"8e20db958c001216e89a657171c617c2e4d78b297bcd654a9af9c2d8d32242ac"} Mar 13 10:45:04 crc kubenswrapper[4632]: I0313 10:45:04.072013 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" Mar 13 10:45:04 crc kubenswrapper[4632]: I0313 10:45:04.190856 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cf52265-21b3-40f0-a2f5-d379c03cc045-config-volume\") pod \"8cf52265-21b3-40f0-a2f5-d379c03cc045\" (UID: \"8cf52265-21b3-40f0-a2f5-d379c03cc045\") " Mar 13 10:45:04 crc kubenswrapper[4632]: I0313 10:45:04.190920 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cf52265-21b3-40f0-a2f5-d379c03cc045-secret-volume\") pod \"8cf52265-21b3-40f0-a2f5-d379c03cc045\" (UID: \"8cf52265-21b3-40f0-a2f5-d379c03cc045\") " Mar 13 10:45:04 crc kubenswrapper[4632]: I0313 10:45:04.191107 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8dpp\" (UniqueName: \"kubernetes.io/projected/8cf52265-21b3-40f0-a2f5-d379c03cc045-kube-api-access-h8dpp\") pod \"8cf52265-21b3-40f0-a2f5-d379c03cc045\" (UID: \"8cf52265-21b3-40f0-a2f5-d379c03cc045\") " Mar 13 10:45:04 crc kubenswrapper[4632]: I0313 10:45:04.191569 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cf52265-21b3-40f0-a2f5-d379c03cc045-config-volume" (OuterVolumeSpecName: "config-volume") pod "8cf52265-21b3-40f0-a2f5-d379c03cc045" (UID: "8cf52265-21b3-40f0-a2f5-d379c03cc045"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:45:04 crc kubenswrapper[4632]: I0313 10:45:04.199127 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf52265-21b3-40f0-a2f5-d379c03cc045-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8cf52265-21b3-40f0-a2f5-d379c03cc045" (UID: "8cf52265-21b3-40f0-a2f5-d379c03cc045"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:45:04 crc kubenswrapper[4632]: I0313 10:45:04.199211 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cf52265-21b3-40f0-a2f5-d379c03cc045-kube-api-access-h8dpp" (OuterVolumeSpecName: "kube-api-access-h8dpp") pod "8cf52265-21b3-40f0-a2f5-d379c03cc045" (UID: "8cf52265-21b3-40f0-a2f5-d379c03cc045"). InnerVolumeSpecName "kube-api-access-h8dpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:45:04 crc kubenswrapper[4632]: I0313 10:45:04.293712 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8dpp\" (UniqueName: \"kubernetes.io/projected/8cf52265-21b3-40f0-a2f5-d379c03cc045-kube-api-access-h8dpp\") on node \"crc\" DevicePath \"\"" Mar 13 10:45:04 crc kubenswrapper[4632]: I0313 10:45:04.293758 4632 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cf52265-21b3-40f0-a2f5-d379c03cc045-config-volume\") on node \"crc\" DevicePath \"\"" Mar 13 10:45:04 crc kubenswrapper[4632]: I0313 10:45:04.293768 4632 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cf52265-21b3-40f0-a2f5-d379c03cc045-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 13 10:45:04 crc kubenswrapper[4632]: I0313 10:45:04.731685 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" event={"ID":"8cf52265-21b3-40f0-a2f5-d379c03cc045","Type":"ContainerDied","Data":"8d57aa4aed604811ff55c6ede5103947b8bd4fba767956c296f00e30b4b5ac65"} Mar 13 10:45:04 crc kubenswrapper[4632]: I0313 10:45:04.731964 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d57aa4aed604811ff55c6ede5103947b8bd4fba767956c296f00e30b4b5ac65" Mar 13 10:45:04 crc kubenswrapper[4632]: I0313 10:45:04.731793 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8" Mar 13 10:45:04 crc kubenswrapper[4632]: I0313 10:45:04.807620 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg"] Mar 13 10:45:04 crc kubenswrapper[4632]: I0313 10:45:04.817549 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556600-r9flg"] Mar 13 10:45:06 crc kubenswrapper[4632]: I0313 10:45:06.058720 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="528d3aa9-10bf-4029-a4d2-85768264fde8" path="/var/lib/kubelet/pods/528d3aa9-10bf-4029-a4d2-85768264fde8/volumes" Mar 13 10:45:09 crc kubenswrapper[4632]: I0313 10:45:09.588802 4632 scope.go:117] "RemoveContainer" containerID="da165dd4ae62fa2ea1c777c8125fcd4bfe4bd102f508da056f1a058689bba35e" Mar 13 10:45:11 crc kubenswrapper[4632]: I0313 10:45:11.044987 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:45:11 crc kubenswrapper[4632]: E0313 10:45:11.045449 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:45:26 crc kubenswrapper[4632]: I0313 10:45:26.044287 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:45:26 crc kubenswrapper[4632]: E0313 10:45:26.045223 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:45:38 crc kubenswrapper[4632]: I0313 10:45:38.053586 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:45:38 crc kubenswrapper[4632]: E0313 10:45:38.054565 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:45:51 crc kubenswrapper[4632]: I0313 10:45:51.044349 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:45:51 crc kubenswrapper[4632]: E0313 10:45:51.045689 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:46:00 crc kubenswrapper[4632]: I0313 10:46:00.143904 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556646-blkxp"] Mar 13 10:46:00 crc kubenswrapper[4632]: E0313 10:46:00.144922 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf52265-21b3-40f0-a2f5-d379c03cc045" containerName="collect-profiles" Mar 13 10:46:00 crc kubenswrapper[4632]: I0313 10:46:00.144939 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf52265-21b3-40f0-a2f5-d379c03cc045" containerName="collect-profiles" Mar 13 10:46:00 crc kubenswrapper[4632]: I0313 10:46:00.145170 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf52265-21b3-40f0-a2f5-d379c03cc045" containerName="collect-profiles" Mar 13 10:46:00 crc kubenswrapper[4632]: I0313 10:46:00.145872 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556646-blkxp" Mar 13 10:46:00 crc kubenswrapper[4632]: I0313 10:46:00.148151 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:46:00 crc kubenswrapper[4632]: I0313 10:46:00.148265 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:46:00 crc kubenswrapper[4632]: I0313 10:46:00.148265 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:46:00 crc kubenswrapper[4632]: I0313 10:46:00.164437 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556646-blkxp"] Mar 13 10:46:00 crc kubenswrapper[4632]: I0313 10:46:00.273900 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnzgm\" (UniqueName: \"kubernetes.io/projected/93948d53-dbf3-47ce-8af0-bee10cc7e246-kube-api-access-fnzgm\") pod \"auto-csr-approver-29556646-blkxp\" (UID: \"93948d53-dbf3-47ce-8af0-bee10cc7e246\") " pod="openshift-infra/auto-csr-approver-29556646-blkxp" Mar 13 10:46:00 crc kubenswrapper[4632]: I0313 10:46:00.376224 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnzgm\" (UniqueName: \"kubernetes.io/projected/93948d53-dbf3-47ce-8af0-bee10cc7e246-kube-api-access-fnzgm\") pod \"auto-csr-approver-29556646-blkxp\" (UID: \"93948d53-dbf3-47ce-8af0-bee10cc7e246\") " pod="openshift-infra/auto-csr-approver-29556646-blkxp" Mar 13 10:46:00 crc kubenswrapper[4632]: I0313 10:46:00.400791 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnzgm\" (UniqueName: \"kubernetes.io/projected/93948d53-dbf3-47ce-8af0-bee10cc7e246-kube-api-access-fnzgm\") pod \"auto-csr-approver-29556646-blkxp\" (UID: \"93948d53-dbf3-47ce-8af0-bee10cc7e246\") " pod="openshift-infra/auto-csr-approver-29556646-blkxp" Mar 13 10:46:00 crc kubenswrapper[4632]: I0313 10:46:00.471216 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556646-blkxp" Mar 13 10:46:01 crc kubenswrapper[4632]: I0313 10:46:01.002410 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556646-blkxp"] Mar 13 10:46:01 crc kubenswrapper[4632]: I0313 10:46:01.020300 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 10:46:01 crc kubenswrapper[4632]: I0313 10:46:01.285109 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556646-blkxp" event={"ID":"93948d53-dbf3-47ce-8af0-bee10cc7e246","Type":"ContainerStarted","Data":"1e499c382014b0aca0d9f283dcf3e408db5620f77e2f5a1e95fbd235bc60e907"} Mar 13 10:46:02 crc kubenswrapper[4632]: I0313 10:46:02.045181 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:46:02 crc kubenswrapper[4632]: E0313 10:46:02.045679 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:46:03 crc kubenswrapper[4632]: I0313 10:46:03.316718 4632 generic.go:334] "Generic (PLEG): container finished" podID="93948d53-dbf3-47ce-8af0-bee10cc7e246" containerID="92f6939c452dda4592aa326adcecce982f4fafb95f93ce909a101db10372c2ab" exitCode=0 Mar 13 10:46:03 crc kubenswrapper[4632]: I0313 10:46:03.317052 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556646-blkxp" event={"ID":"93948d53-dbf3-47ce-8af0-bee10cc7e246","Type":"ContainerDied","Data":"92f6939c452dda4592aa326adcecce982f4fafb95f93ce909a101db10372c2ab"} Mar 13 10:46:04 crc kubenswrapper[4632]: I0313 10:46:04.685559 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556646-blkxp" Mar 13 10:46:04 crc kubenswrapper[4632]: I0313 10:46:04.785404 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnzgm\" (UniqueName: \"kubernetes.io/projected/93948d53-dbf3-47ce-8af0-bee10cc7e246-kube-api-access-fnzgm\") pod \"93948d53-dbf3-47ce-8af0-bee10cc7e246\" (UID: \"93948d53-dbf3-47ce-8af0-bee10cc7e246\") " Mar 13 10:46:04 crc kubenswrapper[4632]: I0313 10:46:04.793271 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93948d53-dbf3-47ce-8af0-bee10cc7e246-kube-api-access-fnzgm" (OuterVolumeSpecName: "kube-api-access-fnzgm") pod "93948d53-dbf3-47ce-8af0-bee10cc7e246" (UID: "93948d53-dbf3-47ce-8af0-bee10cc7e246"). InnerVolumeSpecName "kube-api-access-fnzgm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:46:04 crc kubenswrapper[4632]: I0313 10:46:04.887908 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnzgm\" (UniqueName: \"kubernetes.io/projected/93948d53-dbf3-47ce-8af0-bee10cc7e246-kube-api-access-fnzgm\") on node \"crc\" DevicePath \"\"" Mar 13 10:46:05 crc kubenswrapper[4632]: I0313 10:46:05.334078 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556646-blkxp" event={"ID":"93948d53-dbf3-47ce-8af0-bee10cc7e246","Type":"ContainerDied","Data":"1e499c382014b0aca0d9f283dcf3e408db5620f77e2f5a1e95fbd235bc60e907"} Mar 13 10:46:05 crc kubenswrapper[4632]: I0313 10:46:05.334119 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e499c382014b0aca0d9f283dcf3e408db5620f77e2f5a1e95fbd235bc60e907" Mar 13 10:46:05 crc kubenswrapper[4632]: I0313 10:46:05.334150 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556646-blkxp" Mar 13 10:46:05 crc kubenswrapper[4632]: I0313 10:46:05.758874 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556640-zlhsq"] Mar 13 10:46:05 crc kubenswrapper[4632]: I0313 10:46:05.768504 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556640-zlhsq"] Mar 13 10:46:06 crc kubenswrapper[4632]: I0313 10:46:06.056334 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bcf0de2-27ca-4278-80a3-080ce237e6df" path="/var/lib/kubelet/pods/4bcf0de2-27ca-4278-80a3-080ce237e6df/volumes" Mar 13 10:46:09 crc kubenswrapper[4632]: I0313 10:46:09.673595 4632 scope.go:117] "RemoveContainer" containerID="84030d1b6c9dd12b070ed748955e52fe36ed2cac9f9bdddb744ca14dc6fbfa0a" Mar 13 10:46:16 crc kubenswrapper[4632]: I0313 10:46:16.045700 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:46:16 crc kubenswrapper[4632]: E0313 10:46:16.046671 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:46:31 crc kubenswrapper[4632]: I0313 10:46:31.044824 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:46:31 crc kubenswrapper[4632]: E0313 10:46:31.046418 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:46:42 crc kubenswrapper[4632]: I0313 10:46:42.044337 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:46:42 crc kubenswrapper[4632]: E0313 10:46:42.045322 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:46:55 crc kubenswrapper[4632]: I0313 10:46:55.340498 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8rs94"] Mar 13 10:46:55 crc kubenswrapper[4632]: E0313 10:46:55.342846 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93948d53-dbf3-47ce-8af0-bee10cc7e246" containerName="oc" Mar 13 10:46:55 crc kubenswrapper[4632]: I0313 10:46:55.342971 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="93948d53-dbf3-47ce-8af0-bee10cc7e246" containerName="oc" Mar 13 10:46:55 crc kubenswrapper[4632]: I0313 10:46:55.343276 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="93948d53-dbf3-47ce-8af0-bee10cc7e246" containerName="oc" Mar 13 10:46:55 crc kubenswrapper[4632]: I0313 10:46:55.344611 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:46:55 crc kubenswrapper[4632]: I0313 10:46:55.404451 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8rs94"] Mar 13 10:46:55 crc kubenswrapper[4632]: I0313 10:46:55.430310 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-utilities\") pod \"certified-operators-8rs94\" (UID: \"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58\") " pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:46:55 crc kubenswrapper[4632]: I0313 10:46:55.430424 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhljf\" (UniqueName: \"kubernetes.io/projected/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-kube-api-access-vhljf\") pod \"certified-operators-8rs94\" (UID: \"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58\") " pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:46:55 crc kubenswrapper[4632]: I0313 10:46:55.430544 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-catalog-content\") pod \"certified-operators-8rs94\" (UID: \"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58\") " pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:46:55 crc kubenswrapper[4632]: I0313 10:46:55.532651 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-utilities\") pod \"certified-operators-8rs94\" (UID: \"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58\") " pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:46:55 crc kubenswrapper[4632]: I0313 10:46:55.532752 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhljf\" (UniqueName: \"kubernetes.io/projected/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-kube-api-access-vhljf\") pod \"certified-operators-8rs94\" (UID: \"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58\") " pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:46:55 crc kubenswrapper[4632]: I0313 10:46:55.532823 4632 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-catalog-content\") pod \"certified-operators-8rs94\" (UID: \"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58\") " pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:46:55 crc kubenswrapper[4632]: I0313 10:46:55.533434 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-utilities\") pod \"certified-operators-8rs94\" (UID: \"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58\") " pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:46:55 crc kubenswrapper[4632]: I0313 10:46:55.533363 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-catalog-content\") pod \"certified-operators-8rs94\" (UID: \"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58\") " pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:46:55 crc kubenswrapper[4632]: I0313 10:46:55.561192 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhljf\" (UniqueName: \"kubernetes.io/projected/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-kube-api-access-vhljf\") pod \"certified-operators-8rs94\" (UID: \"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58\") " pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:46:55 crc kubenswrapper[4632]: I0313 10:46:55.705747 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:46:56 crc kubenswrapper[4632]: I0313 10:46:56.047138 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:46:56 crc kubenswrapper[4632]: E0313 10:46:56.047351 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:46:56 crc kubenswrapper[4632]: I0313 10:46:56.298909 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8rs94"] Mar 13 10:46:56 crc kubenswrapper[4632]: I0313 10:46:56.962975 4632 generic.go:334] "Generic (PLEG): container finished" podID="4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58" containerID="9a68434b5791583fc729ae45ace124d9bffa3419f6e081364198559e5e722d2d" exitCode=0 Mar 13 10:46:56 crc kubenswrapper[4632]: I0313 10:46:56.963163 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rs94" event={"ID":"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58","Type":"ContainerDied","Data":"9a68434b5791583fc729ae45ace124d9bffa3419f6e081364198559e5e722d2d"} Mar 13 10:46:56 crc kubenswrapper[4632]: I0313 10:46:56.963336 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rs94" event={"ID":"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58","Type":"ContainerStarted","Data":"2fcd4f33b9170a9e1f17871a87902d5c3a4bca4f5bff191585eaa7b78efa9537"} Mar 13 10:46:57 crc kubenswrapper[4632]: I0313 10:46:57.974734 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-8rs94" event={"ID":"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58","Type":"ContainerStarted","Data":"073ef57962037ed0b42aebc1b9b9c42dc3c21fd678c44de3f2fc1650fff2b40f"} Mar 13 10:46:59 crc kubenswrapper[4632]: I0313 10:46:59.993401 4632 generic.go:334] "Generic (PLEG): container finished" podID="4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58" containerID="073ef57962037ed0b42aebc1b9b9c42dc3c21fd678c44de3f2fc1650fff2b40f" exitCode=0 Mar 13 10:46:59 crc kubenswrapper[4632]: I0313 10:46:59.993458 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rs94" event={"ID":"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58","Type":"ContainerDied","Data":"073ef57962037ed0b42aebc1b9b9c42dc3c21fd678c44de3f2fc1650fff2b40f"} Mar 13 10:47:01 crc kubenswrapper[4632]: I0313 10:47:01.005311 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rs94" event={"ID":"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58","Type":"ContainerStarted","Data":"debb15938677e3a625a25bb92735b8bbd7318e3df789b75b917f0f39fb974e78"} Mar 13 10:47:01 crc kubenswrapper[4632]: I0313 10:47:01.033069 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8rs94" podStartSLOduration=2.5485645 podStartE2EDuration="6.033051869s" podCreationTimestamp="2026-03-13 10:46:55 +0000 UTC" firstStartedPulling="2026-03-13 10:46:56.965795831 +0000 UTC m=+2590.988325964" lastFinishedPulling="2026-03-13 10:47:00.45028318 +0000 UTC m=+2594.472813333" observedRunningTime="2026-03-13 10:47:01.031188604 +0000 UTC m=+2595.053718737" watchObservedRunningTime="2026-03-13 10:47:01.033051869 +0000 UTC m=+2595.055582002" Mar 13 10:47:05 crc kubenswrapper[4632]: I0313 10:47:05.595852 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r7qtl"] Mar 13 10:47:05 crc kubenswrapper[4632]: I0313 10:47:05.600554 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:05 crc kubenswrapper[4632]: I0313 10:47:05.636484 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7qtl"] Mar 13 10:47:05 crc kubenswrapper[4632]: I0313 10:47:05.648673 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-utilities\") pod \"redhat-marketplace-r7qtl\" (UID: \"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511\") " pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:05 crc kubenswrapper[4632]: I0313 10:47:05.648821 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z5hv\" (UniqueName: \"kubernetes.io/projected/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-kube-api-access-5z5hv\") pod \"redhat-marketplace-r7qtl\" (UID: \"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511\") " pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:05 crc kubenswrapper[4632]: I0313 10:47:05.648908 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-catalog-content\") pod \"redhat-marketplace-r7qtl\" (UID: \"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511\") " pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:05 crc kubenswrapper[4632]: I0313 10:47:05.706816 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:47:05 crc kubenswrapper[4632]: I0313 10:47:05.707149 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:47:05 crc kubenswrapper[4632]: I0313 10:47:05.751569 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z5hv\" (UniqueName: \"kubernetes.io/projected/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-kube-api-access-5z5hv\") pod \"redhat-marketplace-r7qtl\" (UID: \"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511\") " pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:05 crc kubenswrapper[4632]: I0313 10:47:05.751895 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-catalog-content\") pod \"redhat-marketplace-r7qtl\" (UID: \"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511\") " pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:05 crc kubenswrapper[4632]: I0313 10:47:05.752157 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-utilities\") pod \"redhat-marketplace-r7qtl\" (UID: \"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511\") " pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:05 crc kubenswrapper[4632]: I0313 10:47:05.752510 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-catalog-content\") pod \"redhat-marketplace-r7qtl\" (UID: \"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511\") " pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:05 crc kubenswrapper[4632]: I0313 10:47:05.752855 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-utilities\") pod \"redhat-marketplace-r7qtl\" (UID: \"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511\") " pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:05 crc kubenswrapper[4632]: I0313 10:47:05.778574 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z5hv\" (UniqueName: \"kubernetes.io/projected/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-kube-api-access-5z5hv\") pod \"redhat-marketplace-r7qtl\" (UID: \"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511\") " pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:05 crc kubenswrapper[4632]: I0313 10:47:05.931040 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:06 crc kubenswrapper[4632]: I0313 10:47:06.539859 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7qtl"] Mar 13 10:47:06 crc kubenswrapper[4632]: I0313 10:47:06.765152 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8rs94" podUID="4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58" containerName="registry-server" probeResult="failure" output=< Mar 13 10:47:06 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:47:06 crc kubenswrapper[4632]: > Mar 13 10:47:07 crc kubenswrapper[4632]: I0313 10:47:07.081450 4632 generic.go:334] "Generic (PLEG): container finished" podID="e81415f8-ada1-4f0a-b1fd-92e2e5f5f511" containerID="b3d79b73fcd58ac20d741acf9d96929a0df0db65f84dbb23ba98c1eff210414e" exitCode=0 Mar 13 10:47:07 crc kubenswrapper[4632]: I0313 10:47:07.081503 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7qtl" event={"ID":"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511","Type":"ContainerDied","Data":"b3d79b73fcd58ac20d741acf9d96929a0df0db65f84dbb23ba98c1eff210414e"} Mar 13 10:47:07 crc kubenswrapper[4632]: I0313 10:47:07.081569 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7qtl" event={"ID":"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511","Type":"ContainerStarted","Data":"874f90ecd39b4972bd013b4da6787ff75bf2a57a461e69b7db2804dbda134bb6"} Mar 13 10:47:08 crc kubenswrapper[4632]: I0313 10:47:08.093292 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7qtl" event={"ID":"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511","Type":"ContainerStarted","Data":"6869bdf68c35077e3e515fc4d4fc9c80e2292fab5161433ed27b5622d65b9902"} Mar 13 10:47:10 crc kubenswrapper[4632]: I0313 10:47:10.044624 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:47:10 crc kubenswrapper[4632]: E0313 10:47:10.045200 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:47:10 crc kubenswrapper[4632]: I0313 10:47:10.112488 4632 generic.go:334] "Generic (PLEG): container finished" podID="e81415f8-ada1-4f0a-b1fd-92e2e5f5f511" 
containerID="6869bdf68c35077e3e515fc4d4fc9c80e2292fab5161433ed27b5622d65b9902" exitCode=0 Mar 13 10:47:10 crc kubenswrapper[4632]: I0313 10:47:10.112531 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7qtl" event={"ID":"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511","Type":"ContainerDied","Data":"6869bdf68c35077e3e515fc4d4fc9c80e2292fab5161433ed27b5622d65b9902"} Mar 13 10:47:11 crc kubenswrapper[4632]: I0313 10:47:11.124040 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7qtl" event={"ID":"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511","Type":"ContainerStarted","Data":"1182e2999759e9be37ef50a31fa5017ebfa813a7c00f6df460a1c1c6e3b78a8f"} Mar 13 10:47:15 crc kubenswrapper[4632]: I0313 10:47:15.763244 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:47:15 crc kubenswrapper[4632]: I0313 10:47:15.797959 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r7qtl" podStartSLOduration=7.310138202 podStartE2EDuration="10.797914363s" podCreationTimestamp="2026-03-13 10:47:05 +0000 UTC" firstStartedPulling="2026-03-13 10:47:07.084582902 +0000 UTC m=+2601.107113045" lastFinishedPulling="2026-03-13 10:47:10.572359073 +0000 UTC m=+2604.594889206" observedRunningTime="2026-03-13 10:47:11.150374795 +0000 UTC m=+2605.172904948" watchObservedRunningTime="2026-03-13 10:47:15.797914363 +0000 UTC m=+2609.820444506" Mar 13 10:47:15 crc kubenswrapper[4632]: I0313 10:47:15.815091 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:47:15 crc kubenswrapper[4632]: I0313 10:47:15.931178 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:15 crc kubenswrapper[4632]: I0313 10:47:15.932283 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:15 crc kubenswrapper[4632]: I0313 10:47:15.976628 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:16 crc kubenswrapper[4632]: I0313 10:47:16.014003 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8rs94"] Mar 13 10:47:16 crc kubenswrapper[4632]: I0313 10:47:16.213657 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:17 crc kubenswrapper[4632]: I0313 10:47:17.170724 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8rs94" podUID="4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58" containerName="registry-server" containerID="cri-o://debb15938677e3a625a25bb92735b8bbd7318e3df789b75b917f0f39fb974e78" gracePeriod=2 Mar 13 10:47:17 crc kubenswrapper[4632]: I0313 10:47:17.635040 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:47:17 crc kubenswrapper[4632]: I0313 10:47:17.694052 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-utilities\") pod \"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58\" (UID: \"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58\") " Mar 13 10:47:17 crc kubenswrapper[4632]: I0313 10:47:17.694294 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhljf\" (UniqueName: \"kubernetes.io/projected/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-kube-api-access-vhljf\") pod \"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58\" (UID: \"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58\") " Mar 13 10:47:17 crc kubenswrapper[4632]: I0313 10:47:17.694426 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-catalog-content\") pod \"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58\" (UID: \"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58\") " Mar 13 10:47:17 crc kubenswrapper[4632]: I0313 10:47:17.694912 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-utilities" (OuterVolumeSpecName: "utilities") pod "4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58" (UID: "4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:47:17 crc kubenswrapper[4632]: I0313 10:47:17.701238 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-kube-api-access-vhljf" (OuterVolumeSpecName: "kube-api-access-vhljf") pod "4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58" (UID: "4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58"). InnerVolumeSpecName "kube-api-access-vhljf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:47:17 crc kubenswrapper[4632]: I0313 10:47:17.749739 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58" (UID: "4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:47:17 crc kubenswrapper[4632]: I0313 10:47:17.796253 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:47:17 crc kubenswrapper[4632]: I0313 10:47:17.796292 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:47:17 crc kubenswrapper[4632]: I0313 10:47:17.796302 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhljf\" (UniqueName: \"kubernetes.io/projected/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58-kube-api-access-vhljf\") on node \"crc\" DevicePath \"\"" Mar 13 10:47:18 crc kubenswrapper[4632]: I0313 10:47:18.195229 4632 generic.go:334] "Generic (PLEG): container finished" podID="4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58" containerID="debb15938677e3a625a25bb92735b8bbd7318e3df789b75b917f0f39fb974e78" exitCode=0 Mar 13 10:47:18 crc kubenswrapper[4632]: I0313 10:47:18.197111 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rs94" Mar 13 10:47:18 crc kubenswrapper[4632]: I0313 10:47:18.197163 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rs94" event={"ID":"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58","Type":"ContainerDied","Data":"debb15938677e3a625a25bb92735b8bbd7318e3df789b75b917f0f39fb974e78"} Mar 13 10:47:18 crc kubenswrapper[4632]: I0313 10:47:18.197235 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rs94" event={"ID":"4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58","Type":"ContainerDied","Data":"2fcd4f33b9170a9e1f17871a87902d5c3a4bca4f5bff191585eaa7b78efa9537"} Mar 13 10:47:18 crc kubenswrapper[4632]: I0313 10:47:18.197262 4632 scope.go:117] "RemoveContainer" containerID="debb15938677e3a625a25bb92735b8bbd7318e3df789b75b917f0f39fb974e78" Mar 13 10:47:18 crc kubenswrapper[4632]: I0313 10:47:18.242718 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7qtl"] Mar 13 10:47:18 crc kubenswrapper[4632]: I0313 10:47:18.250572 4632 scope.go:117] "RemoveContainer" containerID="073ef57962037ed0b42aebc1b9b9c42dc3c21fd678c44de3f2fc1650fff2b40f" Mar 13 10:47:18 crc kubenswrapper[4632]: I0313 10:47:18.260085 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8rs94"] Mar 13 10:47:18 crc kubenswrapper[4632]: I0313 10:47:18.268765 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8rs94"] Mar 13 10:47:18 crc kubenswrapper[4632]: I0313 10:47:18.272071 4632 scope.go:117] "RemoveContainer" containerID="9a68434b5791583fc729ae45ace124d9bffa3419f6e081364198559e5e722d2d" Mar 13 10:47:18 crc kubenswrapper[4632]: I0313 10:47:18.316905 4632 scope.go:117] "RemoveContainer" containerID="debb15938677e3a625a25bb92735b8bbd7318e3df789b75b917f0f39fb974e78" Mar 13 10:47:18 crc kubenswrapper[4632]: E0313 10:47:18.317834 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"debb15938677e3a625a25bb92735b8bbd7318e3df789b75b917f0f39fb974e78\": container with ID starting with debb15938677e3a625a25bb92735b8bbd7318e3df789b75b917f0f39fb974e78 
not found: ID does not exist" containerID="debb15938677e3a625a25bb92735b8bbd7318e3df789b75b917f0f39fb974e78" Mar 13 10:47:18 crc kubenswrapper[4632]: I0313 10:47:18.318042 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"debb15938677e3a625a25bb92735b8bbd7318e3df789b75b917f0f39fb974e78"} err="failed to get container status \"debb15938677e3a625a25bb92735b8bbd7318e3df789b75b917f0f39fb974e78\": rpc error: code = NotFound desc = could not find container \"debb15938677e3a625a25bb92735b8bbd7318e3df789b75b917f0f39fb974e78\": container with ID starting with debb15938677e3a625a25bb92735b8bbd7318e3df789b75b917f0f39fb974e78 not found: ID does not exist" Mar 13 10:47:18 crc kubenswrapper[4632]: I0313 10:47:18.318156 4632 scope.go:117] "RemoveContainer" containerID="073ef57962037ed0b42aebc1b9b9c42dc3c21fd678c44de3f2fc1650fff2b40f" Mar 13 10:47:18 crc kubenswrapper[4632]: E0313 10:47:18.318639 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"073ef57962037ed0b42aebc1b9b9c42dc3c21fd678c44de3f2fc1650fff2b40f\": container with ID starting with 073ef57962037ed0b42aebc1b9b9c42dc3c21fd678c44de3f2fc1650fff2b40f not found: ID does not exist" containerID="073ef57962037ed0b42aebc1b9b9c42dc3c21fd678c44de3f2fc1650fff2b40f" Mar 13 10:47:18 crc kubenswrapper[4632]: I0313 10:47:18.318759 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"073ef57962037ed0b42aebc1b9b9c42dc3c21fd678c44de3f2fc1650fff2b40f"} err="failed to get container status \"073ef57962037ed0b42aebc1b9b9c42dc3c21fd678c44de3f2fc1650fff2b40f\": rpc error: code = NotFound desc = could not find container \"073ef57962037ed0b42aebc1b9b9c42dc3c21fd678c44de3f2fc1650fff2b40f\": container with ID starting with 073ef57962037ed0b42aebc1b9b9c42dc3c21fd678c44de3f2fc1650fff2b40f not found: ID does not exist" Mar 13 10:47:18 crc kubenswrapper[4632]: I0313 10:47:18.318912 4632 scope.go:117] "RemoveContainer" containerID="9a68434b5791583fc729ae45ace124d9bffa3419f6e081364198559e5e722d2d" Mar 13 10:47:18 crc kubenswrapper[4632]: E0313 10:47:18.319559 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a68434b5791583fc729ae45ace124d9bffa3419f6e081364198559e5e722d2d\": container with ID starting with 9a68434b5791583fc729ae45ace124d9bffa3419f6e081364198559e5e722d2d not found: ID does not exist" containerID="9a68434b5791583fc729ae45ace124d9bffa3419f6e081364198559e5e722d2d" Mar 13 10:47:18 crc kubenswrapper[4632]: I0313 10:47:18.319673 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a68434b5791583fc729ae45ace124d9bffa3419f6e081364198559e5e722d2d"} err="failed to get container status \"9a68434b5791583fc729ae45ace124d9bffa3419f6e081364198559e5e722d2d\": rpc error: code = NotFound desc = could not find container \"9a68434b5791583fc729ae45ace124d9bffa3419f6e081364198559e5e722d2d\": container with ID starting with 9a68434b5791583fc729ae45ace124d9bffa3419f6e081364198559e5e722d2d not found: ID does not exist" Mar 13 10:47:19 crc kubenswrapper[4632]: I0313 10:47:19.206224 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r7qtl" podUID="e81415f8-ada1-4f0a-b1fd-92e2e5f5f511" containerName="registry-server" containerID="cri-o://1182e2999759e9be37ef50a31fa5017ebfa813a7c00f6df460a1c1c6e3b78a8f" gracePeriod=2 Mar 13 
10:47:19 crc kubenswrapper[4632]: I0313 10:47:19.635932 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:19 crc kubenswrapper[4632]: I0313 10:47:19.782914 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-catalog-content\") pod \"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511\" (UID: \"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511\") " Mar 13 10:47:19 crc kubenswrapper[4632]: I0313 10:47:19.782995 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-utilities\") pod \"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511\" (UID: \"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511\") " Mar 13 10:47:19 crc kubenswrapper[4632]: I0313 10:47:19.783027 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z5hv\" (UniqueName: \"kubernetes.io/projected/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-kube-api-access-5z5hv\") pod \"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511\" (UID: \"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511\") " Mar 13 10:47:19 crc kubenswrapper[4632]: I0313 10:47:19.784234 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-utilities" (OuterVolumeSpecName: "utilities") pod "e81415f8-ada1-4f0a-b1fd-92e2e5f5f511" (UID: "e81415f8-ada1-4f0a-b1fd-92e2e5f5f511"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:47:19 crc kubenswrapper[4632]: I0313 10:47:19.789922 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-kube-api-access-5z5hv" (OuterVolumeSpecName: "kube-api-access-5z5hv") pod "e81415f8-ada1-4f0a-b1fd-92e2e5f5f511" (UID: "e81415f8-ada1-4f0a-b1fd-92e2e5f5f511"). InnerVolumeSpecName "kube-api-access-5z5hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:47:19 crc kubenswrapper[4632]: I0313 10:47:19.817342 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e81415f8-ada1-4f0a-b1fd-92e2e5f5f511" (UID: "e81415f8-ada1-4f0a-b1fd-92e2e5f5f511"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:47:19 crc kubenswrapper[4632]: I0313 10:47:19.884653 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:47:19 crc kubenswrapper[4632]: I0313 10:47:19.884686 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:47:19 crc kubenswrapper[4632]: I0313 10:47:19.884698 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z5hv\" (UniqueName: \"kubernetes.io/projected/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511-kube-api-access-5z5hv\") on node \"crc\" DevicePath \"\"" Mar 13 10:47:20 crc kubenswrapper[4632]: I0313 10:47:20.054616 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58" path="/var/lib/kubelet/pods/4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58/volumes" Mar 13 10:47:20 crc kubenswrapper[4632]: I0313 10:47:20.218237 4632 generic.go:334] "Generic (PLEG): container finished" podID="e81415f8-ada1-4f0a-b1fd-92e2e5f5f511" containerID="1182e2999759e9be37ef50a31fa5017ebfa813a7c00f6df460a1c1c6e3b78a8f" exitCode=0 Mar 13 10:47:20 crc kubenswrapper[4632]: I0313 10:47:20.218295 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7qtl" event={"ID":"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511","Type":"ContainerDied","Data":"1182e2999759e9be37ef50a31fa5017ebfa813a7c00f6df460a1c1c6e3b78a8f"} Mar 13 10:47:20 crc kubenswrapper[4632]: I0313 10:47:20.218327 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7qtl" event={"ID":"e81415f8-ada1-4f0a-b1fd-92e2e5f5f511","Type":"ContainerDied","Data":"874f90ecd39b4972bd013b4da6787ff75bf2a57a461e69b7db2804dbda134bb6"} Mar 13 10:47:20 crc kubenswrapper[4632]: I0313 10:47:20.218347 4632 scope.go:117] "RemoveContainer" containerID="1182e2999759e9be37ef50a31fa5017ebfa813a7c00f6df460a1c1c6e3b78a8f" Mar 13 10:47:20 crc kubenswrapper[4632]: I0313 10:47:20.218477 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7qtl" Mar 13 10:47:20 crc kubenswrapper[4632]: I0313 10:47:20.246344 4632 scope.go:117] "RemoveContainer" containerID="6869bdf68c35077e3e515fc4d4fc9c80e2292fab5161433ed27b5622d65b9902" Mar 13 10:47:20 crc kubenswrapper[4632]: I0313 10:47:20.250168 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7qtl"] Mar 13 10:47:20 crc kubenswrapper[4632]: I0313 10:47:20.266891 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7qtl"] Mar 13 10:47:20 crc kubenswrapper[4632]: I0313 10:47:20.267378 4632 scope.go:117] "RemoveContainer" containerID="b3d79b73fcd58ac20d741acf9d96929a0df0db65f84dbb23ba98c1eff210414e" Mar 13 10:47:20 crc kubenswrapper[4632]: I0313 10:47:20.309028 4632 scope.go:117] "RemoveContainer" containerID="1182e2999759e9be37ef50a31fa5017ebfa813a7c00f6df460a1c1c6e3b78a8f" Mar 13 10:47:20 crc kubenswrapper[4632]: E0313 10:47:20.310021 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1182e2999759e9be37ef50a31fa5017ebfa813a7c00f6df460a1c1c6e3b78a8f\": container with ID starting with 1182e2999759e9be37ef50a31fa5017ebfa813a7c00f6df460a1c1c6e3b78a8f not found: ID does not exist" containerID="1182e2999759e9be37ef50a31fa5017ebfa813a7c00f6df460a1c1c6e3b78a8f" Mar 13 10:47:20 crc kubenswrapper[4632]: I0313 10:47:20.310074 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1182e2999759e9be37ef50a31fa5017ebfa813a7c00f6df460a1c1c6e3b78a8f"} err="failed to get container status \"1182e2999759e9be37ef50a31fa5017ebfa813a7c00f6df460a1c1c6e3b78a8f\": rpc error: code = NotFound desc = could not find container \"1182e2999759e9be37ef50a31fa5017ebfa813a7c00f6df460a1c1c6e3b78a8f\": container with ID starting with 1182e2999759e9be37ef50a31fa5017ebfa813a7c00f6df460a1c1c6e3b78a8f not found: ID does not exist" Mar 13 10:47:20 crc kubenswrapper[4632]: I0313 10:47:20.310115 4632 scope.go:117] "RemoveContainer" containerID="6869bdf68c35077e3e515fc4d4fc9c80e2292fab5161433ed27b5622d65b9902" Mar 13 10:47:20 crc kubenswrapper[4632]: E0313 10:47:20.310650 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6869bdf68c35077e3e515fc4d4fc9c80e2292fab5161433ed27b5622d65b9902\": container with ID starting with 6869bdf68c35077e3e515fc4d4fc9c80e2292fab5161433ed27b5622d65b9902 not found: ID does not exist" containerID="6869bdf68c35077e3e515fc4d4fc9c80e2292fab5161433ed27b5622d65b9902" Mar 13 10:47:20 crc kubenswrapper[4632]: I0313 10:47:20.310750 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6869bdf68c35077e3e515fc4d4fc9c80e2292fab5161433ed27b5622d65b9902"} err="failed to get container status \"6869bdf68c35077e3e515fc4d4fc9c80e2292fab5161433ed27b5622d65b9902\": rpc error: code = NotFound desc = could not find container \"6869bdf68c35077e3e515fc4d4fc9c80e2292fab5161433ed27b5622d65b9902\": container with ID starting with 6869bdf68c35077e3e515fc4d4fc9c80e2292fab5161433ed27b5622d65b9902 not found: ID does not exist" Mar 13 10:47:20 crc kubenswrapper[4632]: I0313 10:47:20.310780 4632 scope.go:117] "RemoveContainer" containerID="b3d79b73fcd58ac20d741acf9d96929a0df0db65f84dbb23ba98c1eff210414e" Mar 13 10:47:20 crc kubenswrapper[4632]: E0313 10:47:20.311142 4632 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b3d79b73fcd58ac20d741acf9d96929a0df0db65f84dbb23ba98c1eff210414e\": container with ID starting with b3d79b73fcd58ac20d741acf9d96929a0df0db65f84dbb23ba98c1eff210414e not found: ID does not exist" containerID="b3d79b73fcd58ac20d741acf9d96929a0df0db65f84dbb23ba98c1eff210414e" Mar 13 10:47:20 crc kubenswrapper[4632]: I0313 10:47:20.311401 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3d79b73fcd58ac20d741acf9d96929a0df0db65f84dbb23ba98c1eff210414e"} err="failed to get container status \"b3d79b73fcd58ac20d741acf9d96929a0df0db65f84dbb23ba98c1eff210414e\": rpc error: code = NotFound desc = could not find container \"b3d79b73fcd58ac20d741acf9d96929a0df0db65f84dbb23ba98c1eff210414e\": container with ID starting with b3d79b73fcd58ac20d741acf9d96929a0df0db65f84dbb23ba98c1eff210414e not found: ID does not exist" Mar 13 10:47:22 crc kubenswrapper[4632]: I0313 10:47:22.055978 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e81415f8-ada1-4f0a-b1fd-92e2e5f5f511" path="/var/lib/kubelet/pods/e81415f8-ada1-4f0a-b1fd-92e2e5f5f511/volumes" Mar 13 10:47:23 crc kubenswrapper[4632]: I0313 10:47:23.045759 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:47:23 crc kubenswrapper[4632]: E0313 10:47:23.045991 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:47:37 crc kubenswrapper[4632]: I0313 10:47:37.044305 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:47:37 crc kubenswrapper[4632]: E0313 10:47:37.045158 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:47:50 crc kubenswrapper[4632]: I0313 10:47:50.044179 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:47:50 crc kubenswrapper[4632]: I0313 10:47:50.493751 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"a5ebd5748d892637db30e6f25b4cdb7397d5f5e2a1d221a622054fbf7f8b83f2"} Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.165578 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556648-hbbkx"] Mar 13 10:48:00 crc kubenswrapper[4632]: E0313 10:48:00.166787 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e81415f8-ada1-4f0a-b1fd-92e2e5f5f511" containerName="extract-content" Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.166801 4632 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e81415f8-ada1-4f0a-b1fd-92e2e5f5f511" containerName="extract-content" Mar 13 10:48:00 crc kubenswrapper[4632]: E0313 10:48:00.166818 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e81415f8-ada1-4f0a-b1fd-92e2e5f5f511" containerName="extract-utilities" Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.166825 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="e81415f8-ada1-4f0a-b1fd-92e2e5f5f511" containerName="extract-utilities" Mar 13 10:48:00 crc kubenswrapper[4632]: E0313 10:48:00.166842 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58" containerName="extract-utilities" Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.166851 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58" containerName="extract-utilities" Mar 13 10:48:00 crc kubenswrapper[4632]: E0313 10:48:00.166872 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e81415f8-ada1-4f0a-b1fd-92e2e5f5f511" containerName="registry-server" Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.166880 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="e81415f8-ada1-4f0a-b1fd-92e2e5f5f511" containerName="registry-server" Mar 13 10:48:00 crc kubenswrapper[4632]: E0313 10:48:00.166889 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58" containerName="registry-server" Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.166896 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58" containerName="registry-server" Mar 13 10:48:00 crc kubenswrapper[4632]: E0313 10:48:00.166921 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58" containerName="extract-content" Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.166928 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58" containerName="extract-content" Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.167126 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="e81415f8-ada1-4f0a-b1fd-92e2e5f5f511" containerName="registry-server" Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.167142 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a4a7db9-2c2b-42bc-a6fb-c33922f3ff58" containerName="registry-server" Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.167798 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556648-hbbkx" Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.172520 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.172863 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.175126 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.192402 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556648-hbbkx"] Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.289493 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhpjp\" (UniqueName: \"kubernetes.io/projected/5ab47075-381b-45d4-b6c8-c64ae6433ef1-kube-api-access-xhpjp\") pod \"auto-csr-approver-29556648-hbbkx\" (UID: \"5ab47075-381b-45d4-b6c8-c64ae6433ef1\") " pod="openshift-infra/auto-csr-approver-29556648-hbbkx" Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.392000 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhpjp\" (UniqueName: \"kubernetes.io/projected/5ab47075-381b-45d4-b6c8-c64ae6433ef1-kube-api-access-xhpjp\") pod \"auto-csr-approver-29556648-hbbkx\" (UID: \"5ab47075-381b-45d4-b6c8-c64ae6433ef1\") " pod="openshift-infra/auto-csr-approver-29556648-hbbkx" Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.411588 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhpjp\" (UniqueName: \"kubernetes.io/projected/5ab47075-381b-45d4-b6c8-c64ae6433ef1-kube-api-access-xhpjp\") pod \"auto-csr-approver-29556648-hbbkx\" (UID: \"5ab47075-381b-45d4-b6c8-c64ae6433ef1\") " pod="openshift-infra/auto-csr-approver-29556648-hbbkx" Mar 13 10:48:00 crc kubenswrapper[4632]: I0313 10:48:00.507144 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556648-hbbkx" Mar 13 10:48:01 crc kubenswrapper[4632]: I0313 10:48:01.085820 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556648-hbbkx"] Mar 13 10:48:01 crc kubenswrapper[4632]: I0313 10:48:01.611072 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556648-hbbkx" event={"ID":"5ab47075-381b-45d4-b6c8-c64ae6433ef1","Type":"ContainerStarted","Data":"da07aff5e6aa9184a5f24000561885b285e59acbd073ce5b368af221cda24a12"} Mar 13 10:48:02 crc kubenswrapper[4632]: I0313 10:48:02.622871 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556648-hbbkx" event={"ID":"5ab47075-381b-45d4-b6c8-c64ae6433ef1","Type":"ContainerStarted","Data":"ac1c75bd040311821d7426607144ebc256c3f11219f7a26012d50c7ce3c315ba"} Mar 13 10:48:02 crc kubenswrapper[4632]: I0313 10:48:02.640236 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556648-hbbkx" podStartSLOduration=1.9554097129999999 podStartE2EDuration="2.640216866s" podCreationTimestamp="2026-03-13 10:48:00 +0000 UTC" firstStartedPulling="2026-03-13 10:48:01.08372868 +0000 UTC m=+2655.106258813" lastFinishedPulling="2026-03-13 10:48:01.768535833 +0000 UTC m=+2655.791065966" observedRunningTime="2026-03-13 10:48:02.63755206 +0000 UTC m=+2656.660082213" watchObservedRunningTime="2026-03-13 10:48:02.640216866 +0000 UTC m=+2656.662746999" Mar 13 10:48:03 crc kubenswrapper[4632]: I0313 10:48:03.652288 4632 generic.go:334] "Generic (PLEG): container finished" podID="5ab47075-381b-45d4-b6c8-c64ae6433ef1" containerID="ac1c75bd040311821d7426607144ebc256c3f11219f7a26012d50c7ce3c315ba" exitCode=0 Mar 13 10:48:03 crc kubenswrapper[4632]: I0313 10:48:03.652965 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556648-hbbkx" event={"ID":"5ab47075-381b-45d4-b6c8-c64ae6433ef1","Type":"ContainerDied","Data":"ac1c75bd040311821d7426607144ebc256c3f11219f7a26012d50c7ce3c315ba"} Mar 13 10:48:05 crc kubenswrapper[4632]: I0313 10:48:05.024447 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556648-hbbkx" Mar 13 10:48:05 crc kubenswrapper[4632]: I0313 10:48:05.097381 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhpjp\" (UniqueName: \"kubernetes.io/projected/5ab47075-381b-45d4-b6c8-c64ae6433ef1-kube-api-access-xhpjp\") pod \"5ab47075-381b-45d4-b6c8-c64ae6433ef1\" (UID: \"5ab47075-381b-45d4-b6c8-c64ae6433ef1\") " Mar 13 10:48:05 crc kubenswrapper[4632]: I0313 10:48:05.108324 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ab47075-381b-45d4-b6c8-c64ae6433ef1-kube-api-access-xhpjp" (OuterVolumeSpecName: "kube-api-access-xhpjp") pod "5ab47075-381b-45d4-b6c8-c64ae6433ef1" (UID: "5ab47075-381b-45d4-b6c8-c64ae6433ef1"). InnerVolumeSpecName "kube-api-access-xhpjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:48:05 crc kubenswrapper[4632]: I0313 10:48:05.200433 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhpjp\" (UniqueName: \"kubernetes.io/projected/5ab47075-381b-45d4-b6c8-c64ae6433ef1-kube-api-access-xhpjp\") on node \"crc\" DevicePath \"\"" Mar 13 10:48:05 crc kubenswrapper[4632]: I0313 10:48:05.685176 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556648-hbbkx" event={"ID":"5ab47075-381b-45d4-b6c8-c64ae6433ef1","Type":"ContainerDied","Data":"da07aff5e6aa9184a5f24000561885b285e59acbd073ce5b368af221cda24a12"} Mar 13 10:48:05 crc kubenswrapper[4632]: I0313 10:48:05.685225 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da07aff5e6aa9184a5f24000561885b285e59acbd073ce5b368af221cda24a12" Mar 13 10:48:05 crc kubenswrapper[4632]: I0313 10:48:05.685522 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556648-hbbkx" Mar 13 10:48:05 crc kubenswrapper[4632]: I0313 10:48:05.737170 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556642-f6lzb"] Mar 13 10:48:05 crc kubenswrapper[4632]: I0313 10:48:05.744811 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556642-f6lzb"] Mar 13 10:48:06 crc kubenswrapper[4632]: I0313 10:48:06.058147 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6df4a28-3b7b-4904-aa41-62caa26889a8" path="/var/lib/kubelet/pods/a6df4a28-3b7b-4904-aa41-62caa26889a8/volumes" Mar 13 10:48:09 crc kubenswrapper[4632]: I0313 10:48:09.788865 4632 scope.go:117] "RemoveContainer" containerID="2360a4309504ac747bcde26fcceae28cb04d811f34c5f4e463b65c45b06c70f5" Mar 13 10:48:23 crc kubenswrapper[4632]: I0313 10:48:23.858077 4632 generic.go:334] "Generic (PLEG): container finished" podID="ed1a2c50-a476-43ca-9764-e0ebffb14134" containerID="e94f04b40e30694707c6fd5089936e853269c95465d0464597019d512ad17ad4" exitCode=0 Mar 13 10:48:23 crc kubenswrapper[4632]: I0313 10:48:23.858164 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" event={"ID":"ed1a2c50-a476-43ca-9764-e0ebffb14134","Type":"ContainerDied","Data":"e94f04b40e30694707c6fd5089936e853269c95465d0464597019d512ad17ad4"} Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.267246 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.339221 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh7rq\" (UniqueName: \"kubernetes.io/projected/ed1a2c50-a476-43ca-9764-e0ebffb14134-kube-api-access-xh7rq\") pod \"ed1a2c50-a476-43ca-9764-e0ebffb14134\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.339303 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-ssh-key-openstack-edpm-ipam\") pod \"ed1a2c50-a476-43ca-9764-e0ebffb14134\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.339426 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-inventory\") pod \"ed1a2c50-a476-43ca-9764-e0ebffb14134\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.339474 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-libvirt-combined-ca-bundle\") pod \"ed1a2c50-a476-43ca-9764-e0ebffb14134\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.339503 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-libvirt-secret-0\") pod \"ed1a2c50-a476-43ca-9764-e0ebffb14134\" (UID: \"ed1a2c50-a476-43ca-9764-e0ebffb14134\") " Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.349061 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ed1a2c50-a476-43ca-9764-e0ebffb14134" (UID: "ed1a2c50-a476-43ca-9764-e0ebffb14134"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.359745 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed1a2c50-a476-43ca-9764-e0ebffb14134-kube-api-access-xh7rq" (OuterVolumeSpecName: "kube-api-access-xh7rq") pod "ed1a2c50-a476-43ca-9764-e0ebffb14134" (UID: "ed1a2c50-a476-43ca-9764-e0ebffb14134"). InnerVolumeSpecName "kube-api-access-xh7rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.376378 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-inventory" (OuterVolumeSpecName: "inventory") pod "ed1a2c50-a476-43ca-9764-e0ebffb14134" (UID: "ed1a2c50-a476-43ca-9764-e0ebffb14134"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.377134 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ed1a2c50-a476-43ca-9764-e0ebffb14134" (UID: "ed1a2c50-a476-43ca-9764-e0ebffb14134"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.381431 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "ed1a2c50-a476-43ca-9764-e0ebffb14134" (UID: "ed1a2c50-a476-43ca-9764-e0ebffb14134"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.441816 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh7rq\" (UniqueName: \"kubernetes.io/projected/ed1a2c50-a476-43ca-9764-e0ebffb14134-kube-api-access-xh7rq\") on node \"crc\" DevicePath \"\"" Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.442214 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.442231 4632 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.442244 4632 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.442257 4632 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ed1a2c50-a476-43ca-9764-e0ebffb14134-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.877421 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" event={"ID":"ed1a2c50-a476-43ca-9764-e0ebffb14134","Type":"ContainerDied","Data":"251492aaae45a40b4ef377f82a35e7e430a26d0b96081ac467780ad353a89a5b"} Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.877484 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="251492aaae45a40b4ef377f82a35e7e430a26d0b96081ac467780ad353a89a5b" Mar 13 10:48:25 crc kubenswrapper[4632]: I0313 10:48:25.877530 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-skjrh" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.008324 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq"] Mar 13 10:48:26 crc kubenswrapper[4632]: E0313 10:48:26.008798 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab47075-381b-45d4-b6c8-c64ae6433ef1" containerName="oc" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.008822 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab47075-381b-45d4-b6c8-c64ae6433ef1" containerName="oc" Mar 13 10:48:26 crc kubenswrapper[4632]: E0313 10:48:26.008859 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed1a2c50-a476-43ca-9764-e0ebffb14134" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.008870 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed1a2c50-a476-43ca-9764-e0ebffb14134" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.009161 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed1a2c50-a476-43ca-9764-e0ebffb14134" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.009189 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ab47075-381b-45d4-b6c8-c64ae6433ef1" containerName="oc" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.009829 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.024981 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.025422 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qrzsx" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.025471 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.025521 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.025592 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.025521 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.025666 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.041404 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq"] Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.173343 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 
13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.173681 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.173722 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwrhx\" (UniqueName: \"kubernetes.io/projected/c897af06-c467-4ec3-aa76-c29a3ea3a462-kube-api-access-wwrhx\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.173792 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.173822 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.173862 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.173926 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.173987 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.174061 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-2\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.174109 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.174149 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.275481 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.275565 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.275615 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.275681 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.275708 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.275740 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwrhx\" (UniqueName: 
\"kubernetes.io/projected/c897af06-c467-4ec3-aa76-c29a3ea3a462-kube-api-access-wwrhx\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.275794 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.275823 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.275849 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.275897 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.275932 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.281026 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.281358 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.281463 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.281679 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.282662 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.282839 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.283095 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.284773 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.285655 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.286037 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.297265 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwrhx\" (UniqueName: \"kubernetes.io/projected/c897af06-c467-4ec3-aa76-c29a3ea3a462-kube-api-access-wwrhx\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-dl4cq\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.334688 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.865837 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq"] Mar 13 10:48:26 crc kubenswrapper[4632]: I0313 10:48:26.893694 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" event={"ID":"c897af06-c467-4ec3-aa76-c29a3ea3a462","Type":"ContainerStarted","Data":"d22525a75cd59bffbef3e23ac6c6e8d40f86fed8103fc04e25e403aefa74021b"} Mar 13 10:48:27 crc kubenswrapper[4632]: I0313 10:48:27.915392 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" event={"ID":"c897af06-c467-4ec3-aa76-c29a3ea3a462","Type":"ContainerStarted","Data":"3b4b530ae859a620ce6a4cb1762eca660904ffcd11138b81e9af76e94ecf0906"} Mar 13 10:48:27 crc kubenswrapper[4632]: I0313 10:48:27.942414 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" podStartSLOduration=2.538490863 podStartE2EDuration="2.942397696s" podCreationTimestamp="2026-03-13 10:48:25 +0000 UTC" firstStartedPulling="2026-03-13 10:48:26.882107038 +0000 UTC m=+2680.904637171" lastFinishedPulling="2026-03-13 10:48:27.286013871 +0000 UTC m=+2681.308544004" observedRunningTime="2026-03-13 10:48:27.934751119 +0000 UTC m=+2681.957281252" watchObservedRunningTime="2026-03-13 10:48:27.942397696 +0000 UTC m=+2681.964927829" Mar 13 10:50:00 crc kubenswrapper[4632]: I0313 10:50:00.153394 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556650-rskjc"] Mar 13 10:50:00 crc kubenswrapper[4632]: I0313 10:50:00.158252 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556650-rskjc" Mar 13 10:50:00 crc kubenswrapper[4632]: I0313 10:50:00.168726 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txjrr\" (UniqueName: \"kubernetes.io/projected/cc8dd4ae-e21e-4155-b617-19c85512d4fe-kube-api-access-txjrr\") pod \"auto-csr-approver-29556650-rskjc\" (UID: \"cc8dd4ae-e21e-4155-b617-19c85512d4fe\") " pod="openshift-infra/auto-csr-approver-29556650-rskjc" Mar 13 10:50:00 crc kubenswrapper[4632]: I0313 10:50:00.169400 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556650-rskjc"] Mar 13 10:50:00 crc kubenswrapper[4632]: I0313 10:50:00.171058 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:50:00 crc kubenswrapper[4632]: I0313 10:50:00.172033 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:50:00 crc kubenswrapper[4632]: I0313 10:50:00.172366 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:50:00 crc kubenswrapper[4632]: I0313 10:50:00.271003 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txjrr\" (UniqueName: \"kubernetes.io/projected/cc8dd4ae-e21e-4155-b617-19c85512d4fe-kube-api-access-txjrr\") pod \"auto-csr-approver-29556650-rskjc\" (UID: \"cc8dd4ae-e21e-4155-b617-19c85512d4fe\") " pod="openshift-infra/auto-csr-approver-29556650-rskjc" Mar 13 10:50:00 crc kubenswrapper[4632]: I0313 10:50:00.289458 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txjrr\" (UniqueName: \"kubernetes.io/projected/cc8dd4ae-e21e-4155-b617-19c85512d4fe-kube-api-access-txjrr\") pod \"auto-csr-approver-29556650-rskjc\" (UID: \"cc8dd4ae-e21e-4155-b617-19c85512d4fe\") " pod="openshift-infra/auto-csr-approver-29556650-rskjc" Mar 13 10:50:00 crc kubenswrapper[4632]: I0313 10:50:00.478983 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556650-rskjc" Mar 13 10:50:00 crc kubenswrapper[4632]: I0313 10:50:00.990254 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556650-rskjc"] Mar 13 10:50:01 crc kubenswrapper[4632]: I0313 10:50:01.776785 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556650-rskjc" event={"ID":"cc8dd4ae-e21e-4155-b617-19c85512d4fe","Type":"ContainerStarted","Data":"efa9d0470c5c6d7371d7df6219693ae11fcc4a5a3a75ca627a33c2dbd05b2574"} Mar 13 10:50:02 crc kubenswrapper[4632]: I0313 10:50:02.784749 4632 generic.go:334] "Generic (PLEG): container finished" podID="cc8dd4ae-e21e-4155-b617-19c85512d4fe" containerID="10fbcfced1ba7ba66a4ba615aa3b2aab72091e177631977720b60aac13ae9d0f" exitCode=0 Mar 13 10:50:02 crc kubenswrapper[4632]: I0313 10:50:02.784889 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556650-rskjc" event={"ID":"cc8dd4ae-e21e-4155-b617-19c85512d4fe","Type":"ContainerDied","Data":"10fbcfced1ba7ba66a4ba615aa3b2aab72091e177631977720b60aac13ae9d0f"} Mar 13 10:50:04 crc kubenswrapper[4632]: I0313 10:50:04.325155 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556650-rskjc" Mar 13 10:50:04 crc kubenswrapper[4632]: I0313 10:50:04.478868 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txjrr\" (UniqueName: \"kubernetes.io/projected/cc8dd4ae-e21e-4155-b617-19c85512d4fe-kube-api-access-txjrr\") pod \"cc8dd4ae-e21e-4155-b617-19c85512d4fe\" (UID: \"cc8dd4ae-e21e-4155-b617-19c85512d4fe\") " Mar 13 10:50:04 crc kubenswrapper[4632]: I0313 10:50:04.485499 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc8dd4ae-e21e-4155-b617-19c85512d4fe-kube-api-access-txjrr" (OuterVolumeSpecName: "kube-api-access-txjrr") pod "cc8dd4ae-e21e-4155-b617-19c85512d4fe" (UID: "cc8dd4ae-e21e-4155-b617-19c85512d4fe"). InnerVolumeSpecName "kube-api-access-txjrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:50:04 crc kubenswrapper[4632]: I0313 10:50:04.580814 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txjrr\" (UniqueName: \"kubernetes.io/projected/cc8dd4ae-e21e-4155-b617-19c85512d4fe-kube-api-access-txjrr\") on node \"crc\" DevicePath \"\"" Mar 13 10:50:04 crc kubenswrapper[4632]: I0313 10:50:04.807397 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556650-rskjc" event={"ID":"cc8dd4ae-e21e-4155-b617-19c85512d4fe","Type":"ContainerDied","Data":"efa9d0470c5c6d7371d7df6219693ae11fcc4a5a3a75ca627a33c2dbd05b2574"} Mar 13 10:50:04 crc kubenswrapper[4632]: I0313 10:50:04.807875 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efa9d0470c5c6d7371d7df6219693ae11fcc4a5a3a75ca627a33c2dbd05b2574" Mar 13 10:50:04 crc kubenswrapper[4632]: I0313 10:50:04.807677 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556650-rskjc" Mar 13 10:50:05 crc kubenswrapper[4632]: I0313 10:50:05.416752 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556644-dq2jd"] Mar 13 10:50:05 crc kubenswrapper[4632]: I0313 10:50:05.427494 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556644-dq2jd"] Mar 13 10:50:06 crc kubenswrapper[4632]: I0313 10:50:06.080775 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2462765e-6333-4e22-b4d7-ee2b2c6aa538" path="/var/lib/kubelet/pods/2462765e-6333-4e22-b4d7-ee2b2c6aa538/volumes" Mar 13 10:50:09 crc kubenswrapper[4632]: I0313 10:50:09.918425 4632 scope.go:117] "RemoveContainer" containerID="07b2fe4a97569c9089b7972685eb914fd04195d02c9e7b239121095e54e42352" Mar 13 10:50:10 crc kubenswrapper[4632]: I0313 10:50:10.461449 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:50:10 crc kubenswrapper[4632]: I0313 10:50:10.461525 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:50:40 crc kubenswrapper[4632]: I0313 10:50:40.460545 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:50:40 crc kubenswrapper[4632]: I0313 10:50:40.460990 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:50:57 crc kubenswrapper[4632]: I0313 10:50:57.555715 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-92wff"] Mar 13 10:50:57 crc kubenswrapper[4632]: E0313 10:50:57.558200 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8dd4ae-e21e-4155-b617-19c85512d4fe" containerName="oc" Mar 13 10:50:57 crc kubenswrapper[4632]: I0313 10:50:57.558227 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8dd4ae-e21e-4155-b617-19c85512d4fe" containerName="oc" Mar 13 10:50:57 crc kubenswrapper[4632]: I0313 10:50:57.558551 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc8dd4ae-e21e-4155-b617-19c85512d4fe" containerName="oc" Mar 13 10:50:57 crc kubenswrapper[4632]: I0313 10:50:57.560329 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:50:57 crc kubenswrapper[4632]: I0313 10:50:57.563430 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-92wff"] Mar 13 10:50:57 crc kubenswrapper[4632]: I0313 10:50:57.702105 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kgss\" (UniqueName: \"kubernetes.io/projected/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-kube-api-access-4kgss\") pod \"redhat-operators-92wff\" (UID: \"2c8fcea0-c62d-4557-87e8-e46dee66bc0f\") " pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:50:57 crc kubenswrapper[4632]: I0313 10:50:57.702436 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-catalog-content\") pod \"redhat-operators-92wff\" (UID: \"2c8fcea0-c62d-4557-87e8-e46dee66bc0f\") " pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:50:57 crc kubenswrapper[4632]: I0313 10:50:57.702579 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-utilities\") pod \"redhat-operators-92wff\" (UID: \"2c8fcea0-c62d-4557-87e8-e46dee66bc0f\") " pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:50:57 crc kubenswrapper[4632]: I0313 10:50:57.804602 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-catalog-content\") pod \"redhat-operators-92wff\" (UID: \"2c8fcea0-c62d-4557-87e8-e46dee66bc0f\") " pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:50:57 crc kubenswrapper[4632]: I0313 10:50:57.804653 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-utilities\") pod \"redhat-operators-92wff\" (UID: \"2c8fcea0-c62d-4557-87e8-e46dee66bc0f\") " pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:50:57 crc kubenswrapper[4632]: I0313 10:50:57.804810 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kgss\" (UniqueName: \"kubernetes.io/projected/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-kube-api-access-4kgss\") pod \"redhat-operators-92wff\" (UID: \"2c8fcea0-c62d-4557-87e8-e46dee66bc0f\") " pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:50:57 crc kubenswrapper[4632]: I0313 10:50:57.805185 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-catalog-content\") pod \"redhat-operators-92wff\" (UID: \"2c8fcea0-c62d-4557-87e8-e46dee66bc0f\") " pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:50:57 crc kubenswrapper[4632]: I0313 10:50:57.805414 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-utilities\") pod \"redhat-operators-92wff\" (UID: \"2c8fcea0-c62d-4557-87e8-e46dee66bc0f\") " pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:50:57 crc kubenswrapper[4632]: I0313 10:50:57.829638 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4kgss\" (UniqueName: \"kubernetes.io/projected/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-kube-api-access-4kgss\") pod \"redhat-operators-92wff\" (UID: \"2c8fcea0-c62d-4557-87e8-e46dee66bc0f\") " pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:50:57 crc kubenswrapper[4632]: I0313 10:50:57.891952 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:50:58 crc kubenswrapper[4632]: I0313 10:50:58.478790 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-92wff"] Mar 13 10:50:59 crc kubenswrapper[4632]: I0313 10:50:59.412827 4632 generic.go:334] "Generic (PLEG): container finished" podID="2c8fcea0-c62d-4557-87e8-e46dee66bc0f" containerID="b4b5f53f947256c5e10bbf4379bf34669d1d1bf56e886053918a7d4efb4d213d" exitCode=0 Mar 13 10:50:59 crc kubenswrapper[4632]: I0313 10:50:59.412999 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-92wff" event={"ID":"2c8fcea0-c62d-4557-87e8-e46dee66bc0f","Type":"ContainerDied","Data":"b4b5f53f947256c5e10bbf4379bf34669d1d1bf56e886053918a7d4efb4d213d"} Mar 13 10:50:59 crc kubenswrapper[4632]: I0313 10:50:59.413184 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-92wff" event={"ID":"2c8fcea0-c62d-4557-87e8-e46dee66bc0f","Type":"ContainerStarted","Data":"90a61bc58d69a6d619abaef868b90ed19460d1a8b36e5fcac632e0a2882d9502"} Mar 13 10:51:00 crc kubenswrapper[4632]: I0313 10:51:00.422861 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-92wff" event={"ID":"2c8fcea0-c62d-4557-87e8-e46dee66bc0f","Type":"ContainerStarted","Data":"bcb3e10a9cf45d3b9e94b97d132bc40b54e004f985fa2291ed2a2caea2737a0a"} Mar 13 10:51:04 crc kubenswrapper[4632]: I0313 10:51:04.469255 4632 generic.go:334] "Generic (PLEG): container finished" podID="c897af06-c467-4ec3-aa76-c29a3ea3a462" containerID="3b4b530ae859a620ce6a4cb1762eca660904ffcd11138b81e9af76e94ecf0906" exitCode=0 Mar 13 10:51:04 crc kubenswrapper[4632]: I0313 10:51:04.469336 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" event={"ID":"c897af06-c467-4ec3-aa76-c29a3ea3a462","Type":"ContainerDied","Data":"3b4b530ae859a620ce6a4cb1762eca660904ffcd11138b81e9af76e94ecf0906"} Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.097124 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.178480 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-3\") pod \"c897af06-c467-4ec3-aa76-c29a3ea3a462\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.178576 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-combined-ca-bundle\") pod \"c897af06-c467-4ec3-aa76-c29a3ea3a462\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.178640 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-2\") pod \"c897af06-c467-4ec3-aa76-c29a3ea3a462\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.178678 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-inventory\") pod \"c897af06-c467-4ec3-aa76-c29a3ea3a462\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.178759 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-1\") pod \"c897af06-c467-4ec3-aa76-c29a3ea3a462\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.178795 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-extra-config-0\") pod \"c897af06-c467-4ec3-aa76-c29a3ea3a462\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.178880 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-0\") pod \"c897af06-c467-4ec3-aa76-c29a3ea3a462\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.179019 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-ssh-key-openstack-edpm-ipam\") pod \"c897af06-c467-4ec3-aa76-c29a3ea3a462\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.179049 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwrhx\" (UniqueName: \"kubernetes.io/projected/c897af06-c467-4ec3-aa76-c29a3ea3a462-kube-api-access-wwrhx\") pod \"c897af06-c467-4ec3-aa76-c29a3ea3a462\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.179082 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-migration-ssh-key-1\") pod \"c897af06-c467-4ec3-aa76-c29a3ea3a462\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.179104 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-migration-ssh-key-0\") pod \"c897af06-c467-4ec3-aa76-c29a3ea3a462\" (UID: \"c897af06-c467-4ec3-aa76-c29a3ea3a462\") " Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.185665 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c897af06-c467-4ec3-aa76-c29a3ea3a462-kube-api-access-wwrhx" (OuterVolumeSpecName: "kube-api-access-wwrhx") pod "c897af06-c467-4ec3-aa76-c29a3ea3a462" (UID: "c897af06-c467-4ec3-aa76-c29a3ea3a462"). InnerVolumeSpecName "kube-api-access-wwrhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.190422 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c897af06-c467-4ec3-aa76-c29a3ea3a462" (UID: "c897af06-c467-4ec3-aa76-c29a3ea3a462"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.220073 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c897af06-c467-4ec3-aa76-c29a3ea3a462" (UID: "c897af06-c467-4ec3-aa76-c29a3ea3a462"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.227592 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "c897af06-c467-4ec3-aa76-c29a3ea3a462" (UID: "c897af06-c467-4ec3-aa76-c29a3ea3a462"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.234166 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c897af06-c467-4ec3-aa76-c29a3ea3a462" (UID: "c897af06-c467-4ec3-aa76-c29a3ea3a462"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.242847 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "c897af06-c467-4ec3-aa76-c29a3ea3a462" (UID: "c897af06-c467-4ec3-aa76-c29a3ea3a462"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.243262 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c897af06-c467-4ec3-aa76-c29a3ea3a462" (UID: "c897af06-c467-4ec3-aa76-c29a3ea3a462"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.259839 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-inventory" (OuterVolumeSpecName: "inventory") pod "c897af06-c467-4ec3-aa76-c29a3ea3a462" (UID: "c897af06-c467-4ec3-aa76-c29a3ea3a462"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.260156 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c897af06-c467-4ec3-aa76-c29a3ea3a462" (UID: "c897af06-c467-4ec3-aa76-c29a3ea3a462"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.267844 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "c897af06-c467-4ec3-aa76-c29a3ea3a462" (UID: "c897af06-c467-4ec3-aa76-c29a3ea3a462"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.277989 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c897af06-c467-4ec3-aa76-c29a3ea3a462" (UID: "c897af06-c467-4ec3-aa76-c29a3ea3a462"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.281192 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.281222 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwrhx\" (UniqueName: \"kubernetes.io/projected/c897af06-c467-4ec3-aa76-c29a3ea3a462-kube-api-access-wwrhx\") on node \"crc\" DevicePath \"\"" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.281232 4632 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.281241 4632 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.281249 4632 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.281257 4632 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.281265 4632 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.281277 4632 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.281287 4632 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.281295 4632 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.281305 4632 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c897af06-c467-4ec3-aa76-c29a3ea3a462-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.489365 4632 generic.go:334] "Generic (PLEG): container finished" podID="2c8fcea0-c62d-4557-87e8-e46dee66bc0f" containerID="bcb3e10a9cf45d3b9e94b97d132bc40b54e004f985fa2291ed2a2caea2737a0a" exitCode=0 Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.489433 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-92wff" event={"ID":"2c8fcea0-c62d-4557-87e8-e46dee66bc0f","Type":"ContainerDied","Data":"bcb3e10a9cf45d3b9e94b97d132bc40b54e004f985fa2291ed2a2caea2737a0a"} Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.491919 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" event={"ID":"c897af06-c467-4ec3-aa76-c29a3ea3a462","Type":"ContainerDied","Data":"d22525a75cd59bffbef3e23ac6c6e8d40f86fed8103fc04e25e403aefa74021b"} Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.492085 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d22525a75cd59bffbef3e23ac6c6e8d40f86fed8103fc04e25e403aefa74021b" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.492160 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dl4cq" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.492884 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.625129 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw"] Mar 13 10:51:06 crc kubenswrapper[4632]: E0313 10:51:06.625612 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c897af06-c467-4ec3-aa76-c29a3ea3a462" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.625638 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c897af06-c467-4ec3-aa76-c29a3ea3a462" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.625881 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="c897af06-c467-4ec3-aa76-c29a3ea3a462" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.626643 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.631065 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.634343 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.634343 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.634584 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.634879 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qrzsx" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.639618 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw"] Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.688653 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.689002 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.689183 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.689302 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.689574 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxhtb\" (UniqueName: \"kubernetes.io/projected/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-kube-api-access-sxhtb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc 
kubenswrapper[4632]: I0313 10:51:06.689708 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.689834 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.791370 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.791446 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.791493 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxhtb\" (UniqueName: \"kubernetes.io/projected/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-kube-api-access-sxhtb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.791536 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.791583 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.791612 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" 
(UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.791636 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.796999 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.797042 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.797253 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.797460 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.797733 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.798084 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.812348 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxhtb\" (UniqueName: \"kubernetes.io/projected/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-kube-api-access-sxhtb\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:06 crc kubenswrapper[4632]: I0313 10:51:06.950789 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" Mar 13 10:51:07 crc kubenswrapper[4632]: I0313 10:51:07.502713 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-92wff" event={"ID":"2c8fcea0-c62d-4557-87e8-e46dee66bc0f","Type":"ContainerStarted","Data":"9a3764407a73c02bf1afcfa64ffb7546776165b8c9cd2c18948a6549ce025879"} Mar 13 10:51:07 crc kubenswrapper[4632]: I0313 10:51:07.522864 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-92wff" podStartSLOduration=3.015256656 podStartE2EDuration="10.522844832s" podCreationTimestamp="2026-03-13 10:50:57 +0000 UTC" firstStartedPulling="2026-03-13 10:50:59.414910845 +0000 UTC m=+2833.437440978" lastFinishedPulling="2026-03-13 10:51:06.922499021 +0000 UTC m=+2840.945029154" observedRunningTime="2026-03-13 10:51:07.521073639 +0000 UTC m=+2841.543603782" watchObservedRunningTime="2026-03-13 10:51:07.522844832 +0000 UTC m=+2841.545374965" Mar 13 10:51:07 crc kubenswrapper[4632]: I0313 10:51:07.836227 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw"] Mar 13 10:51:07 crc kubenswrapper[4632]: I0313 10:51:07.893088 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:51:07 crc kubenswrapper[4632]: I0313 10:51:07.893151 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:51:08 crc kubenswrapper[4632]: I0313 10:51:08.510798 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" event={"ID":"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef","Type":"ContainerStarted","Data":"6ee75f7e51730eeb5489b2c1dd9bb7192cbbc507759459f37404c40b386bbdfc"} Mar 13 10:51:08 crc kubenswrapper[4632]: I0313 10:51:08.994043 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-92wff" podUID="2c8fcea0-c62d-4557-87e8-e46dee66bc0f" containerName="registry-server" probeResult="failure" output=< Mar 13 10:51:08 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:51:08 crc kubenswrapper[4632]: > Mar 13 10:51:09 crc kubenswrapper[4632]: I0313 10:51:09.529245 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" event={"ID":"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef","Type":"ContainerStarted","Data":"4b7c60398c3f33a538800ee1d366657cf73b7c4a7aff19c20e81ba7f933c624f"} Mar 13 10:51:10 crc kubenswrapper[4632]: I0313 10:51:10.461512 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:51:10 crc kubenswrapper[4632]: I0313 10:51:10.461593 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" 
podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:51:10 crc kubenswrapper[4632]: I0313 10:51:10.461647 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:51:10 crc kubenswrapper[4632]: I0313 10:51:10.462439 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a5ebd5748d892637db30e6f25b4cdb7397d5f5e2a1d221a622054fbf7f8b83f2"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 10:51:10 crc kubenswrapper[4632]: I0313 10:51:10.462500 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://a5ebd5748d892637db30e6f25b4cdb7397d5f5e2a1d221a622054fbf7f8b83f2" gracePeriod=600 Mar 13 10:51:11 crc kubenswrapper[4632]: I0313 10:51:11.552714 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="a5ebd5748d892637db30e6f25b4cdb7397d5f5e2a1d221a622054fbf7f8b83f2" exitCode=0 Mar 13 10:51:11 crc kubenswrapper[4632]: I0313 10:51:11.552776 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"a5ebd5748d892637db30e6f25b4cdb7397d5f5e2a1d221a622054fbf7f8b83f2"} Mar 13 10:51:11 crc kubenswrapper[4632]: I0313 10:51:11.553256 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"} Mar 13 10:51:11 crc kubenswrapper[4632]: I0313 10:51:11.553286 4632 scope.go:117] "RemoveContainer" containerID="7e6ad458a7a5f032b976d0f3e06f3cdb95d1f8fc235ab6b7d8f577ae0282cd20" Mar 13 10:51:11 crc kubenswrapper[4632]: I0313 10:51:11.574906 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" podStartSLOduration=5.099910812 podStartE2EDuration="5.574882118s" podCreationTimestamp="2026-03-13 10:51:06 +0000 UTC" firstStartedPulling="2026-03-13 10:51:07.808091547 +0000 UTC m=+2841.830621680" lastFinishedPulling="2026-03-13 10:51:08.283062853 +0000 UTC m=+2842.305592986" observedRunningTime="2026-03-13 10:51:09.55985902 +0000 UTC m=+2843.582389163" watchObservedRunningTime="2026-03-13 10:51:11.574882118 +0000 UTC m=+2845.597412251" Mar 13 10:51:18 crc kubenswrapper[4632]: I0313 10:51:18.933605 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-92wff" podUID="2c8fcea0-c62d-4557-87e8-e46dee66bc0f" containerName="registry-server" probeResult="failure" output=< Mar 13 10:51:18 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:51:18 crc kubenswrapper[4632]: > Mar 13 10:51:28 crc kubenswrapper[4632]: I0313 10:51:28.945721 4632 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-92wff" podUID="2c8fcea0-c62d-4557-87e8-e46dee66bc0f" containerName="registry-server" probeResult="failure" output=< Mar 13 10:51:28 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:51:28 crc kubenswrapper[4632]: > Mar 13 10:51:38 crc kubenswrapper[4632]: I0313 10:51:38.938177 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-92wff" podUID="2c8fcea0-c62d-4557-87e8-e46dee66bc0f" containerName="registry-server" probeResult="failure" output=< Mar 13 10:51:38 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:51:38 crc kubenswrapper[4632]: > Mar 13 10:51:47 crc kubenswrapper[4632]: I0313 10:51:47.977074 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:51:48 crc kubenswrapper[4632]: I0313 10:51:48.061829 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:51:48 crc kubenswrapper[4632]: I0313 10:51:48.968619 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-92wff"] Mar 13 10:51:49 crc kubenswrapper[4632]: I0313 10:51:49.898650 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-92wff" podUID="2c8fcea0-c62d-4557-87e8-e46dee66bc0f" containerName="registry-server" containerID="cri-o://9a3764407a73c02bf1afcfa64ffb7546776165b8c9cd2c18948a6549ce025879" gracePeriod=2 Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.417417 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.473644 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-catalog-content\") pod \"2c8fcea0-c62d-4557-87e8-e46dee66bc0f\" (UID: \"2c8fcea0-c62d-4557-87e8-e46dee66bc0f\") " Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.473711 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-utilities\") pod \"2c8fcea0-c62d-4557-87e8-e46dee66bc0f\" (UID: \"2c8fcea0-c62d-4557-87e8-e46dee66bc0f\") " Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.473794 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kgss\" (UniqueName: \"kubernetes.io/projected/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-kube-api-access-4kgss\") pod \"2c8fcea0-c62d-4557-87e8-e46dee66bc0f\" (UID: \"2c8fcea0-c62d-4557-87e8-e46dee66bc0f\") " Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.475464 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-utilities" (OuterVolumeSpecName: "utilities") pod "2c8fcea0-c62d-4557-87e8-e46dee66bc0f" (UID: "2c8fcea0-c62d-4557-87e8-e46dee66bc0f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.482179 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-kube-api-access-4kgss" (OuterVolumeSpecName: "kube-api-access-4kgss") pod "2c8fcea0-c62d-4557-87e8-e46dee66bc0f" (UID: "2c8fcea0-c62d-4557-87e8-e46dee66bc0f"). InnerVolumeSpecName "kube-api-access-4kgss". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.576578 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.576615 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kgss\" (UniqueName: \"kubernetes.io/projected/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-kube-api-access-4kgss\") on node \"crc\" DevicePath \"\"" Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.639683 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c8fcea0-c62d-4557-87e8-e46dee66bc0f" (UID: "2c8fcea0-c62d-4557-87e8-e46dee66bc0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.678155 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c8fcea0-c62d-4557-87e8-e46dee66bc0f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.908722 4632 generic.go:334] "Generic (PLEG): container finished" podID="2c8fcea0-c62d-4557-87e8-e46dee66bc0f" containerID="9a3764407a73c02bf1afcfa64ffb7546776165b8c9cd2c18948a6549ce025879" exitCode=0 Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.908761 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-92wff" event={"ID":"2c8fcea0-c62d-4557-87e8-e46dee66bc0f","Type":"ContainerDied","Data":"9a3764407a73c02bf1afcfa64ffb7546776165b8c9cd2c18948a6549ce025879"} Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.908779 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-92wff" Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.909835 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-92wff" event={"ID":"2c8fcea0-c62d-4557-87e8-e46dee66bc0f","Type":"ContainerDied","Data":"90a61bc58d69a6d619abaef868b90ed19460d1a8b36e5fcac632e0a2882d9502"} Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.909947 4632 scope.go:117] "RemoveContainer" containerID="9a3764407a73c02bf1afcfa64ffb7546776165b8c9cd2c18948a6549ce025879" Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.957651 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-92wff"] Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.958043 4632 scope.go:117] "RemoveContainer" containerID="bcb3e10a9cf45d3b9e94b97d132bc40b54e004f985fa2291ed2a2caea2737a0a" Mar 13 10:51:50 crc kubenswrapper[4632]: I0313 10:51:50.968615 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-92wff"] Mar 13 10:51:51 crc kubenswrapper[4632]: I0313 10:51:51.001567 4632 scope.go:117] "RemoveContainer" containerID="b4b5f53f947256c5e10bbf4379bf34669d1d1bf56e886053918a7d4efb4d213d" Mar 13 10:51:51 crc kubenswrapper[4632]: I0313 10:51:51.039499 4632 scope.go:117] "RemoveContainer" containerID="9a3764407a73c02bf1afcfa64ffb7546776165b8c9cd2c18948a6549ce025879" Mar 13 10:51:51 crc kubenswrapper[4632]: E0313 10:51:51.040073 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a3764407a73c02bf1afcfa64ffb7546776165b8c9cd2c18948a6549ce025879\": container with ID starting with 9a3764407a73c02bf1afcfa64ffb7546776165b8c9cd2c18948a6549ce025879 not found: ID does not exist" containerID="9a3764407a73c02bf1afcfa64ffb7546776165b8c9cd2c18948a6549ce025879" Mar 13 10:51:51 crc kubenswrapper[4632]: I0313 10:51:51.040143 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a3764407a73c02bf1afcfa64ffb7546776165b8c9cd2c18948a6549ce025879"} err="failed to get container status \"9a3764407a73c02bf1afcfa64ffb7546776165b8c9cd2c18948a6549ce025879\": rpc error: code = NotFound desc = could not find container \"9a3764407a73c02bf1afcfa64ffb7546776165b8c9cd2c18948a6549ce025879\": container with ID starting with 9a3764407a73c02bf1afcfa64ffb7546776165b8c9cd2c18948a6549ce025879 not found: ID does not exist" Mar 13 10:51:51 crc kubenswrapper[4632]: I0313 10:51:51.040203 4632 scope.go:117] "RemoveContainer" containerID="bcb3e10a9cf45d3b9e94b97d132bc40b54e004f985fa2291ed2a2caea2737a0a" Mar 13 10:51:51 crc kubenswrapper[4632]: E0313 10:51:51.040679 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcb3e10a9cf45d3b9e94b97d132bc40b54e004f985fa2291ed2a2caea2737a0a\": container with ID starting with bcb3e10a9cf45d3b9e94b97d132bc40b54e004f985fa2291ed2a2caea2737a0a not found: ID does not exist" containerID="bcb3e10a9cf45d3b9e94b97d132bc40b54e004f985fa2291ed2a2caea2737a0a" Mar 13 10:51:51 crc kubenswrapper[4632]: I0313 10:51:51.040715 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb3e10a9cf45d3b9e94b97d132bc40b54e004f985fa2291ed2a2caea2737a0a"} err="failed to get container status \"bcb3e10a9cf45d3b9e94b97d132bc40b54e004f985fa2291ed2a2caea2737a0a\": rpc error: code = NotFound desc = could not find container 
\"bcb3e10a9cf45d3b9e94b97d132bc40b54e004f985fa2291ed2a2caea2737a0a\": container with ID starting with bcb3e10a9cf45d3b9e94b97d132bc40b54e004f985fa2291ed2a2caea2737a0a not found: ID does not exist" Mar 13 10:51:51 crc kubenswrapper[4632]: I0313 10:51:51.040735 4632 scope.go:117] "RemoveContainer" containerID="b4b5f53f947256c5e10bbf4379bf34669d1d1bf56e886053918a7d4efb4d213d" Mar 13 10:51:51 crc kubenswrapper[4632]: E0313 10:51:51.041345 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4b5f53f947256c5e10bbf4379bf34669d1d1bf56e886053918a7d4efb4d213d\": container with ID starting with b4b5f53f947256c5e10bbf4379bf34669d1d1bf56e886053918a7d4efb4d213d not found: ID does not exist" containerID="b4b5f53f947256c5e10bbf4379bf34669d1d1bf56e886053918a7d4efb4d213d" Mar 13 10:51:51 crc kubenswrapper[4632]: I0313 10:51:51.041396 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4b5f53f947256c5e10bbf4379bf34669d1d1bf56e886053918a7d4efb4d213d"} err="failed to get container status \"b4b5f53f947256c5e10bbf4379bf34669d1d1bf56e886053918a7d4efb4d213d\": rpc error: code = NotFound desc = could not find container \"b4b5f53f947256c5e10bbf4379bf34669d1d1bf56e886053918a7d4efb4d213d\": container with ID starting with b4b5f53f947256c5e10bbf4379bf34669d1d1bf56e886053918a7d4efb4d213d not found: ID does not exist" Mar 13 10:51:52 crc kubenswrapper[4632]: I0313 10:51:52.057636 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c8fcea0-c62d-4557-87e8-e46dee66bc0f" path="/var/lib/kubelet/pods/2c8fcea0-c62d-4557-87e8-e46dee66bc0f/volumes" Mar 13 10:52:00 crc kubenswrapper[4632]: I0313 10:52:00.157823 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556652-ghswk"] Mar 13 10:52:00 crc kubenswrapper[4632]: E0313 10:52:00.158757 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c8fcea0-c62d-4557-87e8-e46dee66bc0f" containerName="extract-content" Mar 13 10:52:00 crc kubenswrapper[4632]: I0313 10:52:00.158770 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c8fcea0-c62d-4557-87e8-e46dee66bc0f" containerName="extract-content" Mar 13 10:52:00 crc kubenswrapper[4632]: E0313 10:52:00.158784 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c8fcea0-c62d-4557-87e8-e46dee66bc0f" containerName="registry-server" Mar 13 10:52:00 crc kubenswrapper[4632]: I0313 10:52:00.158789 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c8fcea0-c62d-4557-87e8-e46dee66bc0f" containerName="registry-server" Mar 13 10:52:00 crc kubenswrapper[4632]: E0313 10:52:00.158812 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c8fcea0-c62d-4557-87e8-e46dee66bc0f" containerName="extract-utilities" Mar 13 10:52:00 crc kubenswrapper[4632]: I0313 10:52:00.158820 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c8fcea0-c62d-4557-87e8-e46dee66bc0f" containerName="extract-utilities" Mar 13 10:52:00 crc kubenswrapper[4632]: I0313 10:52:00.159024 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c8fcea0-c62d-4557-87e8-e46dee66bc0f" containerName="registry-server" Mar 13 10:52:00 crc kubenswrapper[4632]: I0313 10:52:00.159653 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556652-ghswk" Mar 13 10:52:00 crc kubenswrapper[4632]: I0313 10:52:00.162861 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:52:00 crc kubenswrapper[4632]: I0313 10:52:00.162958 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:52:00 crc kubenswrapper[4632]: I0313 10:52:00.165230 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:52:00 crc kubenswrapper[4632]: I0313 10:52:00.184132 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556652-ghswk"] Mar 13 10:52:00 crc kubenswrapper[4632]: I0313 10:52:00.212152 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttnfv\" (UniqueName: \"kubernetes.io/projected/27a43bdf-8be4-458b-99ff-4135a684962a-kube-api-access-ttnfv\") pod \"auto-csr-approver-29556652-ghswk\" (UID: \"27a43bdf-8be4-458b-99ff-4135a684962a\") " pod="openshift-infra/auto-csr-approver-29556652-ghswk" Mar 13 10:52:00 crc kubenswrapper[4632]: I0313 10:52:00.313386 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttnfv\" (UniqueName: \"kubernetes.io/projected/27a43bdf-8be4-458b-99ff-4135a684962a-kube-api-access-ttnfv\") pod \"auto-csr-approver-29556652-ghswk\" (UID: \"27a43bdf-8be4-458b-99ff-4135a684962a\") " pod="openshift-infra/auto-csr-approver-29556652-ghswk" Mar 13 10:52:00 crc kubenswrapper[4632]: I0313 10:52:00.334553 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttnfv\" (UniqueName: \"kubernetes.io/projected/27a43bdf-8be4-458b-99ff-4135a684962a-kube-api-access-ttnfv\") pod \"auto-csr-approver-29556652-ghswk\" (UID: \"27a43bdf-8be4-458b-99ff-4135a684962a\") " pod="openshift-infra/auto-csr-approver-29556652-ghswk" Mar 13 10:52:00 crc kubenswrapper[4632]: I0313 10:52:00.482554 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556652-ghswk" Mar 13 10:52:00 crc kubenswrapper[4632]: I0313 10:52:00.940956 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556652-ghswk"] Mar 13 10:52:00 crc kubenswrapper[4632]: I0313 10:52:00.993723 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556652-ghswk" event={"ID":"27a43bdf-8be4-458b-99ff-4135a684962a","Type":"ContainerStarted","Data":"6f4c18d16927eaf56fce1884f8280121cccf26e3a8d837a33fa467c468e2742c"} Mar 13 10:52:03 crc kubenswrapper[4632]: I0313 10:52:03.016102 4632 generic.go:334] "Generic (PLEG): container finished" podID="27a43bdf-8be4-458b-99ff-4135a684962a" containerID="5c30aacc6e4fc680fbb7e668912e806d2164d1cac4b7e0e4c5e8b4688d3e76cd" exitCode=0 Mar 13 10:52:03 crc kubenswrapper[4632]: I0313 10:52:03.016529 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556652-ghswk" event={"ID":"27a43bdf-8be4-458b-99ff-4135a684962a","Type":"ContainerDied","Data":"5c30aacc6e4fc680fbb7e668912e806d2164d1cac4b7e0e4c5e8b4688d3e76cd"} Mar 13 10:52:04 crc kubenswrapper[4632]: I0313 10:52:04.441623 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556652-ghswk" Mar 13 10:52:04 crc kubenswrapper[4632]: I0313 10:52:04.497733 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttnfv\" (UniqueName: \"kubernetes.io/projected/27a43bdf-8be4-458b-99ff-4135a684962a-kube-api-access-ttnfv\") pod \"27a43bdf-8be4-458b-99ff-4135a684962a\" (UID: \"27a43bdf-8be4-458b-99ff-4135a684962a\") " Mar 13 10:52:04 crc kubenswrapper[4632]: I0313 10:52:04.517863 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a43bdf-8be4-458b-99ff-4135a684962a-kube-api-access-ttnfv" (OuterVolumeSpecName: "kube-api-access-ttnfv") pod "27a43bdf-8be4-458b-99ff-4135a684962a" (UID: "27a43bdf-8be4-458b-99ff-4135a684962a"). InnerVolumeSpecName "kube-api-access-ttnfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:52:04 crc kubenswrapper[4632]: I0313 10:52:04.601368 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttnfv\" (UniqueName: \"kubernetes.io/projected/27a43bdf-8be4-458b-99ff-4135a684962a-kube-api-access-ttnfv\") on node \"crc\" DevicePath \"\"" Mar 13 10:52:05 crc kubenswrapper[4632]: I0313 10:52:05.035544 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556652-ghswk" event={"ID":"27a43bdf-8be4-458b-99ff-4135a684962a","Type":"ContainerDied","Data":"6f4c18d16927eaf56fce1884f8280121cccf26e3a8d837a33fa467c468e2742c"} Mar 13 10:52:05 crc kubenswrapper[4632]: I0313 10:52:05.035594 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f4c18d16927eaf56fce1884f8280121cccf26e3a8d837a33fa467c468e2742c" Mar 13 10:52:05 crc kubenswrapper[4632]: I0313 10:52:05.035607 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556652-ghswk" Mar 13 10:52:05 crc kubenswrapper[4632]: I0313 10:52:05.562026 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556646-blkxp"] Mar 13 10:52:05 crc kubenswrapper[4632]: I0313 10:52:05.572578 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556646-blkxp"] Mar 13 10:52:06 crc kubenswrapper[4632]: I0313 10:52:06.055496 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93948d53-dbf3-47ce-8af0-bee10cc7e246" path="/var/lib/kubelet/pods/93948d53-dbf3-47ce-8af0-bee10cc7e246/volumes" Mar 13 10:52:10 crc kubenswrapper[4632]: I0313 10:52:10.013834 4632 scope.go:117] "RemoveContainer" containerID="92f6939c452dda4592aa326adcecce982f4fafb95f93ce909a101db10372c2ab" Mar 13 10:53:10 crc kubenswrapper[4632]: I0313 10:53:10.461094 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:53:10 crc kubenswrapper[4632]: I0313 10:53:10.461488 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:53:27 crc kubenswrapper[4632]: I0313 10:53:27.868320 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-774lb"] Mar 13 10:53:27 crc kubenswrapper[4632]: E0313 10:53:27.869320 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a43bdf-8be4-458b-99ff-4135a684962a" containerName="oc" Mar 13 10:53:27 crc kubenswrapper[4632]: I0313 10:53:27.871811 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a43bdf-8be4-458b-99ff-4135a684962a" containerName="oc" Mar 13 10:53:27 crc kubenswrapper[4632]: I0313 10:53:27.872117 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a43bdf-8be4-458b-99ff-4135a684962a" containerName="oc" Mar 13 10:53:27 crc kubenswrapper[4632]: I0313 10:53:27.873534 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-774lb" Mar 13 10:53:27 crc kubenswrapper[4632]: I0313 10:53:27.888625 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-774lb"] Mar 13 10:53:28 crc kubenswrapper[4632]: I0313 10:53:28.036288 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/560629a7-9dec-4eb7-8c73-a8f097293daa-utilities\") pod \"community-operators-774lb\" (UID: \"560629a7-9dec-4eb7-8c73-a8f097293daa\") " pod="openshift-marketplace/community-operators-774lb" Mar 13 10:53:28 crc kubenswrapper[4632]: I0313 10:53:28.036338 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc4f5\" (UniqueName: \"kubernetes.io/projected/560629a7-9dec-4eb7-8c73-a8f097293daa-kube-api-access-rc4f5\") pod \"community-operators-774lb\" (UID: \"560629a7-9dec-4eb7-8c73-a8f097293daa\") " pod="openshift-marketplace/community-operators-774lb" Mar 13 10:53:28 crc kubenswrapper[4632]: I0313 10:53:28.036477 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/560629a7-9dec-4eb7-8c73-a8f097293daa-catalog-content\") pod \"community-operators-774lb\" (UID: \"560629a7-9dec-4eb7-8c73-a8f097293daa\") " pod="openshift-marketplace/community-operators-774lb" Mar 13 10:53:28 crc kubenswrapper[4632]: I0313 10:53:28.137968 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/560629a7-9dec-4eb7-8c73-a8f097293daa-catalog-content\") pod \"community-operators-774lb\" (UID: \"560629a7-9dec-4eb7-8c73-a8f097293daa\") " pod="openshift-marketplace/community-operators-774lb" Mar 13 10:53:28 crc kubenswrapper[4632]: I0313 10:53:28.138068 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/560629a7-9dec-4eb7-8c73-a8f097293daa-utilities\") pod \"community-operators-774lb\" (UID: \"560629a7-9dec-4eb7-8c73-a8f097293daa\") " pod="openshift-marketplace/community-operators-774lb" Mar 13 10:53:28 crc kubenswrapper[4632]: I0313 10:53:28.138109 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc4f5\" (UniqueName: \"kubernetes.io/projected/560629a7-9dec-4eb7-8c73-a8f097293daa-kube-api-access-rc4f5\") pod \"community-operators-774lb\" (UID: \"560629a7-9dec-4eb7-8c73-a8f097293daa\") " pod="openshift-marketplace/community-operators-774lb" Mar 13 10:53:28 crc kubenswrapper[4632]: I0313 10:53:28.138510 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/560629a7-9dec-4eb7-8c73-a8f097293daa-catalog-content\") pod \"community-operators-774lb\" (UID: \"560629a7-9dec-4eb7-8c73-a8f097293daa\") " pod="openshift-marketplace/community-operators-774lb" Mar 13 10:53:28 crc kubenswrapper[4632]: I0313 10:53:28.138610 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/560629a7-9dec-4eb7-8c73-a8f097293daa-utilities\") pod \"community-operators-774lb\" (UID: \"560629a7-9dec-4eb7-8c73-a8f097293daa\") " pod="openshift-marketplace/community-operators-774lb" Mar 13 10:53:28 crc kubenswrapper[4632]: I0313 10:53:28.162278 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rc4f5\" (UniqueName: \"kubernetes.io/projected/560629a7-9dec-4eb7-8c73-a8f097293daa-kube-api-access-rc4f5\") pod \"community-operators-774lb\" (UID: \"560629a7-9dec-4eb7-8c73-a8f097293daa\") " pod="openshift-marketplace/community-operators-774lb" Mar 13 10:53:28 crc kubenswrapper[4632]: I0313 10:53:28.195470 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-774lb" Mar 13 10:53:28 crc kubenswrapper[4632]: I0313 10:53:28.594357 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-774lb"] Mar 13 10:53:29 crc kubenswrapper[4632]: I0313 10:53:29.126036 4632 generic.go:334] "Generic (PLEG): container finished" podID="560629a7-9dec-4eb7-8c73-a8f097293daa" containerID="f9fbff406d14d8da11f86810a1b1b035215dd5c6179ac20e5ddd29194bd3f5d6" exitCode=0 Mar 13 10:53:29 crc kubenswrapper[4632]: I0313 10:53:29.126109 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-774lb" event={"ID":"560629a7-9dec-4eb7-8c73-a8f097293daa","Type":"ContainerDied","Data":"f9fbff406d14d8da11f86810a1b1b035215dd5c6179ac20e5ddd29194bd3f5d6"} Mar 13 10:53:29 crc kubenswrapper[4632]: I0313 10:53:29.126300 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-774lb" event={"ID":"560629a7-9dec-4eb7-8c73-a8f097293daa","Type":"ContainerStarted","Data":"4ae9494de264dfa5dcfb2c9e6166d64886aa8f640f54445b6eadb498ad356c8c"} Mar 13 10:53:34 crc kubenswrapper[4632]: I0313 10:53:34.178701 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-774lb" event={"ID":"560629a7-9dec-4eb7-8c73-a8f097293daa","Type":"ContainerStarted","Data":"d5e77ef64ff23f92ed48258b81d7d0310ada291a691626009608a75068a59888"} Mar 13 10:53:35 crc kubenswrapper[4632]: I0313 10:53:35.192074 4632 generic.go:334] "Generic (PLEG): container finished" podID="560629a7-9dec-4eb7-8c73-a8f097293daa" containerID="d5e77ef64ff23f92ed48258b81d7d0310ada291a691626009608a75068a59888" exitCode=0 Mar 13 10:53:35 crc kubenswrapper[4632]: I0313 10:53:35.192239 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-774lb" event={"ID":"560629a7-9dec-4eb7-8c73-a8f097293daa","Type":"ContainerDied","Data":"d5e77ef64ff23f92ed48258b81d7d0310ada291a691626009608a75068a59888"} Mar 13 10:53:35 crc kubenswrapper[4632]: I0313 10:53:35.192704 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-774lb" event={"ID":"560629a7-9dec-4eb7-8c73-a8f097293daa","Type":"ContainerStarted","Data":"1b995d3ea46318dbc1da1ae83e15d1a1943f08993ba4772ae9cb4b946ae10e86"} Mar 13 10:53:35 crc kubenswrapper[4632]: I0313 10:53:35.228176 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-774lb" podStartSLOduration=2.410377626 podStartE2EDuration="8.228154509s" podCreationTimestamp="2026-03-13 10:53:27 +0000 UTC" firstStartedPulling="2026-03-13 10:53:29.128371059 +0000 UTC m=+2983.150901202" lastFinishedPulling="2026-03-13 10:53:34.946147942 +0000 UTC m=+2988.968678085" observedRunningTime="2026-03-13 10:53:35.221011094 +0000 UTC m=+2989.243541227" watchObservedRunningTime="2026-03-13 10:53:35.228154509 +0000 UTC m=+2989.250684652" Mar 13 10:53:38 crc kubenswrapper[4632]: I0313 10:53:38.196349 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-774lb" Mar 13 10:53:38 crc kubenswrapper[4632]: I0313 10:53:38.196613 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-774lb" Mar 13 10:53:38 crc kubenswrapper[4632]: I0313 10:53:38.245906 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-774lb" Mar 13 10:53:40 crc kubenswrapper[4632]: I0313 10:53:40.460721 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:53:40 crc kubenswrapper[4632]: I0313 10:53:40.461083 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:53:48 crc kubenswrapper[4632]: I0313 10:53:48.253230 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-774lb" Mar 13 10:53:48 crc kubenswrapper[4632]: I0313 10:53:48.356738 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-774lb"] Mar 13 10:53:48 crc kubenswrapper[4632]: I0313 10:53:48.418740 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fdtl7"] Mar 13 10:53:48 crc kubenswrapper[4632]: I0313 10:53:48.418997 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fdtl7" podUID="a01bcaf0-e2c1-495b-bc6d-a57978c7817b" containerName="registry-server" containerID="cri-o://e2fda5f5d13b5663527978c3fcfd3cccd016f3f222280313a4a0c5c88b5212d7" gracePeriod=2 Mar 13 10:53:48 crc kubenswrapper[4632]: I0313 10:53:48.894769 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fdtl7" Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.068510 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-utilities\") pod \"a01bcaf0-e2c1-495b-bc6d-a57978c7817b\" (UID: \"a01bcaf0-e2c1-495b-bc6d-a57978c7817b\") " Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.068880 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgb8q\" (UniqueName: \"kubernetes.io/projected/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-kube-api-access-sgb8q\") pod \"a01bcaf0-e2c1-495b-bc6d-a57978c7817b\" (UID: \"a01bcaf0-e2c1-495b-bc6d-a57978c7817b\") " Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.069217 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-catalog-content\") pod \"a01bcaf0-e2c1-495b-bc6d-a57978c7817b\" (UID: \"a01bcaf0-e2c1-495b-bc6d-a57978c7817b\") " Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.069646 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-utilities" (OuterVolumeSpecName: "utilities") pod "a01bcaf0-e2c1-495b-bc6d-a57978c7817b" (UID: "a01bcaf0-e2c1-495b-bc6d-a57978c7817b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.070443 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.074570 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-kube-api-access-sgb8q" (OuterVolumeSpecName: "kube-api-access-sgb8q") pod "a01bcaf0-e2c1-495b-bc6d-a57978c7817b" (UID: "a01bcaf0-e2c1-495b-bc6d-a57978c7817b"). InnerVolumeSpecName "kube-api-access-sgb8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.131511 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a01bcaf0-e2c1-495b-bc6d-a57978c7817b" (UID: "a01bcaf0-e2c1-495b-bc6d-a57978c7817b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.172544 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.172584 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgb8q\" (UniqueName: \"kubernetes.io/projected/a01bcaf0-e2c1-495b-bc6d-a57978c7817b-kube-api-access-sgb8q\") on node \"crc\" DevicePath \"\"" Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.328198 4632 generic.go:334] "Generic (PLEG): container finished" podID="a01bcaf0-e2c1-495b-bc6d-a57978c7817b" containerID="e2fda5f5d13b5663527978c3fcfd3cccd016f3f222280313a4a0c5c88b5212d7" exitCode=0 Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.328240 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdtl7" event={"ID":"a01bcaf0-e2c1-495b-bc6d-a57978c7817b","Type":"ContainerDied","Data":"e2fda5f5d13b5663527978c3fcfd3cccd016f3f222280313a4a0c5c88b5212d7"} Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.328266 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdtl7" event={"ID":"a01bcaf0-e2c1-495b-bc6d-a57978c7817b","Type":"ContainerDied","Data":"b009a6765de2ecd804f8d033b53e49abb32cd27ba05ed7eacca8b430a75a2575"} Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.328285 4632 scope.go:117] "RemoveContainer" containerID="e2fda5f5d13b5663527978c3fcfd3cccd016f3f222280313a4a0c5c88b5212d7" Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.328417 4632 util.go:48] "No ready sandbox for pod can be found. 
Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.328417 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fdtl7"
Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.364106 4632 scope.go:117] "RemoveContainer" containerID="90d2b05d55e78b9f9829c2ee4bf7bbc01510b17dfbff9b47dda76cd10b610010"
Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.379471 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fdtl7"]
Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.387813 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fdtl7"]
Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.412908 4632 scope.go:117] "RemoveContainer" containerID="e0b81f7099d2b6e974290907876a0f00faa065b0c428d84017f9ee0db229bc73"
Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.449767 4632 scope.go:117] "RemoveContainer" containerID="e2fda5f5d13b5663527978c3fcfd3cccd016f3f222280313a4a0c5c88b5212d7"
Mar 13 10:53:49 crc kubenswrapper[4632]: E0313 10:53:49.450475 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2fda5f5d13b5663527978c3fcfd3cccd016f3f222280313a4a0c5c88b5212d7\": container with ID starting with e2fda5f5d13b5663527978c3fcfd3cccd016f3f222280313a4a0c5c88b5212d7 not found: ID does not exist" containerID="e2fda5f5d13b5663527978c3fcfd3cccd016f3f222280313a4a0c5c88b5212d7"
Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.450527 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2fda5f5d13b5663527978c3fcfd3cccd016f3f222280313a4a0c5c88b5212d7"} err="failed to get container status \"e2fda5f5d13b5663527978c3fcfd3cccd016f3f222280313a4a0c5c88b5212d7\": rpc error: code = NotFound desc = could not find container \"e2fda5f5d13b5663527978c3fcfd3cccd016f3f222280313a4a0c5c88b5212d7\": container with ID starting with e2fda5f5d13b5663527978c3fcfd3cccd016f3f222280313a4a0c5c88b5212d7 not found: ID does not exist"
Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.450560 4632 scope.go:117] "RemoveContainer" containerID="90d2b05d55e78b9f9829c2ee4bf7bbc01510b17dfbff9b47dda76cd10b610010"
Mar 13 10:53:49 crc kubenswrapper[4632]: E0313 10:53:49.450970 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90d2b05d55e78b9f9829c2ee4bf7bbc01510b17dfbff9b47dda76cd10b610010\": container with ID starting with 90d2b05d55e78b9f9829c2ee4bf7bbc01510b17dfbff9b47dda76cd10b610010 not found: ID does not exist" containerID="90d2b05d55e78b9f9829c2ee4bf7bbc01510b17dfbff9b47dda76cd10b610010"
Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.451012 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90d2b05d55e78b9f9829c2ee4bf7bbc01510b17dfbff9b47dda76cd10b610010"} err="failed to get container status \"90d2b05d55e78b9f9829c2ee4bf7bbc01510b17dfbff9b47dda76cd10b610010\": rpc error: code = NotFound desc = could not find container \"90d2b05d55e78b9f9829c2ee4bf7bbc01510b17dfbff9b47dda76cd10b610010\": container with ID starting with 90d2b05d55e78b9f9829c2ee4bf7bbc01510b17dfbff9b47dda76cd10b610010 not found: ID does not exist"
Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.451036 4632 scope.go:117] "RemoveContainer" containerID="e0b81f7099d2b6e974290907876a0f00faa065b0c428d84017f9ee0db229bc73"
failed" err="rpc error: code = NotFound desc = could not find container \"e0b81f7099d2b6e974290907876a0f00faa065b0c428d84017f9ee0db229bc73\": container with ID starting with e0b81f7099d2b6e974290907876a0f00faa065b0c428d84017f9ee0db229bc73 not found: ID does not exist" containerID="e0b81f7099d2b6e974290907876a0f00faa065b0c428d84017f9ee0db229bc73" Mar 13 10:53:49 crc kubenswrapper[4632]: I0313 10:53:49.451312 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b81f7099d2b6e974290907876a0f00faa065b0c428d84017f9ee0db229bc73"} err="failed to get container status \"e0b81f7099d2b6e974290907876a0f00faa065b0c428d84017f9ee0db229bc73\": rpc error: code = NotFound desc = could not find container \"e0b81f7099d2b6e974290907876a0f00faa065b0c428d84017f9ee0db229bc73\": container with ID starting with e0b81f7099d2b6e974290907876a0f00faa065b0c428d84017f9ee0db229bc73 not found: ID does not exist" Mar 13 10:53:50 crc kubenswrapper[4632]: I0313 10:53:50.058502 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a01bcaf0-e2c1-495b-bc6d-a57978c7817b" path="/var/lib/kubelet/pods/a01bcaf0-e2c1-495b-bc6d-a57978c7817b/volumes" Mar 13 10:54:00 crc kubenswrapper[4632]: I0313 10:54:00.143986 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556654-htcrm"] Mar 13 10:54:00 crc kubenswrapper[4632]: E0313 10:54:00.144900 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01bcaf0-e2c1-495b-bc6d-a57978c7817b" containerName="extract-utilities" Mar 13 10:54:00 crc kubenswrapper[4632]: I0313 10:54:00.144914 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01bcaf0-e2c1-495b-bc6d-a57978c7817b" containerName="extract-utilities" Mar 13 10:54:00 crc kubenswrapper[4632]: E0313 10:54:00.144956 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01bcaf0-e2c1-495b-bc6d-a57978c7817b" containerName="registry-server" Mar 13 10:54:00 crc kubenswrapper[4632]: I0313 10:54:00.144962 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01bcaf0-e2c1-495b-bc6d-a57978c7817b" containerName="registry-server" Mar 13 10:54:00 crc kubenswrapper[4632]: E0313 10:54:00.144976 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01bcaf0-e2c1-495b-bc6d-a57978c7817b" containerName="extract-content" Mar 13 10:54:00 crc kubenswrapper[4632]: I0313 10:54:00.144983 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01bcaf0-e2c1-495b-bc6d-a57978c7817b" containerName="extract-content" Mar 13 10:54:00 crc kubenswrapper[4632]: I0313 10:54:00.145161 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="a01bcaf0-e2c1-495b-bc6d-a57978c7817b" containerName="registry-server" Mar 13 10:54:00 crc kubenswrapper[4632]: I0313 10:54:00.145729 4632 util.go:30] "No sandbox for pod can be found. 
Mar 13 10:54:00 crc kubenswrapper[4632]: I0313 10:54:00.145729 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556654-htcrm"
Mar 13 10:54:00 crc kubenswrapper[4632]: I0313 10:54:00.155805 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 10:54:00 crc kubenswrapper[4632]: I0313 10:54:00.156009 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 10:54:00 crc kubenswrapper[4632]: I0313 10:54:00.156193 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 10:54:00 crc kubenswrapper[4632]: I0313 10:54:00.160781 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556654-htcrm"]
Mar 13 10:54:00 crc kubenswrapper[4632]: I0313 10:54:00.190033 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8t4l\" (UniqueName: \"kubernetes.io/projected/ba5d59c2-ece8-4b66-9a10-c3ef740d7e45-kube-api-access-x8t4l\") pod \"auto-csr-approver-29556654-htcrm\" (UID: \"ba5d59c2-ece8-4b66-9a10-c3ef740d7e45\") " pod="openshift-infra/auto-csr-approver-29556654-htcrm"
Mar 13 10:54:00 crc kubenswrapper[4632]: I0313 10:54:00.292154 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8t4l\" (UniqueName: \"kubernetes.io/projected/ba5d59c2-ece8-4b66-9a10-c3ef740d7e45-kube-api-access-x8t4l\") pod \"auto-csr-approver-29556654-htcrm\" (UID: \"ba5d59c2-ece8-4b66-9a10-c3ef740d7e45\") " pod="openshift-infra/auto-csr-approver-29556654-htcrm"
Mar 13 10:54:00 crc kubenswrapper[4632]: I0313 10:54:00.312174 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8t4l\" (UniqueName: \"kubernetes.io/projected/ba5d59c2-ece8-4b66-9a10-c3ef740d7e45-kube-api-access-x8t4l\") pod \"auto-csr-approver-29556654-htcrm\" (UID: \"ba5d59c2-ece8-4b66-9a10-c3ef740d7e45\") " pod="openshift-infra/auto-csr-approver-29556654-htcrm"
Mar 13 10:54:00 crc kubenswrapper[4632]: I0313 10:54:00.472044 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556654-htcrm"
Mar 13 10:54:01 crc kubenswrapper[4632]: I0313 10:54:01.060807 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556654-htcrm"]
Mar 13 10:54:01 crc kubenswrapper[4632]: I0313 10:54:01.446286 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556654-htcrm" event={"ID":"ba5d59c2-ece8-4b66-9a10-c3ef740d7e45","Type":"ContainerStarted","Data":"56848811c020d11ba44de736b6154580ea88d07320adf5703633c722f7971769"}
Mar 13 10:54:03 crc kubenswrapper[4632]: I0313 10:54:03.464900 4632 generic.go:334] "Generic (PLEG): container finished" podID="ba5d59c2-ece8-4b66-9a10-c3ef740d7e45" containerID="0151dac58382ec9dba1fe485dee8519ba248333bc8e6aeae5349b66a4c5fa931" exitCode=0
Mar 13 10:54:03 crc kubenswrapper[4632]: I0313 10:54:03.464991 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556654-htcrm" event={"ID":"ba5d59c2-ece8-4b66-9a10-c3ef740d7e45","Type":"ContainerDied","Data":"0151dac58382ec9dba1fe485dee8519ba248333bc8e6aeae5349b66a4c5fa931"}
Mar 13 10:54:04 crc kubenswrapper[4632]: I0313 10:54:04.870354 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556654-htcrm"
Mar 13 10:54:04 crc kubenswrapper[4632]: I0313 10:54:04.982005 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8t4l\" (UniqueName: \"kubernetes.io/projected/ba5d59c2-ece8-4b66-9a10-c3ef740d7e45-kube-api-access-x8t4l\") pod \"ba5d59c2-ece8-4b66-9a10-c3ef740d7e45\" (UID: \"ba5d59c2-ece8-4b66-9a10-c3ef740d7e45\") "
Mar 13 10:54:04 crc kubenswrapper[4632]: I0313 10:54:04.990699 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba5d59c2-ece8-4b66-9a10-c3ef740d7e45-kube-api-access-x8t4l" (OuterVolumeSpecName: "kube-api-access-x8t4l") pod "ba5d59c2-ece8-4b66-9a10-c3ef740d7e45" (UID: "ba5d59c2-ece8-4b66-9a10-c3ef740d7e45"). InnerVolumeSpecName "kube-api-access-x8t4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:54:05 crc kubenswrapper[4632]: I0313 10:54:05.084919 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8t4l\" (UniqueName: \"kubernetes.io/projected/ba5d59c2-ece8-4b66-9a10-c3ef740d7e45-kube-api-access-x8t4l\") on node \"crc\" DevicePath \"\""
Mar 13 10:54:05 crc kubenswrapper[4632]: I0313 10:54:05.485044 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556654-htcrm" event={"ID":"ba5d59c2-ece8-4b66-9a10-c3ef740d7e45","Type":"ContainerDied","Data":"56848811c020d11ba44de736b6154580ea88d07320adf5703633c722f7971769"}
Mar 13 10:54:05 crc kubenswrapper[4632]: I0313 10:54:05.485376 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56848811c020d11ba44de736b6154580ea88d07320adf5703633c722f7971769"
Mar 13 10:54:05 crc kubenswrapper[4632]: I0313 10:54:05.485087 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556654-htcrm"
Mar 13 10:54:05 crc kubenswrapper[4632]: I0313 10:54:05.966493 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556648-hbbkx"]
Mar 13 10:54:05 crc kubenswrapper[4632]: I0313 10:54:05.973502 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556648-hbbkx"]
Mar 13 10:54:06 crc kubenswrapper[4632]: I0313 10:54:06.057036 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ab47075-381b-45d4-b6c8-c64ae6433ef1" path="/var/lib/kubelet/pods/5ab47075-381b-45d4-b6c8-c64ae6433ef1/volumes"
Mar 13 10:54:08 crc kubenswrapper[4632]: I0313 10:54:08.518196 4632 generic.go:334] "Generic (PLEG): container finished" podID="4656b24f-4b10-481a-ba5b-1c17e5f2f7ef" containerID="4b7c60398c3f33a538800ee1d366657cf73b7c4a7aff19c20e81ba7f933c624f" exitCode=0
Mar 13 10:54:08 crc kubenswrapper[4632]: I0313 10:54:08.518463 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" event={"ID":"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef","Type":"ContainerDied","Data":"4b7c60398c3f33a538800ee1d366657cf73b7c4a7aff19c20e81ba7f933c624f"}
Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.155854 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw"
Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.238758 4632 scope.go:117] "RemoveContainer" containerID="ac1c75bd040311821d7426607144ebc256c3f11219f7a26012d50c7ce3c315ba"
Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.295791 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ssh-key-openstack-edpm-ipam\") pod \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") "
Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.295907 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-1\") pod \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") "
Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.295973 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-telemetry-combined-ca-bundle\") pod \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") "
Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.295999 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-2\") pod \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") "
Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.296049 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxhtb\" (UniqueName: \"kubernetes.io/projected/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-kube-api-access-sxhtb\") pod \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") "
Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.296151 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-inventory\") pod \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") "
Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.296206 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-0\") pod \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\" (UID: \"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef\") "
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.303551 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "4656b24f-4b10-481a-ba5b-1c17e5f2f7ef" (UID: "4656b24f-4b10-481a-ba5b-1c17e5f2f7ef"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.330077 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "4656b24f-4b10-481a-ba5b-1c17e5f2f7ef" (UID: "4656b24f-4b10-481a-ba5b-1c17e5f2f7ef"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.330346 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "4656b24f-4b10-481a-ba5b-1c17e5f2f7ef" (UID: "4656b24f-4b10-481a-ba5b-1c17e5f2f7ef"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.333272 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "4656b24f-4b10-481a-ba5b-1c17e5f2f7ef" (UID: "4656b24f-4b10-481a-ba5b-1c17e5f2f7ef"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.340309 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-inventory" (OuterVolumeSpecName: "inventory") pod "4656b24f-4b10-481a-ba5b-1c17e5f2f7ef" (UID: "4656b24f-4b10-481a-ba5b-1c17e5f2f7ef"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.348049 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4656b24f-4b10-481a-ba5b-1c17e5f2f7ef" (UID: "4656b24f-4b10-481a-ba5b-1c17e5f2f7ef"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.398797 4632 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.398840 4632 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.398859 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.398877 4632 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.398891 4632 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.398903 4632 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.398918 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxhtb\" (UniqueName: \"kubernetes.io/projected/4656b24f-4b10-481a-ba5b-1c17e5f2f7ef-kube-api-access-sxhtb\") on node \"crc\" DevicePath \"\"" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.461120 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.461196 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.461259 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.462421 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.462528 4632 
Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.462528 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d" gracePeriod=600
Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.542713 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw" event={"ID":"4656b24f-4b10-481a-ba5b-1c17e5f2f7ef","Type":"ContainerDied","Data":"6ee75f7e51730eeb5489b2c1dd9bb7192cbbc507759459f37404c40b386bbdfc"}
Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.543022 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ee75f7e51730eeb5489b2c1dd9bb7192cbbc507759459f37404c40b386bbdfc"
Mar 13 10:54:10 crc kubenswrapper[4632]: I0313 10:54:10.542770 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tq6dw"
Mar 13 10:54:10 crc kubenswrapper[4632]: E0313 10:54:10.605504 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:54:11 crc kubenswrapper[4632]: I0313 10:54:11.559032 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d" exitCode=0
Mar 13 10:54:11 crc kubenswrapper[4632]: I0313 10:54:11.559088 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"}
Mar 13 10:54:11 crc kubenswrapper[4632]: I0313 10:54:11.559133 4632 scope.go:117] "RemoveContainer" containerID="a5ebd5748d892637db30e6f25b4cdb7397d5f5e2a1d221a622054fbf7f8b83f2"
Mar 13 10:54:11 crc kubenswrapper[4632]: I0313 10:54:11.559804 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:54:11 crc kubenswrapper[4632]: E0313 10:54:11.560175 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:54:25 crc kubenswrapper[4632]: I0313 10:54:25.044701 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:54:25 crc kubenswrapper[4632]: E0313 10:54:25.045661 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:54:37 crc kubenswrapper[4632]: I0313 10:54:37.044204 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:54:37 crc kubenswrapper[4632]: E0313 10:54:37.045543 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:54:47 crc kubenswrapper[4632]: E0313 10:54:47.523413 4632 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.182:57266->38.102.83.182:37465: write tcp 38.102.83.182:57266->38.102.83.182:37465: write: broken pipe
Mar 13 10:54:51 crc kubenswrapper[4632]: I0313 10:54:51.044715 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:54:51 crc kubenswrapper[4632]: E0313 10:54:51.046973 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:55:04 crc kubenswrapper[4632]: I0313 10:55:04.044706 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:55:04 crc kubenswrapper[4632]: E0313 10:55:04.045598 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:55:08 crc kubenswrapper[4632]: I0313 10:55:08.981923 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"]
Mar 13 10:55:08 crc kubenswrapper[4632]: E0313 10:55:08.982753 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5d59c2-ece8-4b66-9a10-c3ef740d7e45" containerName="oc"
Mar 13 10:55:08 crc kubenswrapper[4632]: I0313 10:55:08.982769 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5d59c2-ece8-4b66-9a10-c3ef740d7e45" containerName="oc"
Mar 13 10:55:08 crc kubenswrapper[4632]: E0313 10:55:08.982796 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4656b24f-4b10-481a-ba5b-1c17e5f2f7ef" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Mar 13 10:55:08 crc kubenswrapper[4632]: I0313 10:55:08.982804 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="4656b24f-4b10-481a-ba5b-1c17e5f2f7ef" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Mar 13 10:55:08 crc kubenswrapper[4632]: I0313 10:55:08.983031 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="4656b24f-4b10-481a-ba5b-1c17e5f2f7ef" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Mar 13 10:55:08 crc kubenswrapper[4632]: I0313 10:55:08.983050 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5d59c2-ece8-4b66-9a10-c3ef740d7e45" containerName="oc"
Mar 13 10:55:08 crc kubenswrapper[4632]: I0313 10:55:08.983706 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Mar 13 10:55:08 crc kubenswrapper[4632]: I0313 10:55:08.986672 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Mar 13 10:55:08 crc kubenswrapper[4632]: I0313 10:55:08.987220 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9w9qk"
Mar 13 10:55:08 crc kubenswrapper[4632]: I0313 10:55:08.988126 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Mar 13 10:55:08 crc kubenswrapper[4632]: I0313 10:55:08.988482 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.004326 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"]
Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.104519 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.104587 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a62e0eae-95dd-40a3-a489-80646fde4301-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.104610 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a62e0eae-95dd-40a3-a489-80646fde4301-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.104635 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w5vp\" (UniqueName: \"kubernetes.io/projected/a62e0eae-95dd-40a3-a489-80646fde4301-kube-api-access-8w5vp\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
\"kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.105300 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.105323 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.105351 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a62e0eae-95dd-40a3-a489-80646fde4301-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.105368 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a62e0eae-95dd-40a3-a489-80646fde4301-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.208257 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.208326 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.208387 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a62e0eae-95dd-40a3-a489-80646fde4301-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.208408 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a62e0eae-95dd-40a3-a489-80646fde4301-test-operator-ephemeral-workdir\") pod 
\"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.208629 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.208659 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a62e0eae-95dd-40a3-a489-80646fde4301-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.208710 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a62e0eae-95dd-40a3-a489-80646fde4301-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.208735 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w5vp\" (UniqueName: \"kubernetes.io/projected/a62e0eae-95dd-40a3-a489-80646fde4301-kube-api-access-8w5vp\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.208844 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.209127 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a62e0eae-95dd-40a3-a489-80646fde4301-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.209259 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a62e0eae-95dd-40a3-a489-80646fde4301-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.211315 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a62e0eae-95dd-40a3-a489-80646fde4301-openstack-config\") pod 
\"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.211994 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a62e0eae-95dd-40a3-a489-80646fde4301-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.212031 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.214519 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.214643 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.229451 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.231634 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w5vp\" (UniqueName: \"kubernetes.io/projected/a62e0eae-95dd-40a3-a489-80646fde4301-kube-api-access-8w5vp\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.257082 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.308473 4632 util.go:30] "No sandbox for pod can be found. 
Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.308473 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Mar 13 10:55:09 crc kubenswrapper[4632]: I0313 10:55:09.899269 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"]
Mar 13 10:55:10 crc kubenswrapper[4632]: I0313 10:55:10.132120 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"a62e0eae-95dd-40a3-a489-80646fde4301","Type":"ContainerStarted","Data":"959458908fe1f2c8aa4edafce9f9395e573f668491b9554e12daf71db7b5cc6a"}
Mar 13 10:55:15 crc kubenswrapper[4632]: I0313 10:55:15.044452 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:55:15 crc kubenswrapper[4632]: E0313 10:55:15.045204 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:55:28 crc kubenswrapper[4632]: I0313 10:55:28.050499 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:55:28 crc kubenswrapper[4632]: E0313 10:55:28.051327 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:55:39 crc kubenswrapper[4632]: I0313 10:55:39.044832 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:55:39 crc kubenswrapper[4632]: E0313 10:55:39.045624 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:55:48 crc kubenswrapper[4632]: E0313 10:55:48.152463 4632 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-tempest-all:e43235cb19da04699a53f42b6a75afe9"
Mar 13 10:55:48 crc kubenswrapper[4632]: E0313 10:55:48.153453 4632 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-tempest-all:e43235cb19da04699a53f42b6a75afe9"
&Container{Name:tempest-tests-tempest-tests-runner,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-tempest-all:e43235cb19da04699a53f42b6a75afe9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8w5vp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-multi-thread-testing_openstack(a62e0eae-95dd-40a3-a489-80646fde4301): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:55:48 crc kubenswrapper[4632]: E0313 10:55:48.157908 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="a62e0eae-95dd-40a3-a489-80646fde4301" Mar 13 10:55:48 crc kubenswrapper[4632]: E0313 10:55:48.576216 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-tempest-all:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="a62e0eae-95dd-40a3-a489-80646fde4301" Mar 13 10:55:52 crc kubenswrapper[4632]: I0313 10:55:52.044895 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d" Mar 13 10:55:52 crc kubenswrapper[4632]: E0313 10:55:52.045811 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:56:00 crc kubenswrapper[4632]: I0313 10:56:00.165313 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556656-vzb8p"] Mar 13 10:56:00 crc kubenswrapper[4632]: I0313 10:56:00.169904 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556656-vzb8p" Mar 13 10:56:00 crc kubenswrapper[4632]: I0313 10:56:00.172601 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 10:56:00 crc kubenswrapper[4632]: I0313 10:56:00.174108 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 10:56:00 crc kubenswrapper[4632]: I0313 10:56:00.174187 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 10:56:00 crc kubenswrapper[4632]: I0313 10:56:00.186672 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556656-vzb8p"] Mar 13 10:56:00 crc kubenswrapper[4632]: I0313 10:56:00.334744 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26vmr\" (UniqueName: \"kubernetes.io/projected/f70e2037-a5d6-4479-af7f-18fe8ff9e952-kube-api-access-26vmr\") pod \"auto-csr-approver-29556656-vzb8p\" (UID: \"f70e2037-a5d6-4479-af7f-18fe8ff9e952\") " pod="openshift-infra/auto-csr-approver-29556656-vzb8p" Mar 13 10:56:00 crc kubenswrapper[4632]: I0313 10:56:00.436886 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26vmr\" (UniqueName: \"kubernetes.io/projected/f70e2037-a5d6-4479-af7f-18fe8ff9e952-kube-api-access-26vmr\") pod \"auto-csr-approver-29556656-vzb8p\" (UID: \"f70e2037-a5d6-4479-af7f-18fe8ff9e952\") " pod="openshift-infra/auto-csr-approver-29556656-vzb8p" Mar 13 10:56:00 crc kubenswrapper[4632]: I0313 10:56:00.464435 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26vmr\" (UniqueName: \"kubernetes.io/projected/f70e2037-a5d6-4479-af7f-18fe8ff9e952-kube-api-access-26vmr\") pod \"auto-csr-approver-29556656-vzb8p\" (UID: \"f70e2037-a5d6-4479-af7f-18fe8ff9e952\") " 
pod="openshift-infra/auto-csr-approver-29556656-vzb8p" Mar 13 10:56:00 crc kubenswrapper[4632]: I0313 10:56:00.499673 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556656-vzb8p" Mar 13 10:56:01 crc kubenswrapper[4632]: I0313 10:56:01.064067 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556656-vzb8p"] Mar 13 10:56:01 crc kubenswrapper[4632]: I0313 10:56:01.707224 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556656-vzb8p" event={"ID":"f70e2037-a5d6-4479-af7f-18fe8ff9e952","Type":"ContainerStarted","Data":"461a346934711ed3bdfd4aa55ae2ec81a65d1c4197e9cf719d8b6ec477df66ce"} Mar 13 10:56:02 crc kubenswrapper[4632]: I0313 10:56:02.716603 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556656-vzb8p" event={"ID":"f70e2037-a5d6-4479-af7f-18fe8ff9e952","Type":"ContainerStarted","Data":"6dd075c6962fa13da67ea22e1c7e0f24f4fdd06a675abd3b301b6ea671a2f51e"} Mar 13 10:56:02 crc kubenswrapper[4632]: I0313 10:56:02.737331 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556656-vzb8p" podStartSLOduration=1.809424227 podStartE2EDuration="2.737312503s" podCreationTimestamp="2026-03-13 10:56:00 +0000 UTC" firstStartedPulling="2026-03-13 10:56:01.07056543 +0000 UTC m=+3135.093095563" lastFinishedPulling="2026-03-13 10:56:01.998453716 +0000 UTC m=+3136.020983839" observedRunningTime="2026-03-13 10:56:02.730576237 +0000 UTC m=+3136.753106390" watchObservedRunningTime="2026-03-13 10:56:02.737312503 +0000 UTC m=+3136.759842626" Mar 13 10:56:03 crc kubenswrapper[4632]: I0313 10:56:03.729259 4632 generic.go:334] "Generic (PLEG): container finished" podID="f70e2037-a5d6-4479-af7f-18fe8ff9e952" containerID="6dd075c6962fa13da67ea22e1c7e0f24f4fdd06a675abd3b301b6ea671a2f51e" exitCode=0 Mar 13 10:56:03 crc kubenswrapper[4632]: I0313 10:56:03.729330 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556656-vzb8p" event={"ID":"f70e2037-a5d6-4479-af7f-18fe8ff9e952","Type":"ContainerDied","Data":"6dd075c6962fa13da67ea22e1c7e0f24f4fdd06a675abd3b301b6ea671a2f51e"} Mar 13 10:56:04 crc kubenswrapper[4632]: I0313 10:56:04.115600 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 13 10:56:05 crc kubenswrapper[4632]: I0313 10:56:05.124321 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556656-vzb8p" Mar 13 10:56:05 crc kubenswrapper[4632]: I0313 10:56:05.249121 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26vmr\" (UniqueName: \"kubernetes.io/projected/f70e2037-a5d6-4479-af7f-18fe8ff9e952-kube-api-access-26vmr\") pod \"f70e2037-a5d6-4479-af7f-18fe8ff9e952\" (UID: \"f70e2037-a5d6-4479-af7f-18fe8ff9e952\") " Mar 13 10:56:05 crc kubenswrapper[4632]: I0313 10:56:05.256172 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f70e2037-a5d6-4479-af7f-18fe8ff9e952-kube-api-access-26vmr" (OuterVolumeSpecName: "kube-api-access-26vmr") pod "f70e2037-a5d6-4479-af7f-18fe8ff9e952" (UID: "f70e2037-a5d6-4479-af7f-18fe8ff9e952"). InnerVolumeSpecName "kube-api-access-26vmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:56:05 crc kubenswrapper[4632]: I0313 10:56:05.351770 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26vmr\" (UniqueName: \"kubernetes.io/projected/f70e2037-a5d6-4479-af7f-18fe8ff9e952-kube-api-access-26vmr\") on node \"crc\" DevicePath \"\"" Mar 13 10:56:05 crc kubenswrapper[4632]: I0313 10:56:05.749840 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556656-vzb8p" event={"ID":"f70e2037-a5d6-4479-af7f-18fe8ff9e952","Type":"ContainerDied","Data":"461a346934711ed3bdfd4aa55ae2ec81a65d1c4197e9cf719d8b6ec477df66ce"} Mar 13 10:56:05 crc kubenswrapper[4632]: I0313 10:56:05.750147 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="461a346934711ed3bdfd4aa55ae2ec81a65d1c4197e9cf719d8b6ec477df66ce" Mar 13 10:56:05 crc kubenswrapper[4632]: I0313 10:56:05.750073 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556656-vzb8p" Mar 13 10:56:05 crc kubenswrapper[4632]: I0313 10:56:05.752467 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"a62e0eae-95dd-40a3-a489-80646fde4301","Type":"ContainerStarted","Data":"ec15c016ac8280363b8fb347025993466f5b7492f2d0ac470ef8fc423974c0e2"} Mar 13 10:56:05 crc kubenswrapper[4632]: I0313 10:56:05.788026 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podStartSLOduration=4.587990695 podStartE2EDuration="58.787996943s" podCreationTimestamp="2026-03-13 10:55:07 +0000 UTC" firstStartedPulling="2026-03-13 10:55:09.913419149 +0000 UTC m=+3083.935949282" lastFinishedPulling="2026-03-13 10:56:04.113425397 +0000 UTC m=+3138.135955530" observedRunningTime="2026-03-13 10:56:05.775993357 +0000 UTC m=+3139.798523500" watchObservedRunningTime="2026-03-13 10:56:05.787996943 +0000 UTC m=+3139.810527096" Mar 13 10:56:05 crc kubenswrapper[4632]: I0313 10:56:05.816928 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556650-rskjc"] Mar 13 10:56:05 crc kubenswrapper[4632]: I0313 10:56:05.824956 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556650-rskjc"] Mar 13 10:56:06 crc kubenswrapper[4632]: I0313 10:56:06.044888 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d" Mar 13 10:56:06 crc kubenswrapper[4632]: E0313 10:56:06.045191 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:56:06 crc kubenswrapper[4632]: I0313 10:56:06.056508 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc8dd4ae-e21e-4155-b617-19c85512d4fe" path="/var/lib/kubelet/pods/cc8dd4ae-e21e-4155-b617-19c85512d4fe/volumes" Mar 13 10:56:10 crc kubenswrapper[4632]: I0313 10:56:10.380595 4632 scope.go:117] "RemoveContainer" containerID="10fbcfced1ba7ba66a4ba615aa3b2aab72091e177631977720b60aac13ae9d0f" Mar 13 10:56:17 crc kubenswrapper[4632]: I0313 
Mar 13 10:56:17 crc kubenswrapper[4632]: E0313 10:56:17.045223 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:56:31 crc kubenswrapper[4632]: I0313 10:56:31.045629 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:56:31 crc kubenswrapper[4632]: E0313 10:56:31.046423 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:56:45 crc kubenswrapper[4632]: I0313 10:56:45.045908 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:56:45 crc kubenswrapper[4632]: E0313 10:56:45.047060 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:56:58 crc kubenswrapper[4632]: I0313 10:56:58.050354 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:56:58 crc kubenswrapper[4632]: E0313 10:56:58.051413 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:57:09 crc kubenswrapper[4632]: I0313 10:57:09.044137 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:57:09 crc kubenswrapper[4632]: E0313 10:57:09.045806 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:57:24 crc kubenswrapper[4632]: I0313 10:57:24.044226 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:57:24 crc kubenswrapper[4632]: E0313 10:57:24.044920 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:57:35 crc kubenswrapper[4632]: I0313 10:57:35.044492 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:57:35 crc kubenswrapper[4632]: E0313 10:57:35.045320 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:57:47 crc kubenswrapper[4632]: I0313 10:57:47.044518 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:57:47 crc kubenswrapper[4632]: E0313 10:57:47.045675 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:57:54 crc kubenswrapper[4632]: I0313 10:57:54.535540 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fmml7"]
Mar 13 10:57:54 crc kubenswrapper[4632]: E0313 10:57:54.551066 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f70e2037-a5d6-4479-af7f-18fe8ff9e952" containerName="oc"
Mar 13 10:57:54 crc kubenswrapper[4632]: I0313 10:57:54.551895 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f70e2037-a5d6-4479-af7f-18fe8ff9e952" containerName="oc"
Mar 13 10:57:54 crc kubenswrapper[4632]: I0313 10:57:54.556673 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f70e2037-a5d6-4479-af7f-18fe8ff9e952" containerName="oc"
Mar 13 10:57:54 crc kubenswrapper[4632]: I0313 10:57:54.560429 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fmml7"
Mar 13 10:57:54 crc kubenswrapper[4632]: I0313 10:57:54.664881 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c91e27-6596-4471-81e9-4a65e55379cc-utilities\") pod \"certified-operators-fmml7\" (UID: \"d7c91e27-6596-4471-81e9-4a65e55379cc\") " pod="openshift-marketplace/certified-operators-fmml7"
Mar 13 10:57:54 crc kubenswrapper[4632]: I0313 10:57:54.665210 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c91e27-6596-4471-81e9-4a65e55379cc-catalog-content\") pod \"certified-operators-fmml7\" (UID: \"d7c91e27-6596-4471-81e9-4a65e55379cc\") " pod="openshift-marketplace/certified-operators-fmml7"
Mar 13 10:57:54 crc kubenswrapper[4632]: I0313 10:57:54.665347 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzvbq\" (UniqueName: \"kubernetes.io/projected/d7c91e27-6596-4471-81e9-4a65e55379cc-kube-api-access-hzvbq\") pod \"certified-operators-fmml7\" (UID: \"d7c91e27-6596-4471-81e9-4a65e55379cc\") " pod="openshift-marketplace/certified-operators-fmml7"
Mar 13 10:57:54 crc kubenswrapper[4632]: I0313 10:57:54.699096 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fmml7"]
Mar 13 10:57:54 crc kubenswrapper[4632]: I0313 10:57:54.767614 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c91e27-6596-4471-81e9-4a65e55379cc-utilities\") pod \"certified-operators-fmml7\" (UID: \"d7c91e27-6596-4471-81e9-4a65e55379cc\") " pod="openshift-marketplace/certified-operators-fmml7"
Mar 13 10:57:54 crc kubenswrapper[4632]: I0313 10:57:54.767887 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c91e27-6596-4471-81e9-4a65e55379cc-catalog-content\") pod \"certified-operators-fmml7\" (UID: \"d7c91e27-6596-4471-81e9-4a65e55379cc\") " pod="openshift-marketplace/certified-operators-fmml7"
Mar 13 10:57:54 crc kubenswrapper[4632]: I0313 10:57:54.767926 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzvbq\" (UniqueName: \"kubernetes.io/projected/d7c91e27-6596-4471-81e9-4a65e55379cc-kube-api-access-hzvbq\") pod \"certified-operators-fmml7\" (UID: \"d7c91e27-6596-4471-81e9-4a65e55379cc\") " pod="openshift-marketplace/certified-operators-fmml7"
Mar 13 10:57:54 crc kubenswrapper[4632]: I0313 10:57:54.774245 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c91e27-6596-4471-81e9-4a65e55379cc-utilities\") pod \"certified-operators-fmml7\" (UID: \"d7c91e27-6596-4471-81e9-4a65e55379cc\") " pod="openshift-marketplace/certified-operators-fmml7"
Mar 13 10:57:54 crc kubenswrapper[4632]: I0313 10:57:54.776025 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c91e27-6596-4471-81e9-4a65e55379cc-catalog-content\") pod \"certified-operators-fmml7\" (UID: \"d7c91e27-6596-4471-81e9-4a65e55379cc\") " pod="openshift-marketplace/certified-operators-fmml7"
Mar 13 10:57:54 crc kubenswrapper[4632]: I0313 10:57:54.800043 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzvbq\" (UniqueName: \"kubernetes.io/projected/d7c91e27-6596-4471-81e9-4a65e55379cc-kube-api-access-hzvbq\") pod \"certified-operators-fmml7\" (UID: \"d7c91e27-6596-4471-81e9-4a65e55379cc\") " pod="openshift-marketplace/certified-operators-fmml7"
"MountVolume.SetUp succeeded for volume \"kube-api-access-hzvbq\" (UniqueName: \"kubernetes.io/projected/d7c91e27-6596-4471-81e9-4a65e55379cc-kube-api-access-hzvbq\") pod \"certified-operators-fmml7\" (UID: \"d7c91e27-6596-4471-81e9-4a65e55379cc\") " pod="openshift-marketplace/certified-operators-fmml7" Mar 13 10:57:54 crc kubenswrapper[4632]: I0313 10:57:54.890858 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fmml7" Mar 13 10:57:56 crc kubenswrapper[4632]: I0313 10:57:56.073402 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fmml7"] Mar 13 10:57:56 crc kubenswrapper[4632]: I0313 10:57:56.814466 4632 generic.go:334] "Generic (PLEG): container finished" podID="d7c91e27-6596-4471-81e9-4a65e55379cc" containerID="145a4083a5a5cb7cab6b80318c7e624a2791db3949fe0566ce5173ba6c6e5bc8" exitCode=0 Mar 13 10:57:56 crc kubenswrapper[4632]: I0313 10:57:56.814580 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmml7" event={"ID":"d7c91e27-6596-4471-81e9-4a65e55379cc","Type":"ContainerDied","Data":"145a4083a5a5cb7cab6b80318c7e624a2791db3949fe0566ce5173ba6c6e5bc8"} Mar 13 10:57:56 crc kubenswrapper[4632]: I0313 10:57:56.814838 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmml7" event={"ID":"d7c91e27-6596-4471-81e9-4a65e55379cc","Type":"ContainerStarted","Data":"4dd1def3762efd7a2c7321b10db0c95910e7988e1bb090633291fc5c968853b7"} Mar 13 10:57:56 crc kubenswrapper[4632]: I0313 10:57:56.822255 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 10:57:57 crc kubenswrapper[4632]: I0313 10:57:57.824576 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmml7" event={"ID":"d7c91e27-6596-4471-81e9-4a65e55379cc","Type":"ContainerStarted","Data":"8d0330408b68a3dc8f003bcb5a971e2a87e41ec5aadc0fa59d29938015321612"} Mar 13 10:57:59 crc kubenswrapper[4632]: I0313 10:57:59.044568 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d" Mar 13 10:57:59 crc kubenswrapper[4632]: E0313 10:57:59.045110 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:58:00 crc kubenswrapper[4632]: I0313 10:58:00.207091 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556658-mbpfd"] Mar 13 10:58:00 crc kubenswrapper[4632]: I0313 10:58:00.208478 4632 util.go:30] "No sandbox for pod can be found. 
Mar 13 10:58:00 crc kubenswrapper[4632]: I0313 10:58:00.213524 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 10:58:00 crc kubenswrapper[4632]: I0313 10:58:00.213907 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 10:58:00 crc kubenswrapper[4632]: I0313 10:58:00.214509 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 10:58:00 crc kubenswrapper[4632]: I0313 10:58:00.220060 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556658-mbpfd"]
Mar 13 10:58:00 crc kubenswrapper[4632]: I0313 10:58:00.277271 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-984mt\" (UniqueName: \"kubernetes.io/projected/4db08ac3-f768-407d-a321-ed9032c5c015-kube-api-access-984mt\") pod \"auto-csr-approver-29556658-mbpfd\" (UID: \"4db08ac3-f768-407d-a321-ed9032c5c015\") " pod="openshift-infra/auto-csr-approver-29556658-mbpfd"
Mar 13 10:58:00 crc kubenswrapper[4632]: I0313 10:58:00.379085 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-984mt\" (UniqueName: \"kubernetes.io/projected/4db08ac3-f768-407d-a321-ed9032c5c015-kube-api-access-984mt\") pod \"auto-csr-approver-29556658-mbpfd\" (UID: \"4db08ac3-f768-407d-a321-ed9032c5c015\") " pod="openshift-infra/auto-csr-approver-29556658-mbpfd"
Mar 13 10:58:00 crc kubenswrapper[4632]: I0313 10:58:00.406813 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-984mt\" (UniqueName: \"kubernetes.io/projected/4db08ac3-f768-407d-a321-ed9032c5c015-kube-api-access-984mt\") pod \"auto-csr-approver-29556658-mbpfd\" (UID: \"4db08ac3-f768-407d-a321-ed9032c5c015\") " pod="openshift-infra/auto-csr-approver-29556658-mbpfd"
Mar 13 10:58:00 crc kubenswrapper[4632]: I0313 10:58:00.564490 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556658-mbpfd"
Mar 13 10:58:00 crc kubenswrapper[4632]: I0313 10:58:00.876894 4632 generic.go:334] "Generic (PLEG): container finished" podID="d7c91e27-6596-4471-81e9-4a65e55379cc" containerID="8d0330408b68a3dc8f003bcb5a971e2a87e41ec5aadc0fa59d29938015321612" exitCode=0
Mar 13 10:58:00 crc kubenswrapper[4632]: I0313 10:58:00.877048 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmml7" event={"ID":"d7c91e27-6596-4471-81e9-4a65e55379cc","Type":"ContainerDied","Data":"8d0330408b68a3dc8f003bcb5a971e2a87e41ec5aadc0fa59d29938015321612"}
Mar 13 10:58:01 crc kubenswrapper[4632]: I0313 10:58:01.152842 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556658-mbpfd"]
Mar 13 10:58:01 crc kubenswrapper[4632]: I0313 10:58:01.900854 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmml7" event={"ID":"d7c91e27-6596-4471-81e9-4a65e55379cc","Type":"ContainerStarted","Data":"d9aa3193c9ba513408053eae5959957c7f7db68e45389eba91fd0f384d6e744b"}
Mar 13 10:58:01 crc kubenswrapper[4632]: I0313 10:58:01.902910 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556658-mbpfd" event={"ID":"4db08ac3-f768-407d-a321-ed9032c5c015","Type":"ContainerStarted","Data":"1ad1e2f33bb0810479cdd95e85cc645315ff0eeeea2f7b9d25a8d8a7b46bcceb"}
Mar 13 10:58:01 crc kubenswrapper[4632]: I0313 10:58:01.969879 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fmml7" podStartSLOduration=3.414099043 podStartE2EDuration="7.969846258s" podCreationTimestamp="2026-03-13 10:57:54 +0000 UTC" firstStartedPulling="2026-03-13 10:57:56.816925341 +0000 UTC m=+3250.839455474" lastFinishedPulling="2026-03-13 10:58:01.372672556 +0000 UTC m=+3255.395202689" observedRunningTime="2026-03-13 10:58:01.930731526 +0000 UTC m=+3255.953261669" watchObservedRunningTime="2026-03-13 10:58:01.969846258 +0000 UTC m=+3255.992376391"
Mar 13 10:58:03 crc kubenswrapper[4632]: I0313 10:58:03.926826 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556658-mbpfd" event={"ID":"4db08ac3-f768-407d-a321-ed9032c5c015","Type":"ContainerStarted","Data":"6a752c085ec4dd2121b36385f753ab45221d95dd428ca910155d9e3c585e4dbc"}
Mar 13 10:58:04 crc kubenswrapper[4632]: I0313 10:58:04.892347 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fmml7"
Mar 13 10:58:04 crc kubenswrapper[4632]: I0313 10:58:04.892844 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fmml7"
Mar 13 10:58:04 crc kubenswrapper[4632]: I0313 10:58:04.939113 4632 generic.go:334] "Generic (PLEG): container finished" podID="4db08ac3-f768-407d-a321-ed9032c5c015" containerID="6a752c085ec4dd2121b36385f753ab45221d95dd428ca910155d9e3c585e4dbc" exitCode=0
Mar 13 10:58:04 crc kubenswrapper[4632]: I0313 10:58:04.941335 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556658-mbpfd" event={"ID":"4db08ac3-f768-407d-a321-ed9032c5c015","Type":"ContainerDied","Data":"6a752c085ec4dd2121b36385f753ab45221d95dd428ca910155d9e3c585e4dbc"}
Mar 13 10:58:05 crc kubenswrapper[4632]: I0313 10:58:05.948375 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fmml7" podUID="d7c91e27-6596-4471-81e9-4a65e55379cc" containerName="registry-server" probeResult="failure" output=<
pod="openshift-marketplace/certified-operators-fmml7" podUID="d7c91e27-6596-4471-81e9-4a65e55379cc" containerName="registry-server" probeResult="failure" output=< Mar 13 10:58:05 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 10:58:05 crc kubenswrapper[4632]: > Mar 13 10:58:06 crc kubenswrapper[4632]: I0313 10:58:06.566406 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556658-mbpfd" Mar 13 10:58:06 crc kubenswrapper[4632]: I0313 10:58:06.639361 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-984mt\" (UniqueName: \"kubernetes.io/projected/4db08ac3-f768-407d-a321-ed9032c5c015-kube-api-access-984mt\") pod \"4db08ac3-f768-407d-a321-ed9032c5c015\" (UID: \"4db08ac3-f768-407d-a321-ed9032c5c015\") " Mar 13 10:58:06 crc kubenswrapper[4632]: I0313 10:58:06.663166 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4db08ac3-f768-407d-a321-ed9032c5c015-kube-api-access-984mt" (OuterVolumeSpecName: "kube-api-access-984mt") pod "4db08ac3-f768-407d-a321-ed9032c5c015" (UID: "4db08ac3-f768-407d-a321-ed9032c5c015"). InnerVolumeSpecName "kube-api-access-984mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:58:06 crc kubenswrapper[4632]: I0313 10:58:06.743923 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-984mt\" (UniqueName: \"kubernetes.io/projected/4db08ac3-f768-407d-a321-ed9032c5c015-kube-api-access-984mt\") on node \"crc\" DevicePath \"\"" Mar 13 10:58:06 crc kubenswrapper[4632]: I0313 10:58:06.959506 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556658-mbpfd" event={"ID":"4db08ac3-f768-407d-a321-ed9032c5c015","Type":"ContainerDied","Data":"1ad1e2f33bb0810479cdd95e85cc645315ff0eeeea2f7b9d25a8d8a7b46bcceb"} Mar 13 10:58:06 crc kubenswrapper[4632]: I0313 10:58:06.959548 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ad1e2f33bb0810479cdd95e85cc645315ff0eeeea2f7b9d25a8d8a7b46bcceb" Mar 13 10:58:06 crc kubenswrapper[4632]: I0313 10:58:06.959596 4632 util.go:48] "No ready sandbox for pod can be found. 
Mar 13 10:58:07 crc kubenswrapper[4632]: I0313 10:58:07.095769 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556652-ghswk"]
Mar 13 10:58:07 crc kubenswrapper[4632]: I0313 10:58:07.108682 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556652-ghswk"]
Mar 13 10:58:08 crc kubenswrapper[4632]: I0313 10:58:08.063035 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27a43bdf-8be4-458b-99ff-4135a684962a" path="/var/lib/kubelet/pods/27a43bdf-8be4-458b-99ff-4135a684962a/volumes"
Mar 13 10:58:10 crc kubenswrapper[4632]: I0313 10:58:10.045138 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:58:10 crc kubenswrapper[4632]: E0313 10:58:10.049139 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:58:10 crc kubenswrapper[4632]: I0313 10:58:10.501875 4632 scope.go:117] "RemoveContainer" containerID="5c30aacc6e4fc680fbb7e668912e806d2164d1cac4b7e0e4c5e8b4688d3e76cd"
Mar 13 10:58:12 crc kubenswrapper[4632]: I0313 10:58:12.968807 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q9g65"]
Mar 13 10:58:12 crc kubenswrapper[4632]: E0313 10:58:12.969496 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db08ac3-f768-407d-a321-ed9032c5c015" containerName="oc"
Mar 13 10:58:12 crc kubenswrapper[4632]: I0313 10:58:12.969510 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db08ac3-f768-407d-a321-ed9032c5c015" containerName="oc"
Mar 13 10:58:12 crc kubenswrapper[4632]: I0313 10:58:12.969721 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="4db08ac3-f768-407d-a321-ed9032c5c015" containerName="oc"
Mar 13 10:58:12 crc kubenswrapper[4632]: I0313 10:58:12.971102 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9g65"
Mar 13 10:58:13 crc kubenswrapper[4632]: I0313 10:58:13.011331 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9g65"]
Mar 13 10:58:13 crc kubenswrapper[4632]: I0313 10:58:13.090258 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48d9de0e-efea-443b-89a7-e02d3264020f-utilities\") pod \"redhat-marketplace-q9g65\" (UID: \"48d9de0e-efea-443b-89a7-e02d3264020f\") " pod="openshift-marketplace/redhat-marketplace-q9g65"
Mar 13 10:58:13 crc kubenswrapper[4632]: I0313 10:58:13.090351 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-459zd\" (UniqueName: \"kubernetes.io/projected/48d9de0e-efea-443b-89a7-e02d3264020f-kube-api-access-459zd\") pod \"redhat-marketplace-q9g65\" (UID: \"48d9de0e-efea-443b-89a7-e02d3264020f\") " pod="openshift-marketplace/redhat-marketplace-q9g65"
Mar 13 10:58:13 crc kubenswrapper[4632]: I0313 10:58:13.090391 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48d9de0e-efea-443b-89a7-e02d3264020f-catalog-content\") pod \"redhat-marketplace-q9g65\" (UID: \"48d9de0e-efea-443b-89a7-e02d3264020f\") " pod="openshift-marketplace/redhat-marketplace-q9g65"
Mar 13 10:58:13 crc kubenswrapper[4632]: I0313 10:58:13.192843 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48d9de0e-efea-443b-89a7-e02d3264020f-utilities\") pod \"redhat-marketplace-q9g65\" (UID: \"48d9de0e-efea-443b-89a7-e02d3264020f\") " pod="openshift-marketplace/redhat-marketplace-q9g65"
Mar 13 10:58:13 crc kubenswrapper[4632]: I0313 10:58:13.192995 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-459zd\" (UniqueName: \"kubernetes.io/projected/48d9de0e-efea-443b-89a7-e02d3264020f-kube-api-access-459zd\") pod \"redhat-marketplace-q9g65\" (UID: \"48d9de0e-efea-443b-89a7-e02d3264020f\") " pod="openshift-marketplace/redhat-marketplace-q9g65"
Mar 13 10:58:13 crc kubenswrapper[4632]: I0313 10:58:13.193069 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48d9de0e-efea-443b-89a7-e02d3264020f-catalog-content\") pod \"redhat-marketplace-q9g65\" (UID: \"48d9de0e-efea-443b-89a7-e02d3264020f\") " pod="openshift-marketplace/redhat-marketplace-q9g65"
Mar 13 10:58:13 crc kubenswrapper[4632]: I0313 10:58:13.193616 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48d9de0e-efea-443b-89a7-e02d3264020f-catalog-content\") pod \"redhat-marketplace-q9g65\" (UID: \"48d9de0e-efea-443b-89a7-e02d3264020f\") " pod="openshift-marketplace/redhat-marketplace-q9g65"
Mar 13 10:58:13 crc kubenswrapper[4632]: I0313 10:58:13.193633 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48d9de0e-efea-443b-89a7-e02d3264020f-utilities\") pod \"redhat-marketplace-q9g65\" (UID: \"48d9de0e-efea-443b-89a7-e02d3264020f\") " pod="openshift-marketplace/redhat-marketplace-q9g65"
Mar 13 10:58:13 crc kubenswrapper[4632]: I0313 10:58:13.223304 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-459zd\" (UniqueName: \"kubernetes.io/projected/48d9de0e-efea-443b-89a7-e02d3264020f-kube-api-access-459zd\") pod \"redhat-marketplace-q9g65\" (UID: \"48d9de0e-efea-443b-89a7-e02d3264020f\") " pod="openshift-marketplace/redhat-marketplace-q9g65"
Mar 13 10:58:13 crc kubenswrapper[4632]: I0313 10:58:13.296634 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9g65"
Mar 13 10:58:14 crc kubenswrapper[4632]: I0313 10:58:14.070091 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9g65"]
Mar 13 10:58:15 crc kubenswrapper[4632]: I0313 10:58:15.064678 4632 generic.go:334] "Generic (PLEG): container finished" podID="48d9de0e-efea-443b-89a7-e02d3264020f" containerID="89e85aed0ace971ea27aa6e4381a5b9c378475b156c657d5f7f64cf10fffb7e7" exitCode=0
Mar 13 10:58:15 crc kubenswrapper[4632]: I0313 10:58:15.064760 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9g65" event={"ID":"48d9de0e-efea-443b-89a7-e02d3264020f","Type":"ContainerDied","Data":"89e85aed0ace971ea27aa6e4381a5b9c378475b156c657d5f7f64cf10fffb7e7"}
Mar 13 10:58:15 crc kubenswrapper[4632]: I0313 10:58:15.064961 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9g65" event={"ID":"48d9de0e-efea-443b-89a7-e02d3264020f","Type":"ContainerStarted","Data":"791d787bc94f7eeb2e183a040f421e97af028bfea047785789f3496908b775af"}
Mar 13 10:58:15 crc kubenswrapper[4632]: I0313 10:58:15.952076 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fmml7" podUID="d7c91e27-6596-4471-81e9-4a65e55379cc" containerName="registry-server" probeResult="failure" output=<
Mar 13 10:58:15 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 10:58:15 crc kubenswrapper[4632]: >
Mar 13 10:58:16 crc kubenswrapper[4632]: I0313 10:58:16.086044 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9g65" event={"ID":"48d9de0e-efea-443b-89a7-e02d3264020f","Type":"ContainerStarted","Data":"e3f2fccdad049285eb344aa03ec4c45ce2e653e042872421419872537e65b0ce"}
Mar 13 10:58:18 crc kubenswrapper[4632]: I0313 10:58:18.104682 4632 generic.go:334] "Generic (PLEG): container finished" podID="48d9de0e-efea-443b-89a7-e02d3264020f" containerID="e3f2fccdad049285eb344aa03ec4c45ce2e653e042872421419872537e65b0ce" exitCode=0
Mar 13 10:58:18 crc kubenswrapper[4632]: I0313 10:58:18.104746 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9g65" event={"ID":"48d9de0e-efea-443b-89a7-e02d3264020f","Type":"ContainerDied","Data":"e3f2fccdad049285eb344aa03ec4c45ce2e653e042872421419872537e65b0ce"}
Mar 13 10:58:20 crc kubenswrapper[4632]: I0313 10:58:20.126988 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9g65" event={"ID":"48d9de0e-efea-443b-89a7-e02d3264020f","Type":"ContainerStarted","Data":"a35447b594c379c0f9e9337a36f14fcb40355b64f78e7eff5cb0f3ce866933cc"}
Mar 13 10:58:20 crc kubenswrapper[4632]: I0313 10:58:20.152932 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q9g65" podStartSLOduration=4.423860649 podStartE2EDuration="8.152907296s" podCreationTimestamp="2026-03-13 10:58:12 +0000 UTC" firstStartedPulling="2026-03-13 10:58:15.067169904 +0000 UTC m=+3269.089700037" lastFinishedPulling="2026-03-13 10:58:18.796216551 +0000 UTC m=+3272.818746684" observedRunningTime="2026-03-13 10:58:20.146087079 +0000 UTC m=+3274.168617212" watchObservedRunningTime="2026-03-13 10:58:20.152907296 +0000 UTC m=+3274.175437439"
Mar 13 10:58:21 crc kubenswrapper[4632]: I0313 10:58:21.045206 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:58:21 crc kubenswrapper[4632]: E0313 10:58:21.045590 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:58:23 crc kubenswrapper[4632]: I0313 10:58:23.296723 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q9g65"
Mar 13 10:58:23 crc kubenswrapper[4632]: I0313 10:58:23.298204 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q9g65"
Mar 13 10:58:24 crc kubenswrapper[4632]: I0313 10:58:24.498462 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-q9g65" podUID="48d9de0e-efea-443b-89a7-e02d3264020f" containerName="registry-server" probeResult="failure" output=<
Mar 13 10:58:24 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 10:58:24 crc kubenswrapper[4632]: >
Mar 13 10:58:24 crc kubenswrapper[4632]: I0313 10:58:24.948306 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fmml7"
Mar 13 10:58:25 crc kubenswrapper[4632]: I0313 10:58:25.004848 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fmml7"
Mar 13 10:58:25 crc kubenswrapper[4632]: I0313 10:58:25.740099 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fmml7"]
Mar 13 10:58:26 crc kubenswrapper[4632]: I0313 10:58:26.174185 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fmml7" podUID="d7c91e27-6596-4471-81e9-4a65e55379cc" containerName="registry-server" containerID="cri-o://d9aa3193c9ba513408053eae5959957c7f7db68e45389eba91fd0f384d6e744b" gracePeriod=2
Mar 13 10:58:27 crc kubenswrapper[4632]: I0313 10:58:27.195652 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmml7" event={"ID":"d7c91e27-6596-4471-81e9-4a65e55379cc","Type":"ContainerDied","Data":"d9aa3193c9ba513408053eae5959957c7f7db68e45389eba91fd0f384d6e744b"}
Mar 13 10:58:27 crc kubenswrapper[4632]: I0313 10:58:27.195492 4632 generic.go:334] "Generic (PLEG): container finished" podID="d7c91e27-6596-4471-81e9-4a65e55379cc" containerID="d9aa3193c9ba513408053eae5959957c7f7db68e45389eba91fd0f384d6e744b" exitCode=0
Mar 13 10:58:27 crc kubenswrapper[4632]: I0313 10:58:27.404186 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fmml7"
Mar 13 10:58:27 crc kubenswrapper[4632]: I0313 10:58:27.605114 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c91e27-6596-4471-81e9-4a65e55379cc-utilities\") pod \"d7c91e27-6596-4471-81e9-4a65e55379cc\" (UID: \"d7c91e27-6596-4471-81e9-4a65e55379cc\") "
Mar 13 10:58:27 crc kubenswrapper[4632]: I0313 10:58:27.605201 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c91e27-6596-4471-81e9-4a65e55379cc-catalog-content\") pod \"d7c91e27-6596-4471-81e9-4a65e55379cc\" (UID: \"d7c91e27-6596-4471-81e9-4a65e55379cc\") "
Mar 13 10:58:27 crc kubenswrapper[4632]: I0313 10:58:27.605243 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzvbq\" (UniqueName: \"kubernetes.io/projected/d7c91e27-6596-4471-81e9-4a65e55379cc-kube-api-access-hzvbq\") pod \"d7c91e27-6596-4471-81e9-4a65e55379cc\" (UID: \"d7c91e27-6596-4471-81e9-4a65e55379cc\") "
Mar 13 10:58:27 crc kubenswrapper[4632]: I0313 10:58:27.609572 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7c91e27-6596-4471-81e9-4a65e55379cc-utilities" (OuterVolumeSpecName: "utilities") pod "d7c91e27-6596-4471-81e9-4a65e55379cc" (UID: "d7c91e27-6596-4471-81e9-4a65e55379cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:58:27 crc kubenswrapper[4632]: I0313 10:58:27.652409 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c91e27-6596-4471-81e9-4a65e55379cc-kube-api-access-hzvbq" (OuterVolumeSpecName: "kube-api-access-hzvbq") pod "d7c91e27-6596-4471-81e9-4a65e55379cc" (UID: "d7c91e27-6596-4471-81e9-4a65e55379cc"). InnerVolumeSpecName "kube-api-access-hzvbq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:58:27 crc kubenswrapper[4632]: I0313 10:58:27.710081 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c91e27-6596-4471-81e9-4a65e55379cc-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 10:58:27 crc kubenswrapper[4632]: I0313 10:58:27.710127 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzvbq\" (UniqueName: \"kubernetes.io/projected/d7c91e27-6596-4471-81e9-4a65e55379cc-kube-api-access-hzvbq\") on node \"crc\" DevicePath \"\""
Mar 13 10:58:27 crc kubenswrapper[4632]: I0313 10:58:27.876387 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7c91e27-6596-4471-81e9-4a65e55379cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7c91e27-6596-4471-81e9-4a65e55379cc" (UID: "d7c91e27-6596-4471-81e9-4a65e55379cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:58:27 crc kubenswrapper[4632]: I0313 10:58:27.921024 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c91e27-6596-4471-81e9-4a65e55379cc-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:58:28 crc kubenswrapper[4632]: I0313 10:58:28.212643 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmml7" event={"ID":"d7c91e27-6596-4471-81e9-4a65e55379cc","Type":"ContainerDied","Data":"4dd1def3762efd7a2c7321b10db0c95910e7988e1bb090633291fc5c968853b7"} Mar 13 10:58:28 crc kubenswrapper[4632]: I0313 10:58:28.212708 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fmml7" Mar 13 10:58:28 crc kubenswrapper[4632]: I0313 10:58:28.212739 4632 scope.go:117] "RemoveContainer" containerID="d9aa3193c9ba513408053eae5959957c7f7db68e45389eba91fd0f384d6e744b" Mar 13 10:58:28 crc kubenswrapper[4632]: I0313 10:58:28.247987 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fmml7"] Mar 13 10:58:28 crc kubenswrapper[4632]: I0313 10:58:28.258080 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fmml7"] Mar 13 10:58:28 crc kubenswrapper[4632]: I0313 10:58:28.266592 4632 scope.go:117] "RemoveContainer" containerID="8d0330408b68a3dc8f003bcb5a971e2a87e41ec5aadc0fa59d29938015321612" Mar 13 10:58:28 crc kubenswrapper[4632]: I0313 10:58:28.304283 4632 scope.go:117] "RemoveContainer" containerID="145a4083a5a5cb7cab6b80318c7e624a2791db3949fe0566ce5173ba6c6e5bc8" Mar 13 10:58:30 crc kubenswrapper[4632]: I0313 10:58:30.054621 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7c91e27-6596-4471-81e9-4a65e55379cc" path="/var/lib/kubelet/pods/d7c91e27-6596-4471-81e9-4a65e55379cc/volumes" Mar 13 10:58:33 crc kubenswrapper[4632]: I0313 10:58:33.373883 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q9g65" Mar 13 10:58:33 crc kubenswrapper[4632]: I0313 10:58:33.430882 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q9g65" Mar 13 10:58:33 crc kubenswrapper[4632]: I0313 10:58:33.619414 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9g65"] Mar 13 10:58:35 crc kubenswrapper[4632]: I0313 10:58:35.288041 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q9g65" podUID="48d9de0e-efea-443b-89a7-e02d3264020f" containerName="registry-server" containerID="cri-o://a35447b594c379c0f9e9337a36f14fcb40355b64f78e7eff5cb0f3ce866933cc" gracePeriod=2 Mar 13 10:58:35 crc kubenswrapper[4632]: I0313 10:58:35.940288 4632 util.go:48] "No ready sandbox for pod can be found. 
Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.045025 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:58:36 crc kubenswrapper[4632]: E0313 10:58:36.045380 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.069877 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-459zd\" (UniqueName: \"kubernetes.io/projected/48d9de0e-efea-443b-89a7-e02d3264020f-kube-api-access-459zd\") pod \"48d9de0e-efea-443b-89a7-e02d3264020f\" (UID: \"48d9de0e-efea-443b-89a7-e02d3264020f\") "
Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.070141 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48d9de0e-efea-443b-89a7-e02d3264020f-utilities\") pod \"48d9de0e-efea-443b-89a7-e02d3264020f\" (UID: \"48d9de0e-efea-443b-89a7-e02d3264020f\") "
Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.070227 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48d9de0e-efea-443b-89a7-e02d3264020f-catalog-content\") pod \"48d9de0e-efea-443b-89a7-e02d3264020f\" (UID: \"48d9de0e-efea-443b-89a7-e02d3264020f\") "
Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.070740 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48d9de0e-efea-443b-89a7-e02d3264020f-utilities" (OuterVolumeSpecName: "utilities") pod "48d9de0e-efea-443b-89a7-e02d3264020f" (UID: "48d9de0e-efea-443b-89a7-e02d3264020f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.079497 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48d9de0e-efea-443b-89a7-e02d3264020f-kube-api-access-459zd" (OuterVolumeSpecName: "kube-api-access-459zd") pod "48d9de0e-efea-443b-89a7-e02d3264020f" (UID: "48d9de0e-efea-443b-89a7-e02d3264020f"). InnerVolumeSpecName "kube-api-access-459zd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.117683 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48d9de0e-efea-443b-89a7-e02d3264020f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48d9de0e-efea-443b-89a7-e02d3264020f" (UID: "48d9de0e-efea-443b-89a7-e02d3264020f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.173334 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-459zd\" (UniqueName: \"kubernetes.io/projected/48d9de0e-efea-443b-89a7-e02d3264020f-kube-api-access-459zd\") on node \"crc\" DevicePath \"\"" Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.173368 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48d9de0e-efea-443b-89a7-e02d3264020f-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.173379 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48d9de0e-efea-443b-89a7-e02d3264020f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.312348 4632 generic.go:334] "Generic (PLEG): container finished" podID="48d9de0e-efea-443b-89a7-e02d3264020f" containerID="a35447b594c379c0f9e9337a36f14fcb40355b64f78e7eff5cb0f3ce866933cc" exitCode=0 Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.312400 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9g65" event={"ID":"48d9de0e-efea-443b-89a7-e02d3264020f","Type":"ContainerDied","Data":"a35447b594c379c0f9e9337a36f14fcb40355b64f78e7eff5cb0f3ce866933cc"} Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.312429 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9g65" Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.312443 4632 scope.go:117] "RemoveContainer" containerID="a35447b594c379c0f9e9337a36f14fcb40355b64f78e7eff5cb0f3ce866933cc" Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.312431 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9g65" event={"ID":"48d9de0e-efea-443b-89a7-e02d3264020f","Type":"ContainerDied","Data":"791d787bc94f7eeb2e183a040f421e97af028bfea047785789f3496908b775af"} Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.357159 4632 scope.go:117] "RemoveContainer" containerID="e3f2fccdad049285eb344aa03ec4c45ce2e653e042872421419872537e65b0ce" Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.362256 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9g65"] Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.375174 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9g65"] Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.395386 4632 scope.go:117] "RemoveContainer" containerID="89e85aed0ace971ea27aa6e4381a5b9c378475b156c657d5f7f64cf10fffb7e7" Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.430207 4632 scope.go:117] "RemoveContainer" containerID="a35447b594c379c0f9e9337a36f14fcb40355b64f78e7eff5cb0f3ce866933cc" Mar 13 10:58:36 crc kubenswrapper[4632]: E0313 10:58:36.438347 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a35447b594c379c0f9e9337a36f14fcb40355b64f78e7eff5cb0f3ce866933cc\": container with ID starting with a35447b594c379c0f9e9337a36f14fcb40355b64f78e7eff5cb0f3ce866933cc not found: ID does not exist" containerID="a35447b594c379c0f9e9337a36f14fcb40355b64f78e7eff5cb0f3ce866933cc" Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.439203 4632 
Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.439246 4632 scope.go:117] "RemoveContainer" containerID="e3f2fccdad049285eb344aa03ec4c45ce2e653e042872421419872537e65b0ce"
Mar 13 10:58:36 crc kubenswrapper[4632]: E0313 10:58:36.439748 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3f2fccdad049285eb344aa03ec4c45ce2e653e042872421419872537e65b0ce\": container with ID starting with e3f2fccdad049285eb344aa03ec4c45ce2e653e042872421419872537e65b0ce not found: ID does not exist" containerID="e3f2fccdad049285eb344aa03ec4c45ce2e653e042872421419872537e65b0ce"
Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.439775 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3f2fccdad049285eb344aa03ec4c45ce2e653e042872421419872537e65b0ce"} err="failed to get container status \"e3f2fccdad049285eb344aa03ec4c45ce2e653e042872421419872537e65b0ce\": rpc error: code = NotFound desc = could not find container \"e3f2fccdad049285eb344aa03ec4c45ce2e653e042872421419872537e65b0ce\": container with ID starting with e3f2fccdad049285eb344aa03ec4c45ce2e653e042872421419872537e65b0ce not found: ID does not exist"
Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.439790 4632 scope.go:117] "RemoveContainer" containerID="89e85aed0ace971ea27aa6e4381a5b9c378475b156c657d5f7f64cf10fffb7e7"
Mar 13 10:58:36 crc kubenswrapper[4632]: E0313 10:58:36.440032 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89e85aed0ace971ea27aa6e4381a5b9c378475b156c657d5f7f64cf10fffb7e7\": container with ID starting with 89e85aed0ace971ea27aa6e4381a5b9c378475b156c657d5f7f64cf10fffb7e7 not found: ID does not exist" containerID="89e85aed0ace971ea27aa6e4381a5b9c378475b156c657d5f7f64cf10fffb7e7"
Mar 13 10:58:36 crc kubenswrapper[4632]: I0313 10:58:36.440051 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89e85aed0ace971ea27aa6e4381a5b9c378475b156c657d5f7f64cf10fffb7e7"} err="failed to get container status \"89e85aed0ace971ea27aa6e4381a5b9c378475b156c657d5f7f64cf10fffb7e7\": rpc error: code = NotFound desc = could not find container \"89e85aed0ace971ea27aa6e4381a5b9c378475b156c657d5f7f64cf10fffb7e7\": container with ID starting with 89e85aed0ace971ea27aa6e4381a5b9c378475b156c657d5f7f64cf10fffb7e7 not found: ID does not exist"
Mar 13 10:58:38 crc kubenswrapper[4632]: I0313 10:58:38.062349 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48d9de0e-efea-443b-89a7-e02d3264020f" path="/var/lib/kubelet/pods/48d9de0e-efea-443b-89a7-e02d3264020f/volumes"
Mar 13 10:58:49 crc kubenswrapper[4632]: I0313 10:58:49.044905 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 10:58:49 crc kubenswrapper[4632]: E0313 10:58:49.045937 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:59:00 crc kubenswrapper[4632]: I0313 10:59:00.044481 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d" Mar 13 10:59:00 crc kubenswrapper[4632]: E0313 10:59:00.045661 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 10:59:12 crc kubenswrapper[4632]: I0313 10:59:12.045768 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d" Mar 13 10:59:12 crc kubenswrapper[4632]: I0313 10:59:12.665883 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"06408f7526caaaeae759484ccf3ff85a146655a7d51ff7049c7be79b39fe96ba"} Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.674879 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556660-dxrjl"] Mar 13 11:00:00 crc kubenswrapper[4632]: E0313 11:00:00.683172 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c91e27-6596-4471-81e9-4a65e55379cc" containerName="registry-server" Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.683213 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c91e27-6596-4471-81e9-4a65e55379cc" containerName="registry-server" Mar 13 11:00:00 crc kubenswrapper[4632]: E0313 11:00:00.683546 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c91e27-6596-4471-81e9-4a65e55379cc" containerName="extract-utilities" Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.683561 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c91e27-6596-4471-81e9-4a65e55379cc" containerName="extract-utilities" Mar 13 11:00:00 crc kubenswrapper[4632]: E0313 11:00:00.683581 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d9de0e-efea-443b-89a7-e02d3264020f" containerName="registry-server" Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.683589 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d9de0e-efea-443b-89a7-e02d3264020f" containerName="registry-server" Mar 13 11:00:00 crc kubenswrapper[4632]: E0313 11:00:00.683609 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c91e27-6596-4471-81e9-4a65e55379cc" containerName="extract-content" Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.683616 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c91e27-6596-4471-81e9-4a65e55379cc" containerName="extract-content" Mar 13 11:00:00 crc kubenswrapper[4632]: E0313 11:00:00.683627 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d9de0e-efea-443b-89a7-e02d3264020f" containerName="extract-content" Mar 13 11:00:00 crc 
Mar 13 11:00:00 crc kubenswrapper[4632]: E0313 11:00:00.683643 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d9de0e-efea-443b-89a7-e02d3264020f" containerName="extract-utilities"
Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.683658 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d9de0e-efea-443b-89a7-e02d3264020f" containerName="extract-utilities"
Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.683906 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="48d9de0e-efea-443b-89a7-e02d3264020f" containerName="registry-server"
Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.683931 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c91e27-6596-4471-81e9-4a65e55379cc" containerName="registry-server"
Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.691477 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8"]
Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.696701 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8"
Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.696691 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556660-dxrjl"
Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.723594 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.723604 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.723703 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.723733 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.726600 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.750923 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8"]
Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.794075 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556660-dxrjl"]
Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.849996 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ktg5\" (UniqueName: \"kubernetes.io/projected/f506e288-f3da-4d62-a6a2-bb598a62ed13-kube-api-access-2ktg5\") pod \"collect-profiles-29556660-7vph8\" (UID: \"f506e288-f3da-4d62-a6a2-bb598a62ed13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8"
Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.850357 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f506e288-f3da-4d62-a6a2-bb598a62ed13-secret-volume\") pod \"collect-profiles-29556660-7vph8\" (UID: \"f506e288-f3da-4d62-a6a2-bb598a62ed13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8"
\"kubernetes.io/secret/f506e288-f3da-4d62-a6a2-bb598a62ed13-secret-volume\") pod \"collect-profiles-29556660-7vph8\" (UID: \"f506e288-f3da-4d62-a6a2-bb598a62ed13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8" Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.850738 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f506e288-f3da-4d62-a6a2-bb598a62ed13-config-volume\") pod \"collect-profiles-29556660-7vph8\" (UID: \"f506e288-f3da-4d62-a6a2-bb598a62ed13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8" Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.850819 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjskv\" (UniqueName: \"kubernetes.io/projected/ae414ebe-e9fa-4c30-965a-e368234bbb18-kube-api-access-cjskv\") pod \"auto-csr-approver-29556660-dxrjl\" (UID: \"ae414ebe-e9fa-4c30-965a-e368234bbb18\") " pod="openshift-infra/auto-csr-approver-29556660-dxrjl" Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.952302 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f506e288-f3da-4d62-a6a2-bb598a62ed13-config-volume\") pod \"collect-profiles-29556660-7vph8\" (UID: \"f506e288-f3da-4d62-a6a2-bb598a62ed13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8" Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.952367 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjskv\" (UniqueName: \"kubernetes.io/projected/ae414ebe-e9fa-4c30-965a-e368234bbb18-kube-api-access-cjskv\") pod \"auto-csr-approver-29556660-dxrjl\" (UID: \"ae414ebe-e9fa-4c30-965a-e368234bbb18\") " pod="openshift-infra/auto-csr-approver-29556660-dxrjl" Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.952492 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ktg5\" (UniqueName: \"kubernetes.io/projected/f506e288-f3da-4d62-a6a2-bb598a62ed13-kube-api-access-2ktg5\") pod \"collect-profiles-29556660-7vph8\" (UID: \"f506e288-f3da-4d62-a6a2-bb598a62ed13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8" Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.952522 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f506e288-f3da-4d62-a6a2-bb598a62ed13-secret-volume\") pod \"collect-profiles-29556660-7vph8\" (UID: \"f506e288-f3da-4d62-a6a2-bb598a62ed13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8" Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.962792 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f506e288-f3da-4d62-a6a2-bb598a62ed13-config-volume\") pod \"collect-profiles-29556660-7vph8\" (UID: \"f506e288-f3da-4d62-a6a2-bb598a62ed13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8" Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.977816 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f506e288-f3da-4d62-a6a2-bb598a62ed13-secret-volume\") pod \"collect-profiles-29556660-7vph8\" (UID: \"f506e288-f3da-4d62-a6a2-bb598a62ed13\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8" Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.979704 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ktg5\" (UniqueName: \"kubernetes.io/projected/f506e288-f3da-4d62-a6a2-bb598a62ed13-kube-api-access-2ktg5\") pod \"collect-profiles-29556660-7vph8\" (UID: \"f506e288-f3da-4d62-a6a2-bb598a62ed13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8" Mar 13 11:00:00 crc kubenswrapper[4632]: I0313 11:00:00.983798 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjskv\" (UniqueName: \"kubernetes.io/projected/ae414ebe-e9fa-4c30-965a-e368234bbb18-kube-api-access-cjskv\") pod \"auto-csr-approver-29556660-dxrjl\" (UID: \"ae414ebe-e9fa-4c30-965a-e368234bbb18\") " pod="openshift-infra/auto-csr-approver-29556660-dxrjl" Mar 13 11:00:01 crc kubenswrapper[4632]: I0313 11:00:01.088618 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8" Mar 13 11:00:01 crc kubenswrapper[4632]: I0313 11:00:01.126609 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556660-dxrjl" Mar 13 11:00:02 crc kubenswrapper[4632]: I0313 11:00:02.583698 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8"] Mar 13 11:00:02 crc kubenswrapper[4632]: I0313 11:00:02.598207 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556660-dxrjl"] Mar 13 11:00:03 crc kubenswrapper[4632]: I0313 11:00:03.245201 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8" event={"ID":"f506e288-f3da-4d62-a6a2-bb598a62ed13","Type":"ContainerStarted","Data":"df99b126bcdc13810e89ae823dc76bf43eab9d932c52b6dd430fa449a698c642"} Mar 13 11:00:03 crc kubenswrapper[4632]: I0313 11:00:03.245525 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8" event={"ID":"f506e288-f3da-4d62-a6a2-bb598a62ed13","Type":"ContainerStarted","Data":"34d565725937589b17417cd9a8d096a43573544687add3bac92bfe268458bb39"} Mar 13 11:00:03 crc kubenswrapper[4632]: I0313 11:00:03.247433 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556660-dxrjl" event={"ID":"ae414ebe-e9fa-4c30-965a-e368234bbb18","Type":"ContainerStarted","Data":"6ccc5a36abf2c1c45c76ea18c7f259f925ab314e33e7f60a14b3018f1f22a313"} Mar 13 11:00:03 crc kubenswrapper[4632]: I0313 11:00:03.273257 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8" podStartSLOduration=3.271039179 podStartE2EDuration="3.271039179s" podCreationTimestamp="2026-03-13 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:00:03.262089487 +0000 UTC m=+3377.284619640" watchObservedRunningTime="2026-03-13 11:00:03.271039179 +0000 UTC m=+3377.293569322" Mar 13 11:00:04 crc kubenswrapper[4632]: I0313 11:00:04.259404 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8" 
Mar 13 11:00:04 crc kubenswrapper[4632]: I0313 11:00:04.259885 4632 generic.go:334] "Generic (PLEG): container finished" podID="f506e288-f3da-4d62-a6a2-bb598a62ed13" containerID="df99b126bcdc13810e89ae823dc76bf43eab9d932c52b6dd430fa449a698c642" exitCode=0
Mar 13 11:00:05 crc kubenswrapper[4632]: I0313 11:00:05.885673 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8"
Mar 13 11:00:05 crc kubenswrapper[4632]: I0313 11:00:05.955997 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f506e288-f3da-4d62-a6a2-bb598a62ed13-config-volume\") pod \"f506e288-f3da-4d62-a6a2-bb598a62ed13\" (UID: \"f506e288-f3da-4d62-a6a2-bb598a62ed13\") "
Mar 13 11:00:05 crc kubenswrapper[4632]: I0313 11:00:05.956165 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f506e288-f3da-4d62-a6a2-bb598a62ed13-secret-volume\") pod \"f506e288-f3da-4d62-a6a2-bb598a62ed13\" (UID: \"f506e288-f3da-4d62-a6a2-bb598a62ed13\") "
Mar 13 11:00:05 crc kubenswrapper[4632]: I0313 11:00:05.956280 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ktg5\" (UniqueName: \"kubernetes.io/projected/f506e288-f3da-4d62-a6a2-bb598a62ed13-kube-api-access-2ktg5\") pod \"f506e288-f3da-4d62-a6a2-bb598a62ed13\" (UID: \"f506e288-f3da-4d62-a6a2-bb598a62ed13\") "
Mar 13 11:00:06 crc kubenswrapper[4632]: I0313 11:00:05.971904 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f506e288-f3da-4d62-a6a2-bb598a62ed13-config-volume" (OuterVolumeSpecName: "config-volume") pod "f506e288-f3da-4d62-a6a2-bb598a62ed13" (UID: "f506e288-f3da-4d62-a6a2-bb598a62ed13"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:00:06 crc kubenswrapper[4632]: I0313 11:00:06.062022 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f506e288-f3da-4d62-a6a2-bb598a62ed13-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f506e288-f3da-4d62-a6a2-bb598a62ed13" (UID: "f506e288-f3da-4d62-a6a2-bb598a62ed13"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:00:06 crc kubenswrapper[4632]: I0313 11:00:06.078878 4632 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f506e288-f3da-4d62-a6a2-bb598a62ed13-config-volume\") on node \"crc\" DevicePath \"\""
Mar 13 11:00:06 crc kubenswrapper[4632]: I0313 11:00:06.080221 4632 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f506e288-f3da-4d62-a6a2-bb598a62ed13-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 13 11:00:06 crc kubenswrapper[4632]: I0313 11:00:06.121199 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f506e288-f3da-4d62-a6a2-bb598a62ed13-kube-api-access-2ktg5" (OuterVolumeSpecName: "kube-api-access-2ktg5") pod "f506e288-f3da-4d62-a6a2-bb598a62ed13" (UID: "f506e288-f3da-4d62-a6a2-bb598a62ed13"). InnerVolumeSpecName "kube-api-access-2ktg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:00:06 crc kubenswrapper[4632]: I0313 11:00:06.182340 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ktg5\" (UniqueName: \"kubernetes.io/projected/f506e288-f3da-4d62-a6a2-bb598a62ed13-kube-api-access-2ktg5\") on node \"crc\" DevicePath \"\""
Mar 13 11:00:06 crc kubenswrapper[4632]: I0313 11:00:06.277411 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8" event={"ID":"f506e288-f3da-4d62-a6a2-bb598a62ed13","Type":"ContainerDied","Data":"34d565725937589b17417cd9a8d096a43573544687add3bac92bfe268458bb39"}
Mar 13 11:00:06 crc kubenswrapper[4632]: I0313 11:00:06.277469 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34d565725937589b17417cd9a8d096a43573544687add3bac92bfe268458bb39"
Mar 13 11:00:06 crc kubenswrapper[4632]: I0313 11:00:06.277552 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8"
Mar 13 11:00:06 crc kubenswrapper[4632]: E0313 11:00:06.365949 4632 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf506e288_f3da_4d62_a6a2_bb598a62ed13.slice/crio-34d565725937589b17417cd9a8d096a43573544687add3bac92bfe268458bb39\": RecentStats: unable to find data in memory cache]"
Mar 13 11:00:07 crc kubenswrapper[4632]: I0313 11:00:07.023932 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"]
Mar 13 11:00:07 crc kubenswrapper[4632]: I0313 11:00:07.032314 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556615-zj2fm"]
Mar 13 11:00:07 crc kubenswrapper[4632]: I0313 11:00:07.288073 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556660-dxrjl" event={"ID":"ae414ebe-e9fa-4c30-965a-e368234bbb18","Type":"ContainerStarted","Data":"b3d4b9e8bcea3a6dbdeee6316ce9071df3a8c8906a4c416a00caede29a1de5ca"}
Mar 13 11:00:07 crc kubenswrapper[4632]: I0313 11:00:07.305740 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556660-dxrjl" podStartSLOduration=4.645795119 podStartE2EDuration="7.305713231s" podCreationTimestamp="2026-03-13 11:00:00 +0000 UTC" firstStartedPulling="2026-03-13 11:00:02.657927528 +0000 UTC m=+3376.680457661" lastFinishedPulling="2026-03-13 11:00:05.31784564 +0000 UTC m=+3379.340375773" observedRunningTime="2026-03-13 11:00:07.30042434 +0000 UTC m=+3381.322954473" watchObservedRunningTime="2026-03-13 11:00:07.305713231 +0000 UTC m=+3381.328243374"
Mar 13 11:00:08 crc kubenswrapper[4632]: I0313 11:00:08.072524 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c3c6392-454c-4131-90a0-6584565cef4c" path="/var/lib/kubelet/pods/3c3c6392-454c-4131-90a0-6584565cef4c/volumes"
Mar 13 11:00:08 crc kubenswrapper[4632]: I0313 11:00:08.308821 4632 generic.go:334] "Generic (PLEG): container finished" podID="ae414ebe-e9fa-4c30-965a-e368234bbb18" containerID="b3d4b9e8bcea3a6dbdeee6316ce9071df3a8c8906a4c416a00caede29a1de5ca" exitCode=0
Mar 13 11:00:08 crc kubenswrapper[4632]: I0313 11:00:08.308876 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556660-dxrjl" event={"ID":"ae414ebe-e9fa-4c30-965a-e368234bbb18","Type":"ContainerDied","Data":"b3d4b9e8bcea3a6dbdeee6316ce9071df3a8c8906a4c416a00caede29a1de5ca"}
Mar 13 11:00:09 crc kubenswrapper[4632]: I0313 11:00:09.897970 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556660-dxrjl"
Mar 13 11:00:10 crc kubenswrapper[4632]: I0313 11:00:10.056228 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjskv\" (UniqueName: \"kubernetes.io/projected/ae414ebe-e9fa-4c30-965a-e368234bbb18-kube-api-access-cjskv\") pod \"ae414ebe-e9fa-4c30-965a-e368234bbb18\" (UID: \"ae414ebe-e9fa-4c30-965a-e368234bbb18\") "
Mar 13 11:00:10 crc kubenswrapper[4632]: I0313 11:00:10.068403 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae414ebe-e9fa-4c30-965a-e368234bbb18-kube-api-access-cjskv" (OuterVolumeSpecName: "kube-api-access-cjskv") pod "ae414ebe-e9fa-4c30-965a-e368234bbb18" (UID: "ae414ebe-e9fa-4c30-965a-e368234bbb18"). InnerVolumeSpecName "kube-api-access-cjskv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:00:10 crc kubenswrapper[4632]: I0313 11:00:10.158983 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjskv\" (UniqueName: \"kubernetes.io/projected/ae414ebe-e9fa-4c30-965a-e368234bbb18-kube-api-access-cjskv\") on node \"crc\" DevicePath \"\""
Mar 13 11:00:10 crc kubenswrapper[4632]: I0313 11:00:10.325979 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556660-dxrjl" event={"ID":"ae414ebe-e9fa-4c30-965a-e368234bbb18","Type":"ContainerDied","Data":"6ccc5a36abf2c1c45c76ea18c7f259f925ab314e33e7f60a14b3018f1f22a313"}
Mar 13 11:00:10 crc kubenswrapper[4632]: I0313 11:00:10.326027 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ccc5a36abf2c1c45c76ea18c7f259f925ab314e33e7f60a14b3018f1f22a313"
Mar 13 11:00:10 crc kubenswrapper[4632]: I0313 11:00:10.326068 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556660-dxrjl"
Mar 13 11:00:10 crc kubenswrapper[4632]: I0313 11:00:10.388699 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556654-htcrm"]
Mar 13 11:00:10 crc kubenswrapper[4632]: I0313 11:00:10.420478 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556654-htcrm"]
Mar 13 11:00:10 crc kubenswrapper[4632]: I0313 11:00:10.928040 4632 scope.go:117] "RemoveContainer" containerID="6acdfd407705651773e15ca9493f2efdac886dce6f04123c798b57f93aa775b6"
Mar 13 11:00:12 crc kubenswrapper[4632]: I0313 11:00:12.060216 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba5d59c2-ece8-4b66-9a10-c3ef740d7e45" path="/var/lib/kubelet/pods/ba5d59c2-ece8-4b66-9a10-c3ef740d7e45/volumes"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.137196 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29556661-2p4wf"]
Mar 13 11:01:01 crc kubenswrapper[4632]: E0313 11:01:01.142839 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f506e288-f3da-4d62-a6a2-bb598a62ed13" containerName="collect-profiles"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.142869 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f506e288-f3da-4d62-a6a2-bb598a62ed13" containerName="collect-profiles"
Mar 13 11:01:01 crc kubenswrapper[4632]: E0313 11:01:01.143883 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae414ebe-e9fa-4c30-965a-e368234bbb18" containerName="oc"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.143913 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae414ebe-e9fa-4c30-965a-e368234bbb18" containerName="oc"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.145911 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f506e288-f3da-4d62-a6a2-bb598a62ed13" containerName="collect-profiles"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.145969 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae414ebe-e9fa-4c30-965a-e368234bbb18" containerName="oc"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.160794 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29556661-2p4wf"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.262751 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwj9v\" (UniqueName: \"kubernetes.io/projected/6c20fa3e-2873-4076-b17a-3ee171199959-kube-api-access-hwj9v\") pod \"keystone-cron-29556661-2p4wf\" (UID: \"6c20fa3e-2873-4076-b17a-3ee171199959\") " pod="openstack/keystone-cron-29556661-2p4wf"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.263032 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-config-data\") pod \"keystone-cron-29556661-2p4wf\" (UID: \"6c20fa3e-2873-4076-b17a-3ee171199959\") " pod="openstack/keystone-cron-29556661-2p4wf"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.263138 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-fernet-keys\") pod \"keystone-cron-29556661-2p4wf\" (UID: \"6c20fa3e-2873-4076-b17a-3ee171199959\") " pod="openstack/keystone-cron-29556661-2p4wf"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.263172 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-combined-ca-bundle\") pod \"keystone-cron-29556661-2p4wf\" (UID: \"6c20fa3e-2873-4076-b17a-3ee171199959\") " pod="openstack/keystone-cron-29556661-2p4wf"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.363017 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29556661-2p4wf"]
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.364764 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-config-data\") pod \"keystone-cron-29556661-2p4wf\" (UID: \"6c20fa3e-2873-4076-b17a-3ee171199959\") " pod="openstack/keystone-cron-29556661-2p4wf"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.364864 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-fernet-keys\") pod \"keystone-cron-29556661-2p4wf\" (UID: \"6c20fa3e-2873-4076-b17a-3ee171199959\") " pod="openstack/keystone-cron-29556661-2p4wf"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.364900 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-combined-ca-bundle\") pod \"keystone-cron-29556661-2p4wf\" (UID: \"6c20fa3e-2873-4076-b17a-3ee171199959\") " pod="openstack/keystone-cron-29556661-2p4wf"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.365029 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwj9v\" (UniqueName: \"kubernetes.io/projected/6c20fa3e-2873-4076-b17a-3ee171199959-kube-api-access-hwj9v\") pod \"keystone-cron-29556661-2p4wf\" (UID: \"6c20fa3e-2873-4076-b17a-3ee171199959\") " pod="openstack/keystone-cron-29556661-2p4wf"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.411556 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwj9v\" (UniqueName: \"kubernetes.io/projected/6c20fa3e-2873-4076-b17a-3ee171199959-kube-api-access-hwj9v\") pod \"keystone-cron-29556661-2p4wf\" (UID: \"6c20fa3e-2873-4076-b17a-3ee171199959\") " pod="openstack/keystone-cron-29556661-2p4wf"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.414840 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-combined-ca-bundle\") pod \"keystone-cron-29556661-2p4wf\" (UID: \"6c20fa3e-2873-4076-b17a-3ee171199959\") " pod="openstack/keystone-cron-29556661-2p4wf"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.419393 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-config-data\") pod \"keystone-cron-29556661-2p4wf\" (UID: \"6c20fa3e-2873-4076-b17a-3ee171199959\") " pod="openstack/keystone-cron-29556661-2p4wf"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.423815 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-fernet-keys\") pod \"keystone-cron-29556661-2p4wf\" (UID: \"6c20fa3e-2873-4076-b17a-3ee171199959\") " pod="openstack/keystone-cron-29556661-2p4wf"
Mar 13 11:01:01 crc kubenswrapper[4632]: I0313 11:01:01.528455 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29556661-2p4wf"
Mar 13 11:01:03 crc kubenswrapper[4632]: I0313 11:01:03.082761 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29556661-2p4wf"]
Mar 13 11:01:03 crc kubenswrapper[4632]: I0313 11:01:03.806927 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29556661-2p4wf" event={"ID":"6c20fa3e-2873-4076-b17a-3ee171199959","Type":"ContainerStarted","Data":"50957f6ac51a786ec085c100c45b3293f56673563593051170c3bcfe4bce73f5"}
Mar 13 11:01:03 crc kubenswrapper[4632]: I0313 11:01:03.808427 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29556661-2p4wf" event={"ID":"6c20fa3e-2873-4076-b17a-3ee171199959","Type":"ContainerStarted","Data":"c1622395247bf78009a91c0c227d8aea33bd7858a8489000b7092798dfc7bfe0"}
Mar 13 11:01:03 crc kubenswrapper[4632]: I0313 11:01:03.864184 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29556661-2p4wf" podStartSLOduration=3.8623800299999997 podStartE2EDuration="3.86238003s" podCreationTimestamp="2026-03-13 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:01:03.856938545 +0000 UTC m=+3437.879468688" watchObservedRunningTime="2026-03-13 11:01:03.86238003 +0000 UTC m=+3437.884910163"
Mar 13 11:01:09 crc kubenswrapper[4632]: I0313 11:01:09.888154 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29556661-2p4wf" event={"ID":"6c20fa3e-2873-4076-b17a-3ee171199959","Type":"ContainerDied","Data":"50957f6ac51a786ec085c100c45b3293f56673563593051170c3bcfe4bce73f5"}
Mar 13 11:01:09 crc kubenswrapper[4632]: I0313 11:01:09.888604 4632 generic.go:334] "Generic (PLEG): container finished" podID="6c20fa3e-2873-4076-b17a-3ee171199959" containerID="50957f6ac51a786ec085c100c45b3293f56673563593051170c3bcfe4bce73f5" exitCode=0
Mar 13 11:01:11 crc kubenswrapper[4632]: I0313 11:01:11.104744 4632 scope.go:117] "RemoveContainer" containerID="0151dac58382ec9dba1fe485dee8519ba248333bc8e6aeae5349b66a4c5fa931"
Mar 13 11:01:12 crc kubenswrapper[4632]: I0313 11:01:12.505436 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29556661-2p4wf"
Mar 13 11:01:12 crc kubenswrapper[4632]: I0313 11:01:12.642194 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-fernet-keys\") pod \"6c20fa3e-2873-4076-b17a-3ee171199959\" (UID: \"6c20fa3e-2873-4076-b17a-3ee171199959\") "
Mar 13 11:01:12 crc kubenswrapper[4632]: I0313 11:01:12.642328 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwj9v\" (UniqueName: \"kubernetes.io/projected/6c20fa3e-2873-4076-b17a-3ee171199959-kube-api-access-hwj9v\") pod \"6c20fa3e-2873-4076-b17a-3ee171199959\" (UID: \"6c20fa3e-2873-4076-b17a-3ee171199959\") "
Mar 13 11:01:12 crc kubenswrapper[4632]: I0313 11:01:12.642532 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-combined-ca-bundle\") pod \"6c20fa3e-2873-4076-b17a-3ee171199959\" (UID: \"6c20fa3e-2873-4076-b17a-3ee171199959\") "
Mar 13 11:01:12 crc kubenswrapper[4632]: I0313 11:01:12.642588 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-config-data\") pod \"6c20fa3e-2873-4076-b17a-3ee171199959\" (UID: \"6c20fa3e-2873-4076-b17a-3ee171199959\") "
Mar 13 11:01:12 crc kubenswrapper[4632]: I0313 11:01:12.694791 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c20fa3e-2873-4076-b17a-3ee171199959-kube-api-access-hwj9v" (OuterVolumeSpecName: "kube-api-access-hwj9v") pod "6c20fa3e-2873-4076-b17a-3ee171199959" (UID: "6c20fa3e-2873-4076-b17a-3ee171199959"). InnerVolumeSpecName "kube-api-access-hwj9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:01:12 crc kubenswrapper[4632]: I0313 11:01:12.696660 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6c20fa3e-2873-4076-b17a-3ee171199959" (UID: "6c20fa3e-2873-4076-b17a-3ee171199959"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:01:12 crc kubenswrapper[4632]: I0313 11:01:12.724977 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c20fa3e-2873-4076-b17a-3ee171199959" (UID: "6c20fa3e-2873-4076-b17a-3ee171199959"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:01:12 crc kubenswrapper[4632]: I0313 11:01:12.736869 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-config-data" (OuterVolumeSpecName: "config-data") pod "6c20fa3e-2873-4076-b17a-3ee171199959" (UID: "6c20fa3e-2873-4076-b17a-3ee171199959"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:01:12 crc kubenswrapper[4632]: I0313 11:01:12.745603 4632 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-fernet-keys\") on node \"crc\" DevicePath \"\""
Mar 13 11:01:12 crc kubenswrapper[4632]: I0313 11:01:12.745801 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwj9v\" (UniqueName: \"kubernetes.io/projected/6c20fa3e-2873-4076-b17a-3ee171199959-kube-api-access-hwj9v\") on node \"crc\" DevicePath \"\""
Mar 13 11:01:12 crc kubenswrapper[4632]: I0313 11:01:12.745885 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 11:01:12 crc kubenswrapper[4632]: I0313 11:01:12.746170 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c20fa3e-2873-4076-b17a-3ee171199959-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 11:01:12 crc kubenswrapper[4632]: I0313 11:01:12.921550 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29556661-2p4wf" event={"ID":"6c20fa3e-2873-4076-b17a-3ee171199959","Type":"ContainerDied","Data":"c1622395247bf78009a91c0c227d8aea33bd7858a8489000b7092798dfc7bfe0"}
Mar 13 11:01:12 crc kubenswrapper[4632]: I0313 11:01:12.922125 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29556661-2p4wf"
Mar 13 11:01:12 crc kubenswrapper[4632]: I0313 11:01:12.922521 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1622395247bf78009a91c0c227d8aea33bd7858a8489000b7092798dfc7bfe0"
Mar 13 11:01:40 crc kubenswrapper[4632]: I0313 11:01:40.470500 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 11:01:40 crc kubenswrapper[4632]: I0313 11:01:40.482260 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 11:02:00 crc kubenswrapper[4632]: I0313 11:02:00.817260 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556662-pw9tk"]
Mar 13 11:02:00 crc kubenswrapper[4632]: E0313 11:02:00.821758 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c20fa3e-2873-4076-b17a-3ee171199959" containerName="keystone-cron"
Mar 13 11:02:00 crc kubenswrapper[4632]: I0313 11:02:00.822245 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c20fa3e-2873-4076-b17a-3ee171199959" containerName="keystone-cron"
Mar 13 11:02:00 crc kubenswrapper[4632]: I0313 11:02:00.824993 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c20fa3e-2873-4076-b17a-3ee171199959" containerName="keystone-cron"
Mar 13 11:02:00 crc kubenswrapper[4632]: I0313 11:02:00.833305 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556662-pw9tk"
Mar 13 11:02:00 crc kubenswrapper[4632]: I0313 11:02:00.851620 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 11:02:00 crc kubenswrapper[4632]: I0313 11:02:00.851636 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 11:02:00 crc kubenswrapper[4632]: I0313 11:02:00.851651 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 11:02:00 crc kubenswrapper[4632]: I0313 11:02:00.927962 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbhfw\" (UniqueName: \"kubernetes.io/projected/62b0f696-9e5c-4535-a181-fa2f4b645711-kube-api-access-hbhfw\") pod \"auto-csr-approver-29556662-pw9tk\" (UID: \"62b0f696-9e5c-4535-a181-fa2f4b645711\") " pod="openshift-infra/auto-csr-approver-29556662-pw9tk"
Mar 13 11:02:00 crc kubenswrapper[4632]: I0313 11:02:00.998568 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556662-pw9tk"]
Mar 13 11:02:01 crc kubenswrapper[4632]: I0313 11:02:01.030000 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbhfw\" (UniqueName: \"kubernetes.io/projected/62b0f696-9e5c-4535-a181-fa2f4b645711-kube-api-access-hbhfw\") pod \"auto-csr-approver-29556662-pw9tk\" (UID: \"62b0f696-9e5c-4535-a181-fa2f4b645711\") " pod="openshift-infra/auto-csr-approver-29556662-pw9tk"
Mar 13 11:02:01 crc kubenswrapper[4632]: I0313 11:02:01.084852 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbhfw\" (UniqueName: \"kubernetes.io/projected/62b0f696-9e5c-4535-a181-fa2f4b645711-kube-api-access-hbhfw\") pod \"auto-csr-approver-29556662-pw9tk\" (UID: \"62b0f696-9e5c-4535-a181-fa2f4b645711\") " pod="openshift-infra/auto-csr-approver-29556662-pw9tk"
Mar 13 11:02:01 crc kubenswrapper[4632]: I0313 11:02:01.169122 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556662-pw9tk"
Mar 13 11:02:03 crc kubenswrapper[4632]: I0313 11:02:03.038114 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556662-pw9tk"]
Mar 13 11:02:03 crc kubenswrapper[4632]: I0313 11:02:03.390324 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556662-pw9tk" event={"ID":"62b0f696-9e5c-4535-a181-fa2f4b645711","Type":"ContainerStarted","Data":"1af7d2f0857a2610ddaf36212c411393a094b387bb07f0eb12ad83a933bc22f0"}
Mar 13 11:02:05 crc kubenswrapper[4632]: I0313 11:02:05.411467 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556662-pw9tk" event={"ID":"62b0f696-9e5c-4535-a181-fa2f4b645711","Type":"ContainerStarted","Data":"69d080c6683237a330690584133c6005521df29f2dcf4c21ed9a518e4de4e991"}
Mar 13 11:02:05 crc kubenswrapper[4632]: I0313 11:02:05.435409 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556662-pw9tk" podStartSLOduration=4.413347009 podStartE2EDuration="5.434390052s" podCreationTimestamp="2026-03-13 11:02:00 +0000 UTC" firstStartedPulling="2026-03-13 11:02:03.090780692 +0000 UTC m=+3497.113310825" lastFinishedPulling="2026-03-13 11:02:04.111823735 +0000 UTC m=+3498.134353868" observedRunningTime="2026-03-13 11:02:05.430828474 +0000 UTC m=+3499.453358627" watchObservedRunningTime="2026-03-13 11:02:05.434390052 +0000 UTC m=+3499.456920185"
Mar 13 11:02:07 crc kubenswrapper[4632]: I0313 11:02:07.454786 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556662-pw9tk" event={"ID":"62b0f696-9e5c-4535-a181-fa2f4b645711","Type":"ContainerDied","Data":"69d080c6683237a330690584133c6005521df29f2dcf4c21ed9a518e4de4e991"}
Mar 13 11:02:07 crc kubenswrapper[4632]: I0313 11:02:07.454715 4632 generic.go:334] "Generic (PLEG): container finished" podID="62b0f696-9e5c-4535-a181-fa2f4b645711" containerID="69d080c6683237a330690584133c6005521df29f2dcf4c21ed9a518e4de4e991" exitCode=0
Mar 13 11:02:09 crc kubenswrapper[4632]: I0313 11:02:09.900893 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556662-pw9tk"
Mar 13 11:02:10 crc kubenswrapper[4632]: I0313 11:02:09.999479 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbhfw\" (UniqueName: \"kubernetes.io/projected/62b0f696-9e5c-4535-a181-fa2f4b645711-kube-api-access-hbhfw\") pod \"62b0f696-9e5c-4535-a181-fa2f4b645711\" (UID: \"62b0f696-9e5c-4535-a181-fa2f4b645711\") "
Mar 13 11:02:10 crc kubenswrapper[4632]: I0313 11:02:10.044238 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62b0f696-9e5c-4535-a181-fa2f4b645711-kube-api-access-hbhfw" (OuterVolumeSpecName: "kube-api-access-hbhfw") pod "62b0f696-9e5c-4535-a181-fa2f4b645711" (UID: "62b0f696-9e5c-4535-a181-fa2f4b645711"). InnerVolumeSpecName "kube-api-access-hbhfw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:02:10 crc kubenswrapper[4632]: I0313 11:02:10.101920 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbhfw\" (UniqueName: \"kubernetes.io/projected/62b0f696-9e5c-4535-a181-fa2f4b645711-kube-api-access-hbhfw\") on node \"crc\" DevicePath \"\""
Mar 13 11:02:10 crc kubenswrapper[4632]: I0313 11:02:10.463498 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 11:02:10 crc kubenswrapper[4632]: I0313 11:02:10.465242 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 11:02:10 crc kubenswrapper[4632]: I0313 11:02:10.489492 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556662-pw9tk" event={"ID":"62b0f696-9e5c-4535-a181-fa2f4b645711","Type":"ContainerDied","Data":"1af7d2f0857a2610ddaf36212c411393a094b387bb07f0eb12ad83a933bc22f0"}
Mar 13 11:02:10 crc kubenswrapper[4632]: I0313 11:02:10.489542 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1af7d2f0857a2610ddaf36212c411393a094b387bb07f0eb12ad83a933bc22f0"
Mar 13 11:02:10 crc kubenswrapper[4632]: I0313 11:02:10.489553 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556662-pw9tk"
Mar 13 11:02:11 crc kubenswrapper[4632]: I0313 11:02:11.049061 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556656-vzb8p"]
Mar 13 11:02:11 crc kubenswrapper[4632]: I0313 11:02:11.068997 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556656-vzb8p"]
Mar 13 11:02:12 crc kubenswrapper[4632]: I0313 11:02:12.057850 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f70e2037-a5d6-4479-af7f-18fe8ff9e952" path="/var/lib/kubelet/pods/f70e2037-a5d6-4479-af7f-18fe8ff9e952/volumes"
Mar 13 11:02:40 crc kubenswrapper[4632]: I0313 11:02:40.465015 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 11:02:40 crc kubenswrapper[4632]: I0313 11:02:40.469605 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 11:02:40 crc kubenswrapper[4632]: I0313 11:02:40.472251 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb"
Mar 13 11:02:40 crc kubenswrapper[4632]: I0313 11:02:40.479160 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"06408f7526caaaeae759484ccf3ff85a146655a7d51ff7049c7be79b39fe96ba"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 13 11:02:40 crc kubenswrapper[4632]: I0313 11:02:40.479300 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://06408f7526caaaeae759484ccf3ff85a146655a7d51ff7049c7be79b39fe96ba" gracePeriod=600
Mar 13 11:02:40 crc kubenswrapper[4632]: I0313 11:02:40.806598 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="06408f7526caaaeae759484ccf3ff85a146655a7d51ff7049c7be79b39fe96ba" exitCode=0
Mar 13 11:02:40 crc kubenswrapper[4632]: I0313 11:02:40.806666 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"06408f7526caaaeae759484ccf3ff85a146655a7d51ff7049c7be79b39fe96ba"}
Mar 13 11:02:40 crc kubenswrapper[4632]: I0313 11:02:40.809183 4632 scope.go:117] "RemoveContainer" containerID="d62fcc7d7dd37c1e59dee28bd69ab3bfac7e5412873fdfe93b8d8f0639424c9d"
Mar 13 11:02:41 crc kubenswrapper[4632]: I0313 11:02:41.818695 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582"}
Mar 13 11:02:54 crc kubenswrapper[4632]: I0313 11:02:54.010437 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mm6fq"]
Mar 13 11:02:54 crc kubenswrapper[4632]: E0313 11:02:54.012465 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b0f696-9e5c-4535-a181-fa2f4b645711" containerName="oc"
Mar 13 11:02:54 crc kubenswrapper[4632]: I0313 11:02:54.012514 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b0f696-9e5c-4535-a181-fa2f4b645711" containerName="oc"
Mar 13 11:02:54 crc kubenswrapper[4632]: I0313 11:02:54.012965 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b0f696-9e5c-4535-a181-fa2f4b645711" containerName="oc"
Mar 13 11:02:54 crc kubenswrapper[4632]: I0313 11:02:54.022019 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mm6fq"
Mar 13 11:02:54 crc kubenswrapper[4632]: I0313 11:02:54.126361 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mm6fq"]
Mar 13 11:02:54 crc kubenswrapper[4632]: I0313 11:02:54.183092 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tfk7\" (UniqueName: \"kubernetes.io/projected/269ac923-f4f9-43f2-934f-8b0f26f6c4af-kube-api-access-9tfk7\") pod \"redhat-operators-mm6fq\" (UID: \"269ac923-f4f9-43f2-934f-8b0f26f6c4af\") " pod="openshift-marketplace/redhat-operators-mm6fq"
Mar 13 11:02:54 crc kubenswrapper[4632]: I0313 11:02:54.183253 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269ac923-f4f9-43f2-934f-8b0f26f6c4af-catalog-content\") pod \"redhat-operators-mm6fq\" (UID: \"269ac923-f4f9-43f2-934f-8b0f26f6c4af\") " pod="openshift-marketplace/redhat-operators-mm6fq"
Mar 13 11:02:54 crc kubenswrapper[4632]: I0313 11:02:54.183275 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269ac923-f4f9-43f2-934f-8b0f26f6c4af-utilities\") pod \"redhat-operators-mm6fq\" (UID: \"269ac923-f4f9-43f2-934f-8b0f26f6c4af\") " pod="openshift-marketplace/redhat-operators-mm6fq"
Mar 13 11:02:54 crc kubenswrapper[4632]: I0313 11:02:54.285095 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269ac923-f4f9-43f2-934f-8b0f26f6c4af-catalog-content\") pod \"redhat-operators-mm6fq\" (UID: \"269ac923-f4f9-43f2-934f-8b0f26f6c4af\") " pod="openshift-marketplace/redhat-operators-mm6fq"
Mar 13 11:02:54 crc kubenswrapper[4632]: I0313 11:02:54.285466 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269ac923-f4f9-43f2-934f-8b0f26f6c4af-utilities\") pod \"redhat-operators-mm6fq\" (UID: \"269ac923-f4f9-43f2-934f-8b0f26f6c4af\") " pod="openshift-marketplace/redhat-operators-mm6fq"
Mar 13 11:02:54 crc kubenswrapper[4632]: I0313 11:02:54.285563 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tfk7\" (UniqueName: \"kubernetes.io/projected/269ac923-f4f9-43f2-934f-8b0f26f6c4af-kube-api-access-9tfk7\") pod \"redhat-operators-mm6fq\" (UID: \"269ac923-f4f9-43f2-934f-8b0f26f6c4af\") " pod="openshift-marketplace/redhat-operators-mm6fq"
Mar 13 11:02:54 crc kubenswrapper[4632]: I0313 11:02:54.394140 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269ac923-f4f9-43f2-934f-8b0f26f6c4af-utilities\") pod \"redhat-operators-mm6fq\" (UID: \"269ac923-f4f9-43f2-934f-8b0f26f6c4af\") " pod="openshift-marketplace/redhat-operators-mm6fq"
Mar 13 11:02:54 crc kubenswrapper[4632]: I0313 11:02:54.448901 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269ac923-f4f9-43f2-934f-8b0f26f6c4af-catalog-content\") pod \"redhat-operators-mm6fq\" (UID: \"269ac923-f4f9-43f2-934f-8b0f26f6c4af\") " pod="openshift-marketplace/redhat-operators-mm6fq"
Mar 13 11:02:54 crc kubenswrapper[4632]: I0313 11:02:54.503779 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tfk7\" (UniqueName: \"kubernetes.io/projected/269ac923-f4f9-43f2-934f-8b0f26f6c4af-kube-api-access-9tfk7\") pod \"redhat-operators-mm6fq\" (UID: \"269ac923-f4f9-43f2-934f-8b0f26f6c4af\") " pod="openshift-marketplace/redhat-operators-mm6fq"
Mar 13 11:02:54 crc kubenswrapper[4632]: I0313 11:02:54.670161 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mm6fq"
Mar 13 11:02:56 crc kubenswrapper[4632]: I0313 11:02:56.406099 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mm6fq"]
Mar 13 11:02:56 crc kubenswrapper[4632]: W0313 11:02:56.447815 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod269ac923_f4f9_43f2_934f_8b0f26f6c4af.slice/crio-c53342fac7b10f1bef54be90bcf0e83cc2e423f561f4ea27cefd68e4947d5bb4 WatchSource:0}: Error finding container c53342fac7b10f1bef54be90bcf0e83cc2e423f561f4ea27cefd68e4947d5bb4: Status 404 returned error can't find the container with id c53342fac7b10f1bef54be90bcf0e83cc2e423f561f4ea27cefd68e4947d5bb4
Mar 13 11:02:56 crc kubenswrapper[4632]: I0313 11:02:56.981602 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm6fq" event={"ID":"269ac923-f4f9-43f2-934f-8b0f26f6c4af","Type":"ContainerDied","Data":"93ddcb9911b3bbd33b20a1520c077d1ce20ed42dceb52f18631471d802d7e139"}
Mar 13 11:02:56 crc kubenswrapper[4632]: I0313 11:02:56.982657 4632 generic.go:334] "Generic (PLEG): container finished" podID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerID="93ddcb9911b3bbd33b20a1520c077d1ce20ed42dceb52f18631471d802d7e139" exitCode=0
Mar 13 11:02:56 crc kubenswrapper[4632]: I0313 11:02:56.983723 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm6fq" event={"ID":"269ac923-f4f9-43f2-934f-8b0f26f6c4af","Type":"ContainerStarted","Data":"c53342fac7b10f1bef54be90bcf0e83cc2e423f561f4ea27cefd68e4947d5bb4"}
Mar 13 11:02:56 crc kubenswrapper[4632]: I0313 11:02:56.988395 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 11:03:00 crc kubenswrapper[4632]: I0313 11:03:00.028101 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm6fq" event={"ID":"269ac923-f4f9-43f2-934f-8b0f26f6c4af","Type":"ContainerStarted","Data":"d03ceb30503b22a2ad94cc53347c3b0ae54c134bb2b9db1bd0c47dcfc27a8ece"}
Mar 13 11:03:09 crc kubenswrapper[4632]: I0313 11:03:09.239060 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm6fq" event={"ID":"269ac923-f4f9-43f2-934f-8b0f26f6c4af","Type":"ContainerDied","Data":"d03ceb30503b22a2ad94cc53347c3b0ae54c134bb2b9db1bd0c47dcfc27a8ece"}
Mar 13 11:03:09 crc kubenswrapper[4632]: I0313 11:03:09.239561 4632 generic.go:334] "Generic (PLEG): container finished" podID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerID="d03ceb30503b22a2ad94cc53347c3b0ae54c134bb2b9db1bd0c47dcfc27a8ece" exitCode=0
Mar 13 11:03:11 crc kubenswrapper[4632]: I0313 11:03:11.262130 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm6fq" event={"ID":"269ac923-f4f9-43f2-934f-8b0f26f6c4af","Type":"ContainerStarted","Data":"67b531d65834622b374c34e759c46150ba93cade0961705aa2b576c0c27e19d2"}
Mar 13 11:03:11 crc kubenswrapper[4632]: I0313 11:03:11.306393 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mm6fq" podStartSLOduration=5.142205059 podStartE2EDuration="18.303378772s" podCreationTimestamp="2026-03-13 11:02:53 +0000 UTC" firstStartedPulling="2026-03-13 11:02:56.984219177 +0000 UTC m=+3551.006749310" lastFinishedPulling="2026-03-13 11:03:10.14539289 +0000 UTC m=+3564.167923023" observedRunningTime="2026-03-13 11:03:11.301077695 +0000 UTC m=+3565.323607828" watchObservedRunningTime="2026-03-13 11:03:11.303378772 +0000 UTC m=+3565.325908905"
Mar 13 11:03:12 crc kubenswrapper[4632]: I0313 11:03:12.006167 4632 scope.go:117] "RemoveContainer" containerID="6dd075c6962fa13da67ea22e1c7e0f24f4fdd06a675abd3b301b6ea671a2f51e"
Mar 13 11:03:14 crc kubenswrapper[4632]: I0313 11:03:14.671738 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mm6fq"
Mar 13 11:03:14 crc kubenswrapper[4632]: I0313 11:03:14.672371 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mm6fq"
Mar 13 11:03:15 crc kubenswrapper[4632]: I0313 11:03:15.734487 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=<
Mar 13 11:03:15 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 11:03:15 crc kubenswrapper[4632]: >
Mar 13 11:03:25 crc kubenswrapper[4632]: I0313 11:03:25.753742 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=<
Mar 13 11:03:25 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 11:03:25 crc kubenswrapper[4632]: >
Mar 13 11:03:35 crc kubenswrapper[4632]: I0313 11:03:35.725626 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=<
Mar 13 11:03:35 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 11:03:35 crc kubenswrapper[4632]: >
Mar 13 11:03:45 crc kubenswrapper[4632]: I0313 11:03:45.768324 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=<
Mar 13 11:03:45 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 11:03:45 crc kubenswrapper[4632]: >
Mar 13 11:03:55 crc kubenswrapper[4632]: I0313 11:03:55.736845 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=<
Mar 13 11:03:55 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 11:03:55 crc kubenswrapper[4632]: >
Mar 13 11:04:01 crc kubenswrapper[4632]: I0313 11:04:01.342186 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556664-vgmtg"]
Mar 13 11:04:01 crc kubenswrapper[4632]: I0313 11:04:01.375829 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556664-vgmtg"
Mar 13 11:04:01 crc kubenswrapper[4632]: I0313 11:04:01.459644 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 11:04:01 crc kubenswrapper[4632]: I0313 11:04:01.459647 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 11:04:01 crc kubenswrapper[4632]: I0313 11:04:01.459653 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 11:04:01 crc kubenswrapper[4632]: I0313 11:04:01.558458 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqslx\" (UniqueName: \"kubernetes.io/projected/6ecca46c-1e06-43be-bacc-eae4a1a474b7-kube-api-access-xqslx\") pod \"auto-csr-approver-29556664-vgmtg\" (UID: \"6ecca46c-1e06-43be-bacc-eae4a1a474b7\") " pod="openshift-infra/auto-csr-approver-29556664-vgmtg"
Mar 13 11:04:01 crc kubenswrapper[4632]: I0313 11:04:01.661263 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqslx\" (UniqueName: \"kubernetes.io/projected/6ecca46c-1e06-43be-bacc-eae4a1a474b7-kube-api-access-xqslx\") pod \"auto-csr-approver-29556664-vgmtg\" (UID: \"6ecca46c-1e06-43be-bacc-eae4a1a474b7\") " pod="openshift-infra/auto-csr-approver-29556664-vgmtg"
Mar 13 11:04:01 crc kubenswrapper[4632]: I0313 11:04:01.914145 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqslx\" (UniqueName: \"kubernetes.io/projected/6ecca46c-1e06-43be-bacc-eae4a1a474b7-kube-api-access-xqslx\") pod \"auto-csr-approver-29556664-vgmtg\" (UID: \"6ecca46c-1e06-43be-bacc-eae4a1a474b7\") " pod="openshift-infra/auto-csr-approver-29556664-vgmtg"
Mar 13 11:04:02 crc kubenswrapper[4632]: I0313 11:04:02.041550 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556664-vgmtg"
Mar 13 11:04:02 crc kubenswrapper[4632]: I0313 11:04:02.504859 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556664-vgmtg"]
Mar 13 11:04:04 crc kubenswrapper[4632]: I0313 11:04:04.031615 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556664-vgmtg"]
Mar 13 11:04:04 crc kubenswrapper[4632]: I0313 11:04:04.748845 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556664-vgmtg" event={"ID":"6ecca46c-1e06-43be-bacc-eae4a1a474b7","Type":"ContainerStarted","Data":"280361caa31f92ed27e2f7c50d8a879ce6f0bb804fc4144396a778455ffd2cf2"}
Mar 13 11:04:05 crc kubenswrapper[4632]: I0313 11:04:05.746012 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=<
Mar 13 11:04:05 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 11:04:05 crc kubenswrapper[4632]: >
Mar 13 11:04:06 crc kubenswrapper[4632]: I0313 11:04:06.773565 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556664-vgmtg" event={"ID":"6ecca46c-1e06-43be-bacc-eae4a1a474b7","Type":"ContainerStarted","Data":"5e0a7ac81434eac7eff8520645fc1fc30caa50af82d06bce9d4415863d0b9aa2"}
Mar 13 11:04:06 crc kubenswrapper[4632]: I0313 11:04:06.874377 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556664-vgmtg" podStartSLOduration=5.884281338 podStartE2EDuration="6.872581259s" podCreationTimestamp="2026-03-13 11:04:00 +0000 UTC" firstStartedPulling="2026-03-13 11:04:04.100679674 +0000 UTC m=+3618.123209807" lastFinishedPulling="2026-03-13 11:04:05.088979595 +0000 UTC m=+3619.111509728" observedRunningTime="2026-03-13 11:04:06.865171526 +0000 UTC m=+3620.887701699" watchObservedRunningTime="2026-03-13 11:04:06.872581259 +0000 UTC m=+3620.895111412"
Mar 13 11:04:08 crc kubenswrapper[4632]: I0313 11:04:08.791636 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556664-vgmtg" event={"ID":"6ecca46c-1e06-43be-bacc-eae4a1a474b7","Type":"ContainerDied","Data":"5e0a7ac81434eac7eff8520645fc1fc30caa50af82d06bce9d4415863d0b9aa2"}
Mar 13 11:04:08 crc kubenswrapper[4632]: I0313 11:04:08.793756 4632 generic.go:334] "Generic (PLEG): container finished" podID="6ecca46c-1e06-43be-bacc-eae4a1a474b7" containerID="5e0a7ac81434eac7eff8520645fc1fc30caa50af82d06bce9d4415863d0b9aa2" exitCode=0
Mar 13 11:04:10 crc kubenswrapper[4632]: I0313 11:04:10.954649 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556664-vgmtg"
Mar 13 11:04:11 crc kubenswrapper[4632]: I0313 11:04:11.068830 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqslx\" (UniqueName: \"kubernetes.io/projected/6ecca46c-1e06-43be-bacc-eae4a1a474b7-kube-api-access-xqslx\") pod \"6ecca46c-1e06-43be-bacc-eae4a1a474b7\" (UID: \"6ecca46c-1e06-43be-bacc-eae4a1a474b7\") "
Mar 13 11:04:11 crc kubenswrapper[4632]: I0313 11:04:11.127599 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ecca46c-1e06-43be-bacc-eae4a1a474b7-kube-api-access-xqslx" (OuterVolumeSpecName: "kube-api-access-xqslx") pod "6ecca46c-1e06-43be-bacc-eae4a1a474b7" (UID: "6ecca46c-1e06-43be-bacc-eae4a1a474b7"). InnerVolumeSpecName "kube-api-access-xqslx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:04:11 crc kubenswrapper[4632]: I0313 11:04:11.172114 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqslx\" (UniqueName: \"kubernetes.io/projected/6ecca46c-1e06-43be-bacc-eae4a1a474b7-kube-api-access-xqslx\") on node \"crc\" DevicePath \"\""
Mar 13 11:04:11 crc kubenswrapper[4632]: I0313 11:04:11.829357 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556664-vgmtg" event={"ID":"6ecca46c-1e06-43be-bacc-eae4a1a474b7","Type":"ContainerDied","Data":"280361caa31f92ed27e2f7c50d8a879ce6f0bb804fc4144396a778455ffd2cf2"}
Mar 13 11:04:11 crc kubenswrapper[4632]: I0313 11:04:11.829589 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556664-vgmtg"
Mar 13 11:04:11 crc kubenswrapper[4632]: I0313 11:04:11.830324 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="280361caa31f92ed27e2f7c50d8a879ce6f0bb804fc4144396a778455ffd2cf2"
Mar 13 11:04:12 crc kubenswrapper[4632]: I0313 11:04:12.109528 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556658-mbpfd"]
Mar 13 11:04:12 crc kubenswrapper[4632]: I0313 11:04:12.118748 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556658-mbpfd"]
Mar 13 11:04:14 crc kubenswrapper[4632]: I0313 11:04:14.056894 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4db08ac3-f768-407d-a321-ed9032c5c015" path="/var/lib/kubelet/pods/4db08ac3-f768-407d-a321-ed9032c5c015/volumes"
Mar 13 11:04:15 crc kubenswrapper[4632]: I0313 11:04:15.721319 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=<
Mar 13 11:04:15 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 11:04:15 crc kubenswrapper[4632]: >
Mar 13 11:04:19 crc kubenswrapper[4632]: I0313 11:04:19.042450 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6cbqr"]
Mar 13 11:04:19 crc kubenswrapper[4632]: E0313 11:04:19.053066 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ecca46c-1e06-43be-bacc-eae4a1a474b7" containerName="oc"
Mar 13 11:04:19 crc kubenswrapper[4632]: I0313 11:04:19.053128 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ecca46c-1e06-43be-bacc-eae4a1a474b7" containerName="oc"
Mar 13 11:04:19 crc kubenswrapper[4632]: I0313 11:04:19.055485 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ecca46c-1e06-43be-bacc-eae4a1a474b7" containerName="oc"
Mar 13 11:04:19 crc kubenswrapper[4632]: I0313 11:04:19.063262 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6cbqr"
Mar 13 11:04:19 crc kubenswrapper[4632]: I0313 11:04:19.175647 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6cbqr"]
Mar 13 11:04:19 crc kubenswrapper[4632]: I0313 11:04:19.239321 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-utilities\") pod \"community-operators-6cbqr\" (UID: \"1c07de1e-84e2-4dae-a3c3-ced19801c8c2\") " pod="openshift-marketplace/community-operators-6cbqr"
Mar 13 11:04:19 crc kubenswrapper[4632]: I0313 11:04:19.239641 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4gb7\" (UniqueName: \"kubernetes.io/projected/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-kube-api-access-d4gb7\") pod \"community-operators-6cbqr\" (UID: \"1c07de1e-84e2-4dae-a3c3-ced19801c8c2\") " pod="openshift-marketplace/community-operators-6cbqr"
Mar 13 11:04:19 crc kubenswrapper[4632]: I0313 11:04:19.239717 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-catalog-content\") pod \"community-operators-6cbqr\" (UID: \"1c07de1e-84e2-4dae-a3c3-ced19801c8c2\") " pod="openshift-marketplace/community-operators-6cbqr"
Mar 13 11:04:19 crc kubenswrapper[4632]: I0313 11:04:19.341910 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4gb7\" (UniqueName: \"kubernetes.io/projected/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-kube-api-access-d4gb7\") pod \"community-operators-6cbqr\" (UID: \"1c07de1e-84e2-4dae-a3c3-ced19801c8c2\") " pod="openshift-marketplace/community-operators-6cbqr"
Mar 13 11:04:19 crc kubenswrapper[4632]: I0313 11:04:19.342033 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-catalog-content\") pod \"community-operators-6cbqr\" (UID: \"1c07de1e-84e2-4dae-a3c3-ced19801c8c2\") " pod="openshift-marketplace/community-operators-6cbqr"
Mar 13 11:04:19 crc kubenswrapper[4632]: I0313 11:04:19.342109 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-utilities\") pod \"community-operators-6cbqr\" (UID: \"1c07de1e-84e2-4dae-a3c3-ced19801c8c2\") " pod="openshift-marketplace/community-operators-6cbqr"
Mar 13 11:04:19 crc kubenswrapper[4632]: I0313 11:04:19.349894 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-catalog-content\") pod \"community-operators-6cbqr\" (UID: \"1c07de1e-84e2-4dae-a3c3-ced19801c8c2\") " pod="openshift-marketplace/community-operators-6cbqr"
Mar 13 11:04:19 crc kubenswrapper[4632]: I0313 11:04:19.351014 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-utilities\") pod \"community-operators-6cbqr\" (UID: \"1c07de1e-84e2-4dae-a3c3-ced19801c8c2\") " pod="openshift-marketplace/community-operators-6cbqr" Mar 13 11:04:19 crc kubenswrapper[4632]: I0313 11:04:19.397641 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4gb7\" (UniqueName: \"kubernetes.io/projected/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-kube-api-access-d4gb7\") pod \"community-operators-6cbqr\" (UID: \"1c07de1e-84e2-4dae-a3c3-ced19801c8c2\") " pod="openshift-marketplace/community-operators-6cbqr" Mar 13 11:04:19 crc kubenswrapper[4632]: I0313 11:04:19.417848 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6cbqr" Mar 13 11:04:21 crc kubenswrapper[4632]: I0313 11:04:21.082388 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6cbqr"] Mar 13 11:04:21 crc kubenswrapper[4632]: I0313 11:04:21.929011 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cbqr" event={"ID":"1c07de1e-84e2-4dae-a3c3-ced19801c8c2","Type":"ContainerDied","Data":"c772f3e31ea57b90bb99ca7ef746eba3e104a41a227d8c675887f2261f06ab48"} Mar 13 11:04:21 crc kubenswrapper[4632]: I0313 11:04:21.931027 4632 generic.go:334] "Generic (PLEG): container finished" podID="1c07de1e-84e2-4dae-a3c3-ced19801c8c2" containerID="c772f3e31ea57b90bb99ca7ef746eba3e104a41a227d8c675887f2261f06ab48" exitCode=0 Mar 13 11:04:21 crc kubenswrapper[4632]: I0313 11:04:21.931140 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cbqr" event={"ID":"1c07de1e-84e2-4dae-a3c3-ced19801c8c2","Type":"ContainerStarted","Data":"2dcd2355772a6d6f5fad347e41d6b1e9baff67bf2bdf1773631d76e760c8ca38"} Mar 13 11:04:22 crc kubenswrapper[4632]: I0313 11:04:22.951667 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cbqr" event={"ID":"1c07de1e-84e2-4dae-a3c3-ced19801c8c2","Type":"ContainerStarted","Data":"c4e602b48052cce5414a1759fd3d99f56ebde469321edc3b351de56e308a589e"} Mar 13 11:04:25 crc kubenswrapper[4632]: I0313 11:04:25.751530 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=< Mar 13 11:04:25 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:04:25 crc kubenswrapper[4632]: > Mar 13 11:04:25 crc kubenswrapper[4632]: I0313 11:04:25.982310 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cbqr" event={"ID":"1c07de1e-84e2-4dae-a3c3-ced19801c8c2","Type":"ContainerDied","Data":"c4e602b48052cce5414a1759fd3d99f56ebde469321edc3b351de56e308a589e"} Mar 13 11:04:25 crc kubenswrapper[4632]: I0313 11:04:25.982169 4632 generic.go:334] "Generic (PLEG): container finished" podID="1c07de1e-84e2-4dae-a3c3-ced19801c8c2" containerID="c4e602b48052cce5414a1759fd3d99f56ebde469321edc3b351de56e308a589e" exitCode=0 Mar 13 11:04:26 crc kubenswrapper[4632]: I0313 11:04:26.995839 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cbqr" event={"ID":"1c07de1e-84e2-4dae-a3c3-ced19801c8c2","Type":"ContainerStarted","Data":"e7435d3cb1416970cf6b4162802419aa1bcf01c76e855132a1393d7b353e8c78"} Mar 13 11:04:27 crc 
kubenswrapper[4632]: I0313 11:04:27.038204 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6cbqr" podStartSLOduration=4.357588645 podStartE2EDuration="9.036865848s" podCreationTimestamp="2026-03-13 11:04:18 +0000 UTC" firstStartedPulling="2026-03-13 11:04:21.933619966 +0000 UTC m=+3635.956150099" lastFinishedPulling="2026-03-13 11:04:26.612897169 +0000 UTC m=+3640.635427302" observedRunningTime="2026-03-13 11:04:27.035536915 +0000 UTC m=+3641.058067048" watchObservedRunningTime="2026-03-13 11:04:27.036865848 +0000 UTC m=+3641.059395981" Mar 13 11:04:29 crc kubenswrapper[4632]: I0313 11:04:29.418558 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6cbqr" Mar 13 11:04:29 crc kubenswrapper[4632]: I0313 11:04:29.419068 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6cbqr" Mar 13 11:04:30 crc kubenswrapper[4632]: I0313 11:04:30.481955 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6cbqr" podUID="1c07de1e-84e2-4dae-a3c3-ced19801c8c2" containerName="registry-server" probeResult="failure" output=< Mar 13 11:04:30 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:04:30 crc kubenswrapper[4632]: > Mar 13 11:04:35 crc kubenswrapper[4632]: I0313 11:04:35.724988 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=< Mar 13 11:04:35 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:04:35 crc kubenswrapper[4632]: > Mar 13 11:04:40 crc kubenswrapper[4632]: I0313 11:04:40.461614 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:04:40 crc kubenswrapper[4632]: I0313 11:04:40.464249 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:04:40 crc kubenswrapper[4632]: I0313 11:04:40.547538 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6cbqr" podUID="1c07de1e-84e2-4dae-a3c3-ced19801c8c2" containerName="registry-server" probeResult="failure" output=< Mar 13 11:04:40 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:04:40 crc kubenswrapper[4632]: > Mar 13 11:04:45 crc kubenswrapper[4632]: I0313 11:04:45.726097 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=< Mar 13 11:04:45 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:04:45 crc kubenswrapper[4632]: > Mar 13 11:04:45 crc kubenswrapper[4632]: I0313 11:04:45.731461 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-mm6fq" Mar 13 11:04:45 crc kubenswrapper[4632]: I0313 11:04:45.735651 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"67b531d65834622b374c34e759c46150ba93cade0961705aa2b576c0c27e19d2"} pod="openshift-marketplace/redhat-operators-mm6fq" containerMessage="Container registry-server failed startup probe, will be restarted" Mar 13 11:04:45 crc kubenswrapper[4632]: I0313 11:04:45.736727 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" containerID="cri-o://67b531d65834622b374c34e759c46150ba93cade0961705aa2b576c0c27e19d2" gracePeriod=30 Mar 13 11:04:50 crc kubenswrapper[4632]: I0313 11:04:50.251582 4632 generic.go:334] "Generic (PLEG): container finished" podID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerID="67b531d65834622b374c34e759c46150ba93cade0961705aa2b576c0c27e19d2" exitCode=0 Mar 13 11:04:50 crc kubenswrapper[4632]: I0313 11:04:50.252250 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm6fq" event={"ID":"269ac923-f4f9-43f2-934f-8b0f26f6c4af","Type":"ContainerDied","Data":"67b531d65834622b374c34e759c46150ba93cade0961705aa2b576c0c27e19d2"} Mar 13 11:04:50 crc kubenswrapper[4632]: I0313 11:04:50.535247 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6cbqr" podUID="1c07de1e-84e2-4dae-a3c3-ced19801c8c2" containerName="registry-server" probeResult="failure" output=< Mar 13 11:04:50 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:04:50 crc kubenswrapper[4632]: > Mar 13 11:04:51 crc kubenswrapper[4632]: I0313 11:04:51.263255 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm6fq" event={"ID":"269ac923-f4f9-43f2-934f-8b0f26f6c4af","Type":"ContainerStarted","Data":"0bbbe65ea71f36a37f33d902708fe700b15c322c14c94c121a3ca523a54d026b"} Mar 13 11:04:54 crc kubenswrapper[4632]: I0313 11:04:54.692074 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mm6fq" Mar 13 11:04:54 crc kubenswrapper[4632]: I0313 11:04:54.697461 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mm6fq" Mar 13 11:04:55 crc kubenswrapper[4632]: I0313 11:04:55.737331 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=< Mar 13 11:04:55 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:04:55 crc kubenswrapper[4632]: > Mar 13 11:04:59 crc kubenswrapper[4632]: I0313 11:04:59.636109 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6cbqr" Mar 13 11:04:59 crc kubenswrapper[4632]: I0313 11:04:59.761512 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6cbqr" Mar 13 11:05:00 crc kubenswrapper[4632]: I0313 11:05:00.766639 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6cbqr"] Mar 13 11:05:01 crc kubenswrapper[4632]: I0313 11:05:01.383657 4632 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6cbqr" podUID="1c07de1e-84e2-4dae-a3c3-ced19801c8c2" containerName="registry-server" containerID="cri-o://e7435d3cb1416970cf6b4162802419aa1bcf01c76e855132a1393d7b353e8c78" gracePeriod=2 Mar 13 11:05:02 crc kubenswrapper[4632]: I0313 11:05:02.401814 4632 generic.go:334] "Generic (PLEG): container finished" podID="1c07de1e-84e2-4dae-a3c3-ced19801c8c2" containerID="e7435d3cb1416970cf6b4162802419aa1bcf01c76e855132a1393d7b353e8c78" exitCode=0 Mar 13 11:05:02 crc kubenswrapper[4632]: I0313 11:05:02.402472 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cbqr" event={"ID":"1c07de1e-84e2-4dae-a3c3-ced19801c8c2","Type":"ContainerDied","Data":"e7435d3cb1416970cf6b4162802419aa1bcf01c76e855132a1393d7b353e8c78"} Mar 13 11:05:03 crc kubenswrapper[4632]: I0313 11:05:03.778604 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:03 crc kubenswrapper[4632]: I0313 11:05:03.778603 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:04 crc kubenswrapper[4632]: I0313 11:05:04.452316 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" podUID="712b2002-4fce-4983-926a-99a4b2dc7a8c" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:04 crc kubenswrapper[4632]: I0313 11:05:04.452322 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" podUID="712b2002-4fce-4983-926a-99a4b2dc7a8c" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:04 crc kubenswrapper[4632]: I0313 11:05:04.802953 4632 patch_prober.go:28] interesting pod/console-5678554f8b-n7dcv container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:04 crc kubenswrapper[4632]: I0313 11:05:04.820893 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5678554f8b-n7dcv" podUID="a59bb7d3-da4a-4275-9dcb-b851215a9cd0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.45:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:05 crc kubenswrapper[4632]: I0313 11:05:05.221170 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:05 crc kubenswrapper[4632]: I0313 11:05:05.320109 4632 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" podUID="2d221857-ee77-4165-a351-ecd5fc424970" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:05 crc kubenswrapper[4632]: I0313 11:05:05.489085 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:05 crc kubenswrapper[4632]: I0313 11:05:05.489442 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:05 crc kubenswrapper[4632]: I0313 11:05:05.489096 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:05 crc kubenswrapper[4632]: I0313 11:05:05.489516 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:05 crc kubenswrapper[4632]: I0313 11:05:05.895756 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:05 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:05:05 crc kubenswrapper[4632]: > Mar 13 11:05:06 crc kubenswrapper[4632]: I0313 11:05:06.203082 4632 patch_prober.go:28] interesting pod/controller-manager-7469657588-kpf64 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:06 crc kubenswrapper[4632]: I0313 11:05:06.203140 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" podUID="a8ff14f9-e25c-4839-acab-a622f6f70f88" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:06 crc kubenswrapper[4632]: I0313 11:05:06.204332 4632 patch_prober.go:28] interesting pod/controller-manager-7469657588-kpf64 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Mar 13 11:05:06 crc kubenswrapper[4632]: I0313 11:05:06.204371 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" podUID="a8ff14f9-e25c-4839-acab-a622f6f70f88" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:07 crc kubenswrapper[4632]: I0313 11:05:07.191046 4632 patch_prober.go:28] interesting pod/console-operator-58897d9998-sbtn5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:07 crc kubenswrapper[4632]: I0313 11:05:07.197138 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:07 crc kubenswrapper[4632]: I0313 11:05:07.204101 4632 patch_prober.go:28] interesting pod/console-operator-58897d9998-sbtn5 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:07 crc kubenswrapper[4632]: I0313 11:05:07.204185 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:07 crc kubenswrapper[4632]: I0313 11:05:07.310158 4632 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-svhr5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:07 crc kubenswrapper[4632]: I0313 11:05:07.310248 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" podUID="353e9ca9-cb3b-4c6e-b1ca-446611a12dca" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:07 crc kubenswrapper[4632]: I0313 11:05:07.564200 4632 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zgxcd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:07 crc kubenswrapper[4632]: I0313 11:05:07.564276 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podUID="49c520f1-fb05-48ca-8435-1985ce668451" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:07 crc kubenswrapper[4632]: I0313 11:05:07.564220 4632 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zgxcd container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:07 crc kubenswrapper[4632]: I0313 11:05:07.564341 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podUID="49c520f1-fb05-48ca-8435-1985ce668451" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:09 crc kubenswrapper[4632]: I0313 11:05:09.408182 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm" podUID="9e1d6ac6-c4ee-4381-86b8-c337f8c2d6a5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:09 crc kubenswrapper[4632]: E0313 11:05:09.432307 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e7435d3cb1416970cf6b4162802419aa1bcf01c76e855132a1393d7b353e8c78 is running failed: container process not found" containerID="e7435d3cb1416970cf6b4162802419aa1bcf01c76e855132a1393d7b353e8c78" cmd=["grpc_health_probe","-addr=:50051"] Mar 13 11:05:09 crc kubenswrapper[4632]: E0313 11:05:09.433498 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e7435d3cb1416970cf6b4162802419aa1bcf01c76e855132a1393d7b353e8c78 is running failed: container process not found" containerID="e7435d3cb1416970cf6b4162802419aa1bcf01c76e855132a1393d7b353e8c78" cmd=["grpc_health_probe","-addr=:50051"] Mar 13 11:05:09 crc kubenswrapper[4632]: E0313 11:05:09.434212 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e7435d3cb1416970cf6b4162802419aa1bcf01c76e855132a1393d7b353e8c78 is running failed: container process not found" containerID="e7435d3cb1416970cf6b4162802419aa1bcf01c76e855132a1393d7b353e8c78" cmd=["grpc_health_probe","-addr=:50051"] Mar 13 11:05:09 crc kubenswrapper[4632]: E0313 11:05:09.434264 4632 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e7435d3cb1416970cf6b4162802419aa1bcf01c76e855132a1393d7b353e8c78 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-6cbqr" podUID="1c07de1e-84e2-4dae-a3c3-ced19801c8c2" containerName="registry-server" Mar 13 11:05:10 crc kubenswrapper[4632]: I0313 11:05:10.476471 4632 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get 
\"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:10 crc kubenswrapper[4632]: I0313 11:05:10.487172 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:05:10 crc kubenswrapper[4632]: I0313 11:05:10.512984 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:05:10 crc kubenswrapper[4632]: I0313 11:05:10.513207 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:11 crc kubenswrapper[4632]: I0313 11:05:11.333366 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-2jqnk" podUID="7de02b7f-4e1c-4ba1-9659-c864e9080092" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:11 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:05:11 crc kubenswrapper[4632]: > Mar 13 11:05:11 crc kubenswrapper[4632]: I0313 11:05:11.333647 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-2jqnk" podUID="7de02b7f-4e1c-4ba1-9659-c864e9080092" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:11 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:05:11 crc kubenswrapper[4632]: > Mar 13 11:05:11 crc kubenswrapper[4632]: I0313 11:05:11.460117 4632 patch_prober.go:28] interesting pod/oauth-openshift-75bb75cfd7-8sh2x container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:11 crc kubenswrapper[4632]: I0313 11:05:11.460138 4632 patch_prober.go:28] interesting pod/oauth-openshift-75bb75cfd7-8sh2x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:11 crc kubenswrapper[4632]: I0313 11:05:11.460182 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" podUID="48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:11 crc kubenswrapper[4632]: I0313 11:05:11.460197 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" 
podUID="48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:12 crc kubenswrapper[4632]: I0313 11:05:12.181078 4632 trace.go:236] Trace[978839734]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-sbzcm" (13-Mar-2026 11:05:10.654) (total time: 1498ms): Mar 13 11:05:12 crc kubenswrapper[4632]: Trace[978839734]: [1.498091103s] [1.498091103s] END Mar 13 11:05:12 crc kubenswrapper[4632]: I0313 11:05:12.409056 4632 scope.go:117] "RemoveContainer" containerID="6a752c085ec4dd2121b36385f753ab45221d95dd428ca910155d9e3c585e4dbc" Mar 13 11:05:13 crc kubenswrapper[4632]: I0313 11:05:13.878001 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:13 crc kubenswrapper[4632]: I0313 11:05:13.877998 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:14 crc kubenswrapper[4632]: I0313 11:05:14.471720 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" podUID="712b2002-4fce-4983-926a-99a4b2dc7a8c" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:14 crc kubenswrapper[4632]: I0313 11:05:14.471738 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" podUID="712b2002-4fce-4983-926a-99a4b2dc7a8c" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:14 crc kubenswrapper[4632]: I0313 11:05:14.693478 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh" podUID="1542a9c8-92f6-4bc9-8231-829f649b0b8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.67:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:14 crc kubenswrapper[4632]: I0313 11:05:14.693496 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh" podUID="1542a9c8-92f6-4bc9-8231-829f649b0b8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.67:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:15 crc kubenswrapper[4632]: I0313 11:05:15.073695 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cbqr" event={"ID":"1c07de1e-84e2-4dae-a3c3-ced19801c8c2","Type":"ContainerDied","Data":"2dcd2355772a6d6f5fad347e41d6b1e9baff67bf2bdf1773631d76e760c8ca38"} Mar 13 11:05:15 crc kubenswrapper[4632]: I0313 11:05:15.074420 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dcd2355772a6d6f5fad347e41d6b1e9baff67bf2bdf1773631d76e760c8ca38" Mar 13 11:05:15 crc kubenswrapper[4632]: I0313 
11:05:15.073672 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6cbqr" Mar 13 11:05:15 crc kubenswrapper[4632]: I0313 11:05:15.114895 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-utilities\") pod \"1c07de1e-84e2-4dae-a3c3-ced19801c8c2\" (UID: \"1c07de1e-84e2-4dae-a3c3-ced19801c8c2\") " Mar 13 11:05:15 crc kubenswrapper[4632]: I0313 11:05:15.115153 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4gb7\" (UniqueName: \"kubernetes.io/projected/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-kube-api-access-d4gb7\") pod \"1c07de1e-84e2-4dae-a3c3-ced19801c8c2\" (UID: \"1c07de1e-84e2-4dae-a3c3-ced19801c8c2\") " Mar 13 11:05:15 crc kubenswrapper[4632]: I0313 11:05:15.115331 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-catalog-content\") pod \"1c07de1e-84e2-4dae-a3c3-ced19801c8c2\" (UID: \"1c07de1e-84e2-4dae-a3c3-ced19801c8c2\") " Mar 13 11:05:15 crc kubenswrapper[4632]: I0313 11:05:15.149735 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-utilities" (OuterVolumeSpecName: "utilities") pod "1c07de1e-84e2-4dae-a3c3-ced19801c8c2" (UID: "1c07de1e-84e2-4dae-a3c3-ced19801c8c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:05:15 crc kubenswrapper[4632]: I0313 11:05:15.161537 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-kube-api-access-d4gb7" (OuterVolumeSpecName: "kube-api-access-d4gb7") pod "1c07de1e-84e2-4dae-a3c3-ced19801c8c2" (UID: "1c07de1e-84e2-4dae-a3c3-ced19801c8c2"). InnerVolumeSpecName "kube-api-access-d4gb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:05:15 crc kubenswrapper[4632]: I0313 11:05:15.218160 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:05:15 crc kubenswrapper[4632]: I0313 11:05:15.218200 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4gb7\" (UniqueName: \"kubernetes.io/projected/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-kube-api-access-d4gb7\") on node \"crc\" DevicePath \"\"" Mar 13 11:05:15 crc kubenswrapper[4632]: I0313 11:05:15.246152 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:15 crc kubenswrapper[4632]: I0313 11:05:15.265511 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c07de1e-84e2-4dae-a3c3-ced19801c8c2" (UID: "1c07de1e-84e2-4dae-a3c3-ced19801c8c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:05:15 crc kubenswrapper[4632]: I0313 11:05:15.335831 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c07de1e-84e2-4dae-a3c3-ced19801c8c2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:05:15 crc kubenswrapper[4632]: I0313 11:05:15.790379 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:15 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:05:15 crc kubenswrapper[4632]: > Mar 13 11:05:16 crc kubenswrapper[4632]: I0313 11:05:16.173664 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6cbqr" Mar 13 11:05:16 crc kubenswrapper[4632]: I0313 11:05:16.876999 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6cbqr"] Mar 13 11:05:17 crc kubenswrapper[4632]: I0313 11:05:17.049338 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6cbqr"] Mar 13 11:05:17 crc kubenswrapper[4632]: I0313 11:05:17.305209 4632 patch_prober.go:28] interesting pod/console-operator-58897d9998-sbtn5 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:17 crc kubenswrapper[4632]: I0313 11:05:17.305299 4632 patch_prober.go:28] interesting pod/console-operator-58897d9998-sbtn5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:17 crc kubenswrapper[4632]: I0313 11:05:17.337158 4632 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-svhr5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:17 crc kubenswrapper[4632]: I0313 11:05:17.350807 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" podUID="353e9ca9-cb3b-4c6e-b1ca-446611a12dca" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:17 crc kubenswrapper[4632]: I0313 11:05:17.350720 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:17 crc kubenswrapper[4632]: I0313 11:05:17.350871 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" 
probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:17 crc kubenswrapper[4632]: I0313 11:05:17.566297 4632 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zgxcd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:17 crc kubenswrapper[4632]: I0313 11:05:17.566310 4632 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zgxcd container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:17 crc kubenswrapper[4632]: I0313 11:05:17.566721 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podUID="49c520f1-fb05-48ca-8435-1985ce668451" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:17 crc kubenswrapper[4632]: I0313 11:05:17.566666 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podUID="49c520f1-fb05-48ca-8435-1985ce668451" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:17 crc kubenswrapper[4632]: I0313 11:05:17.940257 4632 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-xfvsc container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:17 crc kubenswrapper[4632]: I0313 11:05:17.940603 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc" podUID="4e100e6e-7259-4262-be47-9c2b5be7a53a" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:17 crc kubenswrapper[4632]: I0313 11:05:17.940261 4632 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-xfvsc container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:17 crc kubenswrapper[4632]: I0313 11:05:17.940710 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc" podUID="4e100e6e-7259-4262-be47-9c2b5be7a53a" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:18 crc 
kubenswrapper[4632]: I0313 11:05:18.117450 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c07de1e-84e2-4dae-a3c3-ced19801c8c2" path="/var/lib/kubelet/pods/1c07de1e-84e2-4dae-a3c3-ced19801c8c2/volumes" Mar 13 11:05:18 crc kubenswrapper[4632]: I0313 11:05:18.647257 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" podUID="68c5eb80-4214-42c5-a08d-de6012969621" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:18 crc kubenswrapper[4632]: I0313 11:05:18.689224 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" podUID="68c5eb80-4214-42c5-a08d-de6012969621" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:18 crc kubenswrapper[4632]: I0313 11:05:18.785154 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c" podUID="ff6d4dcb-9eb8-44fc-951e-f2aecd77a639" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.62:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:18 crc kubenswrapper[4632]: I0313 11:05:18.785212 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c" podUID="ff6d4dcb-9eb8-44fc-951e-f2aecd77a639" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.62:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:19 crc kubenswrapper[4632]: I0313 11:05:19.643063 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" podUID="f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:19 crc kubenswrapper[4632]: I0313 11:05:19.643075 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" podUID="f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:19 crc kubenswrapper[4632]: I0313 11:05:19.938197 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" podUID="ee081327-4c3f-4c0a-9085-71085c6487b5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:19 crc kubenswrapper[4632]: I0313 11:05:19.938234 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" podUID="ee081327-4c3f-4c0a-9085-71085c6487b5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
Mar 13 11:05:20 crc kubenswrapper[4632]: I0313 11:05:20.431360 4632 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:20 crc kubenswrapper[4632]: I0313 11:05:20.480901 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:21 crc kubenswrapper[4632]: I0313 11:05:21.382220 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-2jqnk" podUID="7de02b7f-4e1c-4ba1-9659-c864e9080092" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:21 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:05:21 crc kubenswrapper[4632]: > Mar 13 11:05:21 crc kubenswrapper[4632]: I0313 11:05:21.388906 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-2jqnk" podUID="7de02b7f-4e1c-4ba1-9659-c864e9080092" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:21 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:05:21 crc kubenswrapper[4632]: > Mar 13 11:05:21 crc kubenswrapper[4632]: I0313 11:05:21.451181 4632 patch_prober.go:28] interesting pod/oauth-openshift-75bb75cfd7-8sh2x container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:21 crc kubenswrapper[4632]: I0313 11:05:21.451158 4632 patch_prober.go:28] interesting pod/oauth-openshift-75bb75cfd7-8sh2x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:21 crc kubenswrapper[4632]: I0313 11:05:21.451253 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" podUID="48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:21 crc kubenswrapper[4632]: I0313 11:05:21.451302 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" podUID="48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:25 crc kubenswrapper[4632]: I0313 11:05:25.248916 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 
11:05:25 crc kubenswrapper[4632]: I0313 11:05:25.309163 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-lvlxj" Mar 13 11:05:25 crc kubenswrapper[4632]: I0313 11:05:25.320349 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"59d96a138cf7adeb4d273db270ca9998a9b75447d7d6c92e875e751afba3f9b8"} pod="metallb-system/frr-k8s-lvlxj" containerMessage="Container frr failed liveness probe, will be restarted" Mar 13 11:05:25 crc kubenswrapper[4632]: I0313 11:05:25.321830 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="frr" containerID="cri-o://59d96a138cf7adeb4d273db270ca9998a9b75447d7d6c92e875e751afba3f9b8" gracePeriod=2 Mar 13 11:05:25 crc kubenswrapper[4632]: I0313 11:05:25.765604 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:25 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:05:25 crc kubenswrapper[4632]: > Mar 13 11:05:26 crc kubenswrapper[4632]: I0313 11:05:26.351969 4632 generic.go:334] "Generic (PLEG): container finished" podID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerID="59d96a138cf7adeb4d273db270ca9998a9b75447d7d6c92e875e751afba3f9b8" exitCode=143 Mar 13 11:05:26 crc kubenswrapper[4632]: I0313 11:05:26.351989 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvlxj" event={"ID":"85b58bb0-63f5-4c85-8759-ce28d2c7db58","Type":"ContainerDied","Data":"59d96a138cf7adeb4d273db270ca9998a9b75447d7d6c92e875e751afba3f9b8"} Mar 13 11:05:27 crc kubenswrapper[4632]: I0313 11:05:27.497435 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvlxj" event={"ID":"85b58bb0-63f5-4c85-8759-ce28d2c7db58","Type":"ContainerStarted","Data":"39cdf7120397ce6bc7ca222c9f810a74a54099eea3fa7559bd511f04a2d4ba8e"} Mar 13 11:05:29 crc kubenswrapper[4632]: I0313 11:05:29.212165 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-lvlxj" Mar 13 11:05:29 crc kubenswrapper[4632]: I0313 11:05:29.644812 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n" podUID="e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:30 crc kubenswrapper[4632]: I0313 11:05:30.221185 4632 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:30 crc kubenswrapper[4632]: I0313 11:05:30.441852 4632 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:30 crc kubenswrapper[4632]: I0313 11:05:30.450132 4632 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:30 crc kubenswrapper[4632]: I0313 11:05:30.450538 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 11:05:30 crc kubenswrapper[4632]: I0313 11:05:30.457212 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-scheduler" containerStatusID={"Type":"cri-o","ID":"ba32b01674f111328387620c028407e11c9d44b3154c3dce8f415f79e1db54c1"} pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" containerMessage="Container kube-scheduler failed liveness probe, will be restarted" Mar 13 11:05:30 crc kubenswrapper[4632]: I0313 11:05:30.459351 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" containerID="cri-o://ba32b01674f111328387620c028407e11c9d44b3154c3dce8f415f79e1db54c1" gracePeriod=30 Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.355228 4632 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" start-of-body= Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.355295 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.367818 4632 patch_prober.go:28] interesting pod/oauth-openshift-75bb75cfd7-8sh2x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.367832 4632 patch_prober.go:28] interesting pod/oauth-openshift-75bb75cfd7-8sh2x container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.367885 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" podUID="48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.367888 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" podUID="48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd" 
containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.375614 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.375654 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.378906 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"224837f104bcdbc6545d62209161e349a9d07cdcaf5c66e47c1de75b3af4b369"} pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.425584 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-2jqnk" podUID="7de02b7f-4e1c-4ba1-9659-c864e9080092" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:31 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:05:31 crc kubenswrapper[4632]: > Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.425902 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-2jqnk" Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.425855 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-2jqnk" podUID="7de02b7f-4e1c-4ba1-9659-c864e9080092" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:31 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:05:31 crc kubenswrapper[4632]: > Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.426056 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-2jqnk" Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.429874 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"956a137089f814594898450d78be9fb64aa26d046a610f28da0c9756520f90c8"} pod="openstack-operators/openstack-operator-index-2jqnk" containerMessage="Container registry-server failed liveness probe, will be restarted" Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.429918 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-2jqnk" podUID="7de02b7f-4e1c-4ba1-9659-c864e9080092" containerName="registry-server" containerID="cri-o://956a137089f814594898450d78be9fb64aa26d046a610f28da0c9756520f90c8" gracePeriod=30 Mar 13 11:05:31 crc kubenswrapper[4632]: E0313 11:05:31.461135 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="956a137089f814594898450d78be9fb64aa26d046a610f28da0c9756520f90c8" cmd=["grpc_health_probe","-addr=:50051"] Mar 13 11:05:31 crc kubenswrapper[4632]: E0313 11:05:31.470815 4632 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="956a137089f814594898450d78be9fb64aa26d046a610f28da0c9756520f90c8" cmd=["grpc_health_probe","-addr=:50051"] Mar 13 11:05:31 crc kubenswrapper[4632]: E0313 11:05:31.472764 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="956a137089f814594898450d78be9fb64aa26d046a610f28da0c9756520f90c8" cmd=["grpc_health_probe","-addr=:50051"] Mar 13 11:05:31 crc kubenswrapper[4632]: E0313 11:05:31.472826 4632 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack-operators/openstack-operator-index-2jqnk" podUID="7de02b7f-4e1c-4ba1-9659-c864e9080092" containerName="registry-server" Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.967184 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" podUID="a0d52d98-fe87-4bc8-890e-5c5efb1f30d6" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.81:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:31 crc kubenswrapper[4632]: I0313 11:05:31.967668 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" podUID="a0d52d98-fe87-4bc8-890e-5c5efb1f30d6" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.81:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:32 crc kubenswrapper[4632]: I0313 11:05:32.175228 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" podUID="3fdb377f-5a78-4687-82e1-50718514290d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:32 crc kubenswrapper[4632]: I0313 11:05:32.175244 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" podUID="3fdb377f-5a78-4687-82e1-50718514290d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:32 crc kubenswrapper[4632]: I0313 11:05:32.417223 4632 patch_prober.go:28] interesting pod/oauth-openshift-75bb75cfd7-8sh2x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:32 crc kubenswrapper[4632]: I0313 11:05:32.417315 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" podUID="48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:32 crc kubenswrapper[4632]: I0313 11:05:32.697054 4632 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"ba32b01674f111328387620c028407e11c9d44b3154c3dce8f415f79e1db54c1"} Mar 13 11:05:32 crc kubenswrapper[4632]: I0313 11:05:32.697102 4632 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="ba32b01674f111328387620c028407e11c9d44b3154c3dce8f415f79e1db54c1" exitCode=0 Mar 13 11:05:32 crc kubenswrapper[4632]: I0313 11:05:32.883959 4632 patch_prober.go:28] interesting pod/route-controller-manager-db6b8fbf8-pllt2 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:32 crc kubenswrapper[4632]: I0313 11:05:32.884033 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" podUID="2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:32 crc kubenswrapper[4632]: I0313 11:05:32.884390 4632 patch_prober.go:28] interesting pod/route-controller-manager-db6b8fbf8-pllt2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:32 crc kubenswrapper[4632]: I0313 11:05:32.884452 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" podUID="2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:33 crc kubenswrapper[4632]: I0313 11:05:33.720188 4632 generic.go:334] "Generic (PLEG): container finished" podID="7de02b7f-4e1c-4ba1-9659-c864e9080092" containerID="956a137089f814594898450d78be9fb64aa26d046a610f28da0c9756520f90c8" exitCode=0 Mar 13 11:05:33 crc kubenswrapper[4632]: I0313 11:05:33.720421 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2jqnk" event={"ID":"7de02b7f-4e1c-4ba1-9659-c864e9080092","Type":"ContainerDied","Data":"956a137089f814594898450d78be9fb64aa26d046a610f28da0c9756520f90c8"} Mar 13 11:05:33 crc kubenswrapper[4632]: I0313 11:05:33.756029 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:33 crc kubenswrapper[4632]: I0313 11:05:33.758629 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:34 crc kubenswrapper[4632]: I0313 11:05:34.738030 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a3aae7b1e1166a2d7565d7709da6786bb6123dbabf32f4ef389dd47642569340"} Mar 13 11:05:34 crc kubenswrapper[4632]: I0313 11:05:34.739444 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 11:05:34 crc kubenswrapper[4632]: I0313 11:05:34.756405 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="1761ca69-46fd-4375-af60-22b3e77c19a2" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:34 crc kubenswrapper[4632]: I0313 11:05:34.759799 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="1761ca69-46fd-4375-af60-22b3e77c19a2" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:34 crc kubenswrapper[4632]: I0313 11:05:34.812150 4632 patch_prober.go:28] interesting pod/console-5678554f8b-n7dcv container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:34 crc kubenswrapper[4632]: I0313 11:05:34.844788 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5678554f8b-n7dcv" podUID="a59bb7d3-da4a-4275-9dcb-b851215a9cd0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.45:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:35 crc kubenswrapper[4632]: I0313 11:05:35.303123 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:35 crc kubenswrapper[4632]: I0313 11:05:35.303695 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:35 crc kubenswrapper[4632]: I0313 11:05:35.303741 4632 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:35 crc kubenswrapper[4632]: I0313 11:05:35.321146 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" podUID="2d221857-ee77-4165-a351-ecd5fc424970" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:35 crc kubenswrapper[4632]: I0313 11:05:35.754807 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:35 crc kubenswrapper[4632]: timeout: 
failed to connect service ":50051" within 1s Mar 13 11:05:35 crc kubenswrapper[4632]: > Mar 13 11:05:35 crc kubenswrapper[4632]: I0313 11:05:35.777545 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2jqnk" event={"ID":"7de02b7f-4e1c-4ba1-9659-c864e9080092","Type":"ContainerStarted","Data":"fa1f92cf71967f1ece5dd0d584d63e1d80dc100a2f3b28938056aa32105d1b6c"} Mar 13 11:05:35 crc kubenswrapper[4632]: I0313 11:05:35.875192 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" podUID="b33bccd8-6f28-4ffe-9500-069a52aab5df" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.50:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:35 crc kubenswrapper[4632]: I0313 11:05:35.875205 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" podUID="b33bccd8-6f28-4ffe-9500-069a52aab5df" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.50:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:36 crc kubenswrapper[4632]: I0313 11:05:36.202284 4632 patch_prober.go:28] interesting pod/controller-manager-7469657588-kpf64 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:36 crc kubenswrapper[4632]: I0313 11:05:36.202379 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" podUID="a8ff14f9-e25c-4839-acab-a622f6f70f88" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:36 crc kubenswrapper[4632]: I0313 11:05:36.212760 4632 patch_prober.go:28] interesting pod/controller-manager-7469657588-kpf64 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:36 crc kubenswrapper[4632]: I0313 11:05:36.213099 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" podUID="a8ff14f9-e25c-4839-acab-a622f6f70f88" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:36 crc kubenswrapper[4632]: I0313 11:05:36.297511 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" podUID="09ddc697-7ac1-4896-b9e2-1ae6c59c6f47" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 11:05:36 crc kubenswrapper[4632]: I0313 11:05:36.811466 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-tztd9" podUID="8f51973a-596d-40dc-9b5b-b2c95a60ea0c" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:36 crc kubenswrapper[4632]: I0313 11:05:36.811491 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-tztd9" podUID="8f51973a-596d-40dc-9b5b-b2c95a60ea0c" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:36 crc kubenswrapper[4632]: I0313 11:05:36.928092 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" podUID="a0d52d98-fe87-4bc8-890e-5c5efb1f30d6" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.81:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.100962 4632 patch_prober.go:28] interesting pod/console-operator-58897d9998-sbtn5 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.101326 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.101080 4632 patch_prober.go:28] interesting pod/console-operator-58897d9998-sbtn5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.101386 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.223918 4632 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r5v5p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.224024 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" podUID="32f62e32-732b-4646-85f0-45b8ea6544a6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.265121 4632 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r5v5p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness 
probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.265187 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" podUID="32f62e32-732b-4646-85f0-45b8ea6544a6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.348107 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.348169 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.390176 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.390270 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.390178 4632 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-svhr5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.390384 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" podUID="353e9ca9-cb3b-4c6e-b1ca-446611a12dca" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.564204 4632 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zgxcd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.564269 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podUID="49c520f1-fb05-48ca-8435-1985ce668451" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.565414 4632 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zgxcd container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:37 crc kubenswrapper[4632]: I0313 11:05:37.565468 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podUID="49c520f1-fb05-48ca-8435-1985ce668451" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:38 crc kubenswrapper[4632]: I0313 11:05:38.695804 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" podUID="68c5eb80-4214-42c5-a08d-de6012969621" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:38 crc kubenswrapper[4632]: I0313 11:05:38.695807 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" podUID="68c5eb80-4214-42c5-a08d-de6012969621" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:38 crc kubenswrapper[4632]: I0313 11:05:38.758759 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 13 11:05:38 crc kubenswrapper[4632]: I0313 11:05:38.815336 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-2jqnk" Mar 13 11:05:38 crc kubenswrapper[4632]: I0313 11:05:38.815575 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-2jqnk" Mar 13 11:05:39 crc kubenswrapper[4632]: I0313 11:05:39.264009 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" podUID="f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:39 crc kubenswrapper[4632]: I0313 11:05:39.265540 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" podUID="f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:39 crc kubenswrapper[4632]: I0313 11:05:39.411347 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" podUID="d04e9aa6-f234-4ffa-81e2-1a2407addb77" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:39 crc kubenswrapper[4632]: I0313 11:05:39.411363 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" podUID="d04e9aa6-f234-4ffa-81e2-1a2407addb77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:39 crc kubenswrapper[4632]: I0313 11:05:39.487776 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm" podUID="9e1d6ac6-c4ee-4381-86b8-c337f8c2d6a5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:39 crc kubenswrapper[4632]: I0313 11:05:39.488191 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm" podUID="9e1d6ac6-c4ee-4381-86b8-c337f8c2d6a5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:39 crc kubenswrapper[4632]: I0313 11:05:39.543277 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" podUID="0a9d48f4-d68b-4ef9-826e-ed619c761405" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:39 crc kubenswrapper[4632]: I0313 11:05:39.543336 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" podUID="0a9d48f4-d68b-4ef9-826e-ed619c761405" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:40 crc kubenswrapper[4632]: I0313 11:05:40.181066 4632 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:40 crc kubenswrapper[4632]: I0313 11:05:40.376031 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 11:05:40 crc kubenswrapper[4632]: I0313 11:05:40.460951 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:05:40 crc kubenswrapper[4632]: I0313 11:05:40.462528 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 
11:05:40 crc kubenswrapper[4632]: I0313 11:05:40.462593 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 11:05:40 crc kubenswrapper[4632]: I0313 11:05:40.467180 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 11:05:40 crc kubenswrapper[4632]: I0313 11:05:40.468394 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" gracePeriod=600 Mar 13 11:05:41 crc kubenswrapper[4632]: E0313 11:05:41.342066 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:05:41 crc kubenswrapper[4632]: I0313 11:05:41.440661 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:41 crc kubenswrapper[4632]: I0313 11:05:41.441038 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:41 crc kubenswrapper[4632]: I0313 11:05:41.442460 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:41 crc kubenswrapper[4632]: I0313 11:05:41.442547 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:41 crc kubenswrapper[4632]: I0313 11:05:41.869793 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" 
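The "back-off 5m0s restarting failed container" error above is the kubelet's crash-loop back-off already at its cap: restart delays start small and double on each failure until they plateau. A small illustrative sketch of that schedule; the 10s initial delay, doubling factor, and 5m cap are assumptions about kubelet defaults, chosen only to match the "5m0s" reported in the message:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed schedule: 10s initial delay, doubling per failed restart,
        // capped at 5m (the value surfaced in the CrashLoopBackOff message).
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("restart attempt %d: wait %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }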
event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582"} Mar 13 11:05:41 crc kubenswrapper[4632]: I0313 11:05:41.870404 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" exitCode=0 Mar 13 11:05:41 crc kubenswrapper[4632]: I0313 11:05:41.874444 4632 scope.go:117] "RemoveContainer" containerID="06408f7526caaaeae759484ccf3ff85a146655a7d51ff7049c7be79b39fe96ba" Mar 13 11:05:41 crc kubenswrapper[4632]: I0313 11:05:41.874621 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:05:41 crc kubenswrapper[4632]: E0313 11:05:41.875040 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:05:41 crc kubenswrapper[4632]: I0313 11:05:41.891273 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-774lb" podUID="560629a7-9dec-4eb7-8c73-a8f097293daa" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:41 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:05:41 crc kubenswrapper[4632]: > Mar 13 11:05:41 crc kubenswrapper[4632]: I0313 11:05:41.891273 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-774lb" podUID="560629a7-9dec-4eb7-8c73-a8f097293daa" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:41 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:05:41 crc kubenswrapper[4632]: > Mar 13 11:05:42 crc kubenswrapper[4632]: I0313 11:05:42.290691 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" podUID="09ddc697-7ac1-4896-b9e2-1ae6c59c6f47" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 11:05:42 crc kubenswrapper[4632]: I0313 11:05:42.780625 4632 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-d9n25 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.70:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:42 crc kubenswrapper[4632]: I0313 11:05:42.781763 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" podUID="023be687-a773-401c-981b-e3d7136f53b6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.70:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:42 crc kubenswrapper[4632]: I0313 11:05:42.780797 4632 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-d9n25 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.70:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Mar 13 11:05:42 crc kubenswrapper[4632]: I0313 11:05:42.781866 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-d9n25" podUID="023be687-a773-401c-981b-e3d7136f53b6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.70:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:42 crc kubenswrapper[4632]: I0313 11:05:42.890011 4632 patch_prober.go:28] interesting pod/route-controller-manager-db6b8fbf8-pllt2 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:42 crc kubenswrapper[4632]: I0313 11:05:42.890005 4632 patch_prober.go:28] interesting pod/route-controller-manager-db6b8fbf8-pllt2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:42 crc kubenswrapper[4632]: I0313 11:05:42.890078 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" podUID="2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:42 crc kubenswrapper[4632]: I0313 11:05:42.890107 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" podUID="2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:43 crc kubenswrapper[4632]: I0313 11:05:43.100533 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack-operators/openstack-operator-index-2jqnk" podUID="7de02b7f-4e1c-4ba1-9659-c864e9080092" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:43 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:05:43 crc kubenswrapper[4632]: > Mar 13 11:05:43 crc kubenswrapper[4632]: I0313 11:05:43.757981 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:43 crc kubenswrapper[4632]: I0313 11:05:43.757981 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:43 crc kubenswrapper[4632]: I0313 11:05:43.760587 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 13 11:05:43 crc kubenswrapper[4632]: I0313 11:05:43.798352 
4632 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" podUID="09ddc697-7ac1-4896-b9e2-1ae6c59c6f47" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 11:05:44 crc kubenswrapper[4632]: I0313 11:05:44.475173 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" podUID="712b2002-4fce-4983-926a-99a4b2dc7a8c" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:44 crc kubenswrapper[4632]: I0313 11:05:44.475200 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" podUID="712b2002-4fce-4983-926a-99a4b2dc7a8c" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:44 crc kubenswrapper[4632]: I0313 11:05:44.558712 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:44 crc kubenswrapper[4632]: I0313 11:05:44.558782 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:44 crc kubenswrapper[4632]: I0313 11:05:44.558712 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:44 crc kubenswrapper[4632]: I0313 11:05:44.558828 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:44 crc kubenswrapper[4632]: I0313 11:05:44.713200 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh" podUID="1542a9c8-92f6-4bc9-8231-829f649b0b8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.67:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:44 crc kubenswrapper[4632]: I0313 11:05:44.714797 4632 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-gcngd container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.32:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 
11:05:44 crc kubenswrapper[4632]: I0313 11:05:44.714876 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gcngd" podUID="9bf11778-d854-4c97-acd1-ed4822ee5f47" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.32:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:44 crc kubenswrapper[4632]: I0313 11:05:44.756389 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="1761ca69-46fd-4375-af60-22b3e77c19a2" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:44 crc kubenswrapper[4632]: I0313 11:05:44.756460 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="1761ca69-46fd-4375-af60-22b3e77c19a2" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:44 crc kubenswrapper[4632]: I0313 11:05:44.758480 4632 patch_prober.go:28] interesting pod/console-5678554f8b-n7dcv container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:44 crc kubenswrapper[4632]: I0313 11:05:44.758516 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5678554f8b-n7dcv" podUID="a59bb7d3-da4a-4275-9dcb-b851215a9cd0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.45:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:45 crc kubenswrapper[4632]: I0313 11:05:45.303338 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:45 crc kubenswrapper[4632]: I0313 11:05:45.303858 4632 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:45 crc kubenswrapper[4632]: I0313 11:05:45.303969 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:45 crc kubenswrapper[4632]: I0313 11:05:45.345209 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" podUID="2d221857-ee77-4165-a351-ecd5fc424970" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:45 crc kubenswrapper[4632]: I0313 11:05:45.874235 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" podUID="b33bccd8-6f28-4ffe-9500-069a52aab5df" 
containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.50:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:45 crc kubenswrapper[4632]: I0313 11:05:45.874660 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" podUID="b33bccd8-6f28-4ffe-9500-069a52aab5df" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.50:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:45 crc kubenswrapper[4632]: I0313 11:05:45.899812 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:45 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:05:45 crc kubenswrapper[4632]: > Mar 13 11:05:46 crc kubenswrapper[4632]: I0313 11:05:46.202447 4632 patch_prober.go:28] interesting pod/controller-manager-7469657588-kpf64 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:46 crc kubenswrapper[4632]: I0313 11:05:46.202529 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" podUID="a8ff14f9-e25c-4839-acab-a622f6f70f88" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:46 crc kubenswrapper[4632]: I0313 11:05:46.203643 4632 patch_prober.go:28] interesting pod/controller-manager-7469657588-kpf64 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:46 crc kubenswrapper[4632]: I0313 11:05:46.203699 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" podUID="a8ff14f9-e25c-4839-acab-a622f6f70f88" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:46 crc kubenswrapper[4632]: I0313 11:05:46.861643 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-tztd9" podUID="8f51973a-596d-40dc-9b5b-b2c95a60ea0c" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:46 crc kubenswrapper[4632]: I0313 11:05:46.863412 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-tztd9" podUID="8f51973a-596d-40dc-9b5b-b2c95a60ea0c" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:46 crc kubenswrapper[4632]: I0313 11:05:46.926213 4632 
prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" podUID="a0d52d98-fe87-4bc8-890e-5c5efb1f30d6" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.81:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.182189 4632 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r5v5p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.182270 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" podUID="32f62e32-732b-4646-85f0-45b8ea6544a6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.182343 4632 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r5v5p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.182396 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" podUID="32f62e32-732b-4646-85f0-45b8ea6544a6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.188556 4632 patch_prober.go:28] interesting pod/console-operator-58897d9998-sbtn5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.188637 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.189271 4632 patch_prober.go:28] interesting pod/console-operator-58897d9998-sbtn5 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.189365 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.15:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.273573 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.273639 4632 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-svhr5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.273955 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" podUID="353e9ca9-cb3b-4c6e-b1ca-446611a12dca" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.273868 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.273731 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.274036 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.438201 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.438277 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.438154 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 
container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.438418 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.439616 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.439707 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.444157 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"e98f0e8253db82d7fc1c628a628a0d9ea91c85c3796f3abe0d968983b3e782e2"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.444605 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" containerID="cri-o://e98f0e8253db82d7fc1c628a628a0d9ea91c85c3796f3abe0d968983b3e782e2" gracePeriod=30 Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.564835 4632 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zgxcd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.564890 4632 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zgxcd container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.564991 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podUID="49c520f1-fb05-48ca-8435-1985ce668451" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.564908 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podUID="49c520f1-fb05-48ca-8435-1985ce668451" containerName="packageserver" 
probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.945094 4632 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tqbl9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.945439 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9" podUID="ebf1040d-57dd-47ef-b839-6f78a7c5c75f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.945093 4632 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-xfvsc container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.945511 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc" podUID="4e100e6e-7259-4262-be47-9c2b5be7a53a" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.945147 4632 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tqbl9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.945581 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9" podUID="ebf1040d-57dd-47ef-b839-6f78a7c5c75f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.945204 4632 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-xfvsc container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:47 crc kubenswrapper[4632]: I0313 11:05:47.945809 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc" podUID="4e100e6e-7259-4262-be47-9c2b5be7a53a" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 
11:05:48 crc kubenswrapper[4632]: I0313 11:05:48.482175 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:48 crc kubenswrapper[4632]: I0313 11:05:48.482251 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:48 crc kubenswrapper[4632]: I0313 11:05:48.644241 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" podUID="68c5eb80-4214-42c5-a08d-de6012969621" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:48 crc kubenswrapper[4632]: I0313 11:05:48.685161 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-f6c87" podUID="3f3a462e-4d89-45b3-8611-181aca5f8558" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:48 crc kubenswrapper[4632]: I0313 11:05:48.759476 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 13 11:05:48 crc kubenswrapper[4632]: I0313 11:05:48.759596 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Mar 13 11:05:48 crc kubenswrapper[4632]: I0313 11:05:48.761111 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Mar 13 11:05:48 crc kubenswrapper[4632]: I0313 11:05:48.775372 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"3da76186915cfbbbe688750a6110b1e64143d37e61c44ef62a9740eabb32c983"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Mar 13 11:05:48 crc kubenswrapper[4632]: I0313 11:05:48.775493 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="ceilometer-central-agent" containerID="cri-o://3da76186915cfbbbe688750a6110b1e64143d37e61c44ef62a9740eabb32c983" gracePeriod=30 Mar 13 11:05:48 crc kubenswrapper[4632]: I0313 11:05:48.776834 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c" podUID="ff6d4dcb-9eb8-44fc-951e-f2aecd77a639" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.62:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Mar 13 11:05:48 crc kubenswrapper[4632]: I0313 11:05:48.818194 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s" podUID="9a963f9c-ac58-4e21-abfa-fca1279a192d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.64:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:48 crc kubenswrapper[4632]: I0313 11:05:48.862202 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw" podUID="c8fc6f03-c43b-4ade-92a8-acc5537a4eeb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:48 crc kubenswrapper[4632]: I0313 11:05:48.903659 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-865685cd99-ls9jq" podUID="82fe7ef6-50a5-41d4-9419-787812e16bd6" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.57:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:49 crc kubenswrapper[4632]: I0313 11:05:49.008164 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c" podUID="9040a0e0-2a56-4331-ba50-b19ff05ef0c0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:49 crc kubenswrapper[4632]: I0313 11:05:49.226091 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" podUID="f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:49 crc kubenswrapper[4632]: I0313 11:05:49.274085 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-sxw8d" podUID="7b491335-6a73-46de-8098-f27ff4c6f795" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:49 crc kubenswrapper[4632]: I0313 11:05:49.370132 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" podUID="d04e9aa6-f234-4ffa-81e2-1a2407addb77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:49 crc kubenswrapper[4632]: I0313 11:05:49.427182 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-62gpm" podUID="9e1d6ac6-c4ee-4381-86b8-c337f8c2d6a5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:49 crc kubenswrapper[4632]: I0313 11:05:49.440836 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Readiness 
probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Mar 13 11:05:49 crc kubenswrapper[4632]: I0313 11:05:49.440906 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Mar 13 11:05:49 crc kubenswrapper[4632]: I0313 11:05:49.543190 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" podUID="0a9d48f4-d68b-4ef9-826e-ed619c761405" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:49 crc kubenswrapper[4632]: I0313 11:05:49.623258 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n" podUID="e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:49 crc kubenswrapper[4632]: I0313 11:05:49.760058 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-mpfnk" podUID="33445a2b-7fa8-4198-a60a-09caeb69b8ed" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 13 11:05:49 crc kubenswrapper[4632]: I0313 11:05:49.897128 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" podUID="ee081327-4c3f-4c0a-9085-71085c6487b5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:50 crc kubenswrapper[4632]: I0313 11:05:50.105300 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" podUID="7bab78c8-7dac-48dc-a426-ccd4ae00a428" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:50 crc kubenswrapper[4632]: I0313 11:05:50.222131 4632 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:51 crc kubenswrapper[4632]: I0313 11:05:51.365504 4632 patch_prober.go:28] interesting pod/oauth-openshift-75bb75cfd7-8sh2x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:51 crc kubenswrapper[4632]: I0313 11:05:51.365913 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" podUID="48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd" containerName="oauth-openshift" probeResult="failure" 
output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:51 crc kubenswrapper[4632]: I0313 11:05:51.757025 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-khrch" podUID="b6e936db-ec1c-447a-894d-49bd7c74c315" containerName="ovnkube-controller" probeResult="failure" output="command timed out" Mar 13 11:05:51 crc kubenswrapper[4632]: I0313 11:05:51.966200 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" podUID="a0d52d98-fe87-4bc8-890e-5c5efb1f30d6" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.81:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:51 crc kubenswrapper[4632]: I0313 11:05:51.966206 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" podUID="a0d52d98-fe87-4bc8-890e-5c5efb1f30d6" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.81:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:52 crc kubenswrapper[4632]: I0313 11:05:52.048788 4632 generic.go:334] "Generic (PLEG): container finished" podID="f660255f-8f78-4876-973d-db58f2ee7020" containerID="e98f0e8253db82d7fc1c628a628a0d9ea91c85c3796f3abe0d968983b3e782e2" exitCode=0 Mar 13 11:05:52 crc kubenswrapper[4632]: I0313 11:05:52.057331 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" event={"ID":"f660255f-8f78-4876-973d-db58f2ee7020","Type":"ContainerDied","Data":"e98f0e8253db82d7fc1c628a628a0d9ea91c85c3796f3abe0d968983b3e782e2"} Mar 13 11:05:52 crc kubenswrapper[4632]: I0313 11:05:52.175128 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" podUID="3fdb377f-5a78-4687-82e1-50718514290d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:52 crc kubenswrapper[4632]: I0313 11:05:52.175290 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" podUID="3fdb377f-5a78-4687-82e1-50718514290d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:52 crc kubenswrapper[4632]: I0313 11:05:52.438364 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Mar 13 11:05:52 crc kubenswrapper[4632]: I0313 11:05:52.439411 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Mar 13 11:05:52 crc kubenswrapper[4632]: I0313 11:05:52.884810 4632 
patch_prober.go:28] interesting pod/route-controller-manager-db6b8fbf8-pllt2 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:52 crc kubenswrapper[4632]: I0313 11:05:52.884885 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" podUID="2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:52 crc kubenswrapper[4632]: I0313 11:05:52.885167 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 11:05:52 crc kubenswrapper[4632]: I0313 11:05:52.885720 4632 patch_prober.go:28] interesting pod/route-controller-manager-db6b8fbf8-pllt2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:52 crc kubenswrapper[4632]: I0313 11:05:52.885840 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" podUID="2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:52 crc kubenswrapper[4632]: I0313 11:05:52.893544 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" podUID="09ddc697-7ac1-4896-b9e2-1ae6c59c6f47" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 11:05:52 crc kubenswrapper[4632]: I0313 11:05:52.903225 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"71808a85287e54b9fb184ad4c73a074a1ff3d6b35824bd6122d42af589681e05"} pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Mar 13 11:05:52 crc kubenswrapper[4632]: I0313 11:05:52.904167 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" podUID="2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3" containerName="route-controller-manager" containerID="cri-o://71808a85287e54b9fb184ad4c73a074a1ff3d6b35824bd6122d42af589681e05" gracePeriod=30 Mar 13 11:05:53 crc kubenswrapper[4632]: I0313 11:05:53.601396 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-774lb" podUID="560629a7-9dec-4eb7-8c73-a8f097293daa" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:53 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:05:53 crc kubenswrapper[4632]: > Mar 
13 11:05:53 crc kubenswrapper[4632]: I0313 11:05:53.613184 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-774lb" podUID="560629a7-9dec-4eb7-8c73-a8f097293daa" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:53 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:05:53 crc kubenswrapper[4632]: > Mar 13 11:05:53 crc kubenswrapper[4632]: I0313 11:05:53.688490 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="26ce3314-15f1-490c-83e5-a1c609212437" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.231:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:53 crc kubenswrapper[4632]: I0313 11:05:53.755596 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:53 crc kubenswrapper[4632]: I0313 11:05:53.755623 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:53 crc kubenswrapper[4632]: I0313 11:05:53.759387 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack-operators/openstack-operator-index-2jqnk" podUID="7de02b7f-4e1c-4ba1-9659-c864e9080092" containerName="registry-server" probeResult="failure" output="command timed out" Mar 13 11:05:53 crc kubenswrapper[4632]: I0313 11:05:53.803662 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 13 11:05:53 crc kubenswrapper[4632]: I0313 11:05:53.803714 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Mar 13 11:05:53 crc kubenswrapper[4632]: I0313 11:05:53.818522 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"cdf295326a62a01129c4f9b5741f57b3d80d103e3c5a6bf64f5cc1951034264f"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Mar 13 11:05:53 crc kubenswrapper[4632]: I0313 11:05:53.897439 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" podUID="09ddc697-7ac1-4896-b9e2-1ae6c59c6f47" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.155775 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" event={"ID":"f660255f-8f78-4876-973d-db58f2ee7020","Type":"ContainerStarted","Data":"bdfa238d1dda3afead970f6c0c59d9c82cc9066974eef2637a5f643bcf655e99"} Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.167691 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.480326 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" podUID="712b2002-4fce-4983-926a-99a4b2dc7a8c" containerName="webhook-server" probeResult="failure" output="Get 
\"http://10.217.0.48:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.480369 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6c7bf5ddc5-v6t5l" podUID="712b2002-4fce-4983-926a-99a4b2dc7a8c" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.753118 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh" podUID="1542a9c8-92f6-4bc9-8231-829f649b0b8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.67:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.753219 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-flfxh" podUID="1542a9c8-92f6-4bc9-8231-829f649b0b8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.67:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.756894 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="1761ca69-46fd-4375-af60-22b3e77c19a2" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.756912 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="1761ca69-46fd-4375-af60-22b3e77c19a2" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.757020 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.757168 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.758536 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.754714 4632 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-gcngd container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.32:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.761190 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gcngd" podUID="9bf11778-d854-4c97-acd1-ed4822ee5f47" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.32:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.763385 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" 
containerStatusID={"Type":"cri-o","ID":"8d6e3c3bf2cc94b4f346233606a1c5c55a2993e7644d8a78c77dc12972c98a9e"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.804464 4632 patch_prober.go:28] interesting pod/console-5678554f8b-n7dcv container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.804580 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5678554f8b-n7dcv" podUID="a59bb7d3-da4a-4275-9dcb-b851215a9cd0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.45:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:54 crc kubenswrapper[4632]: I0313 11:05:54.804657 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5678554f8b-n7dcv" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.001350 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-gdt8x" podUID="f7f61b75-16bf-4c5a-be30-c88d155c203f" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:55 crc kubenswrapper[4632]: timeout: health rpc did not complete within 1s Mar 13 11:05:55 crc kubenswrapper[4632]: > Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.018657 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-7ksc5" podUID="0fa3faab-9e82-4fde-afff-3de6939a17d1" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:55 crc kubenswrapper[4632]: timeout: health rpc did not complete within 1s Mar 13 11:05:55 crc kubenswrapper[4632]: > Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.018773 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-7ksc5" podUID="0fa3faab-9e82-4fde-afff-3de6939a17d1" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:55 crc kubenswrapper[4632]: timeout: health rpc did not complete within 1s Mar 13 11:05:55 crc kubenswrapper[4632]: > Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.019321 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-gdt8x" podUID="f7f61b75-16bf-4c5a-be30-c88d155c203f" containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:55 crc kubenswrapper[4632]: timeout: health rpc did not complete within 1s Mar 13 11:05:55 crc kubenswrapper[4632]: > Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.068562 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:05:55 crc kubenswrapper[4632]: E0313 11:05:55.075723 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:05:55 crc 
kubenswrapper[4632]: I0313 11:05:55.330263 4632 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.330323 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.330453 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-lvlxj" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.371219 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" podUID="2d221857-ee77-4165-a351-ecd5fc424970" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.412254 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.412332 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-lvlxj" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.412372 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v927j" podUID="2d221857-ee77-4165-a351-ecd5fc424970" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.495182 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-7bb4cc7c98-62bwr" podUID="277ddd7f-fd9c-4b27-9563-c904f1dffd40" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.51:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.495828 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-7bb4cc7c98-62bwr" podUID="277ddd7f-fd9c-4b27-9563-c904f1dffd40" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.51:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.756740 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="1761ca69-46fd-4375-af60-22b3e77c19a2" containerName="galera" probeResult="failure" output="command timed out" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.811712 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" 
containerName="registry-server" probeResult="failure" output=< Mar 13 11:05:55 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:05:55 crc kubenswrapper[4632]: > Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.875157 4632 patch_prober.go:28] interesting pod/console-5678554f8b-n7dcv container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.875189 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" podUID="b33bccd8-6f28-4ffe-9500-069a52aab5df" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.50:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.875222 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5678554f8b-n7dcv" podUID="a59bb7d3-da4a-4275-9dcb-b851215a9cd0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.45:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.875260 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.875321 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" podUID="b33bccd8-6f28-4ffe-9500-069a52aab5df" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.50:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.875427 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.885173 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"6b277b3621566e90d2ea8a306394444270adbf026557398f5520284a63c356df"} pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted" Mar 13 11:05:55 crc kubenswrapper[4632]: I0313 11:05:55.898766 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" podUID="b33bccd8-6f28-4ffe-9500-069a52aab5df" containerName="frr-k8s-webhook-server" containerID="cri-o://6b277b3621566e90d2ea8a306394444270adbf026557398f5520284a63c356df" gracePeriod=10 Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.172050 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"bfa960455f207db762d901f5af9c2b35ade8cd1c5f43d1bc1d4a40a5bfd8199d"} pod="metallb-system/frr-k8s-lvlxj" containerMessage="Container controller failed liveness probe, will be restarted" Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.172223 4632 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="controller" containerID="cri-o://bfa960455f207db762d901f5af9c2b35ade8cd1c5f43d1bc1d4a40a5bfd8199d" gracePeriod=2 Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.201990 4632 patch_prober.go:28] interesting pod/controller-manager-7469657588-kpf64 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.202087 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" podUID="a8ff14f9-e25c-4839-acab-a622f6f70f88" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.202156 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.202155 4632 patch_prober.go:28] interesting pod/controller-manager-7469657588-kpf64 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.202250 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" podUID="a8ff14f9-e25c-4839-acab-a622f6f70f88" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.203485 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"432a739d763fd09cb52fbc4a7bbe481e0fb4c89b88f7822f73b594d3596d0d39"} pod="openshift-controller-manager/controller-manager-7469657588-kpf64" containerMessage="Container controller-manager failed liveness probe, will be restarted" Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.203532 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" podUID="a8ff14f9-e25c-4839-acab-a622f6f70f88" containerName="controller-manager" containerID="cri-o://432a739d763fd09cb52fbc4a7bbe481e0fb4c89b88f7822f73b594d3596d0d39" gracePeriod=30 Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.374155 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.709437 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" 
podUID="48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd" containerName="oauth-openshift" containerID="cri-o://224837f104bcdbc6545d62209161e349a9d07cdcaf5c66e47c1de75b3af4b369" gracePeriod=15 Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.892110 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-tztd9" podUID="8f51973a-596d-40dc-9b5b-b2c95a60ea0c" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.892123 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-tztd9" podUID="8f51973a-596d-40dc-9b5b-b2c95a60ea0c" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.893532 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-tztd9" Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.893595 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-tztd9" Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.900091 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"7369028ab3380b8162926288f2a66e0780eba331066b6d04106bd606debba692"} pod="metallb-system/speaker-tztd9" containerMessage="Container speaker failed liveness probe, will be restarted" Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.900219 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-tztd9" podUID="8f51973a-596d-40dc-9b5b-b2c95a60ea0c" containerName="speaker" containerID="cri-o://7369028ab3380b8162926288f2a66e0780eba331066b6d04106bd606debba692" gracePeriod=2 Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.933106 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" podUID="b33bccd8-6f28-4ffe-9500-069a52aab5df" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.50:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.933369 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" podUID="a0d52d98-fe87-4bc8-890e-5c5efb1f30d6" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.81:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.933437 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" Mar 13 11:05:56 crc kubenswrapper[4632]: I0313 11:05:56.934464 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-tjkbb" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.017452 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-lvlxj" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.100359 4632 patch_prober.go:28] interesting pod/console-operator-58897d9998-sbtn5 container/console-operator 
namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.100416 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.100459 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.100368 4632 patch_prober.go:28] interesting pod/console-operator-58897d9998-sbtn5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.100504 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.100560 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.101247 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"faaf308e22c1a8d08431430b330cacf53efc9923cc70f0515be295533e608c79"} pod="openshift-console-operator/console-operator-58897d9998-sbtn5" containerMessage="Container console-operator failed liveness probe, will be restarted" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.101298 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" containerID="cri-o://faaf308e22c1a8d08431430b330cacf53efc9923cc70f0515be295533e608c79" gracePeriod=30 Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.182826 4632 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r5v5p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.182880 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" podUID="32f62e32-732b-4646-85f0-45b8ea6544a6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.182897 4632 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r5v5p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.182926 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.183003 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" podUID="32f62e32-732b-4646-85f0-45b8ea6544a6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.183076 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.192674 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"4f41aedb607002fa771d4b82bf1fb15a527c048ee3048ce7cd9db7dc1d8b7961"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.192751 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" podUID="32f62e32-732b-4646-85f0-45b8ea6544a6" containerName="catalog-operator" containerID="cri-o://4f41aedb607002fa771d4b82bf1fb15a527c048ee3048ce7cd9db7dc1d8b7961" gracePeriod=30 Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.222842 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" event={"ID":"2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3","Type":"ContainerDied","Data":"71808a85287e54b9fb184ad4c73a074a1ff3d6b35824bd6122d42af589681e05"} Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.226359 4632 generic.go:334] "Generic (PLEG): container finished" podID="2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3" containerID="71808a85287e54b9fb184ad4c73a074a1ff3d6b35824bd6122d42af589681e05" exitCode=0 Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.279607 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.280066 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.280161 4632 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-t9vht" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.279924 4632 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-svhr5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.280253 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" podUID="353e9ca9-cb3b-4c6e-b1ca-446611a12dca" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.280323 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.279891 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.282095 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.282138 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-t9vht" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.288381 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"c0db1ffabe3d33862c8266179a821f8fd8c1a4906081849cc73b575a98544e3b"} pod="openshift-ingress/router-default-5444994796-t9vht" containerMessage="Container router failed liveness probe, will be restarted" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.288452 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" containerID="cri-o://c0db1ffabe3d33862c8266179a821f8fd8c1a4906081849cc73b575a98544e3b" gracePeriod=10 Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.290032 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"696b18c58833c0581e6bf36ae1881e00a6717c6dc6b1a5150c21fe634a2b6edb"} pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.290136 4632 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" podUID="353e9ca9-cb3b-4c6e-b1ca-446611a12dca" containerName="authentication-operator" containerID="cri-o://696b18c58833c0581e6bf36ae1881e00a6717c6dc6b1a5150c21fe634a2b6edb" gracePeriod=30 Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.564913 4632 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zgxcd container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.565053 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podUID="49c520f1-fb05-48ca-8435-1985ce668451" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.568125 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.575929 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"35b32e739ccce4a6f84a62ef541fb840a3cf0ce2a60fb788f618073e6f79bd60"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" containerMessage="Container packageserver failed liveness probe, will be restarted" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.576009 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podUID="49c520f1-fb05-48ca-8435-1985ce668451" containerName="packageserver" containerID="cri-o://35b32e739ccce4a6f84a62ef541fb840a3cf0ce2a60fb788f618073e6f79bd60" gracePeriod=30 Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.576182 4632 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zgxcd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.576217 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podUID="49c520f1-fb05-48ca-8435-1985ce668451" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.576328 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.696413 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-hvrrc" podUID="09ddc697-7ac1-4896-b9e2-1ae6c59c6f47" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.42:9898/healthz\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.947132 4632 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tqbl9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.947168 4632 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tqbl9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.947204 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9" podUID="ebf1040d-57dd-47ef-b839-6f78a7c5c75f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.947132 4632 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-xfvsc container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.947246 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc" podUID="4e100e6e-7259-4262-be47-9c2b5be7a53a" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.947203 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tqbl9" podUID="ebf1040d-57dd-47ef-b839-6f78a7c5c75f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.988165 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-tztd9" podUID="8f51973a-596d-40dc-9b5b-b2c95a60ea0c" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.988266 4632 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-xfvsc container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:57 crc kubenswrapper[4632]: I0313 11:05:57.988292 4632 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfvsc" podUID="4e100e6e-7259-4262-be47-9c2b5be7a53a" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.146714 4632 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r5v5p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": EOF" start-of-body= Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.147067 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" podUID="32f62e32-732b-4646-85f0-45b8ea6544a6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": EOF" Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.146749 4632 patch_prober.go:28] interesting pod/console-operator-58897d9998-sbtn5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.147138 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.247741 4632 generic.go:334] "Generic (PLEG): container finished" podID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerID="bfa960455f207db762d901f5af9c2b35ade8cd1c5f43d1bc1d4a40a5bfd8199d" exitCode=0 Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.247829 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvlxj" event={"ID":"85b58bb0-63f5-4c85-8759-ce28d2c7db58","Type":"ContainerDied","Data":"bfa960455f207db762d901f5af9c2b35ade8cd1c5f43d1bc1d4a40a5bfd8199d"} Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.252077 4632 generic.go:334] "Generic (PLEG): container finished" podID="b33bccd8-6f28-4ffe-9500-069a52aab5df" containerID="6b277b3621566e90d2ea8a306394444270adbf026557398f5520284a63c356df" exitCode=0 Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.252115 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" event={"ID":"b33bccd8-6f28-4ffe-9500-069a52aab5df","Type":"ContainerDied","Data":"6b277b3621566e90d2ea8a306394444270adbf026557398f5520284a63c356df"} Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.323433 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.323505 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="Get 
\"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.438930 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.438969 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.439017 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.439030 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.690500 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" podUID="68c5eb80-4214-42c5-a08d-de6012969621" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.690958 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.857177 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" podUID="68c5eb80-4214-42c5-a08d-de6012969621" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.857241 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.857190 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-cfcgn" podUID="75d652c7-8521-4039-913a-fa625f89b094" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.63:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:58 crc kubenswrapper[4632]: I0313 11:05:58.939159 4632 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qg79l" podUID="20f92131-aca4-41ea-9144-a23bd9216f49" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.022096 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-f6c87" podUID="3f3a462e-4d89-45b3-8611-181aca5f8558" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.022162 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c" podUID="ff6d4dcb-9eb8-44fc-951e-f2aecd77a639" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.62:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.104191 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-f6c87" podUID="3f3a462e-4d89-45b3-8611-181aca5f8558" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.104192 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s" podUID="9a963f9c-ac58-4e21-abfa-fca1279a192d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.64:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.104504 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-cfcgn" podUID="75d652c7-8521-4039-913a-fa625f89b094" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.63:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.147756 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw" podUID="c8fc6f03-c43b-4ade-92a8-acc5537a4eeb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.230109 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qg79l" podUID="20f92131-aca4-41ea-9144-a23bd9216f49" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.230160 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-865685cd99-ls9jq" podUID="82fe7ef6-50a5-41d4-9419-787812e16bd6" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.57:8081/healthz\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.275398 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvlxj" event={"ID":"85b58bb0-63f5-4c85-8759-ce28d2c7db58","Type":"ContainerStarted","Data":"b22a68c5198c461a36773fe0b66eb17c50943b0a9d7a1785a9982b7ddc2598b3"} Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.275997 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-lvlxj" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.310481 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-sbtn5_ef269b18-ea84-43c2-971c-e772149acbf6/console-operator/0.log" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.310609 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" event={"ID":"ef269b18-ea84-43c2-971c-e772149acbf6","Type":"ContainerDied","Data":"faaf308e22c1a8d08431430b330cacf53efc9923cc70f0515be295533e608c79"} Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.310632 4632 generic.go:334] "Generic (PLEG): container finished" podID="ef269b18-ea84-43c2-971c-e772149acbf6" containerID="faaf308e22c1a8d08431430b330cacf53efc9923cc70f0515be295533e608c79" exitCode=1 Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.314145 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-cgh6c" podUID="ff6d4dcb-9eb8-44fc-951e-f2aecd77a639" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.62:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.314234 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c" podUID="9040a0e0-2a56-4331-ba50-b19ff05ef0c0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.315111 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-2rv7s" podUID="9a963f9c-ac58-4e21-abfa-fca1279a192d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.64:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.315363 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-wtzrw" podUID="c8fc6f03-c43b-4ade-92a8-acc5537a4eeb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.319844 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" event={"ID":"2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3","Type":"ContainerStarted","Data":"4ec063236f8aa26a1d317386ccdd18403efd3b518cbac7f9c5b11ea9c585aba7"} Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.320180 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.324761 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" event={"ID":"b33bccd8-6f28-4ffe-9500-069a52aab5df","Type":"ContainerStarted","Data":"7b741b052f66780121aafd6b779efc40ca5030202933ec65b5ae41819bfe4649"} Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.325135 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.333687 4632 generic.go:334] "Generic (PLEG): container finished" podID="32f62e32-732b-4646-85f0-45b8ea6544a6" containerID="4f41aedb607002fa771d4b82bf1fb15a527c048ee3048ce7cd9db7dc1d8b7961" exitCode=0 Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.333887 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" event={"ID":"32f62e32-732b-4646-85f0-45b8ea6544a6","Type":"ContainerDied","Data":"4f41aedb607002fa771d4b82bf1fb15a527c048ee3048ce7cd9db7dc1d8b7961"} Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.333917 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" event={"ID":"32f62e32-732b-4646-85f0-45b8ea6544a6","Type":"ContainerStarted","Data":"2de143252a2a2cf135b056cea81707b81fee21a3cac7b6c26a55bce23c3d8eb4"} Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.334156 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.334498 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"579d286b9eb7e56fb8f1cb6d18127cc0ece5c920fbbbc7e2c67943e4800bb183"} pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" containerMessage="Container manager failed liveness probe, will be restarted" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.334536 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" podUID="68c5eb80-4214-42c5-a08d-de6012969621" containerName="manager" containerID="cri-o://579d286b9eb7e56fb8f1cb6d18127cc0ece5c920fbbbc7e2c67943e4800bb183" gracePeriod=10 Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.357151 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" podUID="f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.357224 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.358830 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"b79dde4b0109a751bfba6b9882a550b5aaf0de838fae99b2eeecdc581770755b"} pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" 
containerMessage="Container manager failed liveness probe, will be restarted" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.358880 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" podUID="f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda" containerName="manager" containerID="cri-o://b79dde4b0109a751bfba6b9882a550b5aaf0de838fae99b2eeecdc581770755b" gracePeriod=10 Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.399174 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" podUID="f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.399289 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.399287 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-865685cd99-ls9jq" podUID="82fe7ef6-50a5-41d4-9419-787812e16bd6" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.57:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.399932 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.481185 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-szd7c" podUID="9040a0e0-2a56-4331-ba50-b19ff05ef0c0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.481577 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-sxw8d" podUID="7b491335-6a73-46de-8098-f27ff4c6f795" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.564155 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" podUID="d04e9aa6-f234-4ffa-81e2-1a2407addb77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.564486 4632 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r5v5p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.564514 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-sxw8d" 
podUID="7b491335-6a73-46de-8098-f27ff4c6f795" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.564578 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" podUID="32f62e32-732b-4646-85f0-45b8ea6544a6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.564670 4632 patch_prober.go:28] interesting pod/route-controller-manager-db6b8fbf8-pllt2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.564767 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" podUID="2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.607669 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" podUID="d04e9aa6-f234-4ffa-81e2-1a2407addb77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.607822 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.608412 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" podUID="0a9d48f4-d68b-4ef9-826e-ed619c761405" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.608518 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.693526 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n" podUID="e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.693806 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" podUID="0a9d48f4-d68b-4ef9-826e-ed619c761405" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.694277 4632 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-qkr9n" podUID="e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.696403 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-4m8kf" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.757133 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-mpfnk" podUID="33445a2b-7fa8-4198-a60a-09caeb69b8ed" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.939085 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" podUID="ee081327-4c3f-4c0a-9085-71085c6487b5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:05:59 crc kubenswrapper[4632]: I0313 11:05:59.939088 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-nt7np" podUID="ee081327-4c3f-4c0a-9085-71085c6487b5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.146150 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" podUID="7bab78c8-7dac-48dc-a426-ccd4ae00a428" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.146219 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" podUID="7bab78c8-7dac-48dc-a426-ccd4ae00a428" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.229096 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2" podUID="e0d1d349-d63d-498b-ae15-3121f9ae73f8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.271095 4632 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-lvlxj" podUID="85b58bb0-63f5-4c85-8759-ce28d2c7db58" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.271097 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-kv8b2" podUID="e0d1d349-d63d-498b-ae15-3121f9ae73f8" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.356106 4632 generic.go:334] "Generic (PLEG): container finished" podID="a8ff14f9-e25c-4839-acab-a622f6f70f88" containerID="432a739d763fd09cb52fbc4a7bbe481e0fb4c89b88f7822f73b594d3596d0d39" exitCode=0 Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.356150 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" event={"ID":"a8ff14f9-e25c-4839-acab-a622f6f70f88","Type":"ContainerDied","Data":"432a739d763fd09cb52fbc4a7bbe481e0fb4c89b88f7822f73b594d3596d0d39"} Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.360357 4632 generic.go:334] "Generic (PLEG): container finished" podID="353e9ca9-cb3b-4c6e-b1ca-446611a12dca" containerID="696b18c58833c0581e6bf36ae1881e00a6717c6dc6b1a5150c21fe634a2b6edb" exitCode=0 Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.360423 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" event={"ID":"353e9ca9-cb3b-4c6e-b1ca-446611a12dca","Type":"ContainerDied","Data":"696b18c58833c0581e6bf36ae1881e00a6717c6dc6b1a5150c21fe634a2b6edb"} Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.367216 4632 patch_prober.go:28] interesting pod/oauth-openshift-75bb75cfd7-8sh2x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" start-of-body= Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.367255 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" podUID="48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.368128 4632 generic.go:334] "Generic (PLEG): container finished" podID="8f51973a-596d-40dc-9b5b-b2c95a60ea0c" containerID="7369028ab3380b8162926288f2a66e0780eba331066b6d04106bd606debba692" exitCode=137 Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.369422 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tztd9" event={"ID":"8f51973a-596d-40dc-9b5b-b2c95a60ea0c","Type":"ContainerDied","Data":"7369028ab3380b8162926288f2a66e0780eba331066b6d04106bd606debba692"} Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.370146 4632 patch_prober.go:28] interesting pod/route-controller-manager-db6b8fbf8-pllt2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.370199 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" podUID="2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.370216 4632 patch_prober.go:28] 
interesting pod/catalog-operator-68c6474976-r5v5p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.370238 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" podUID="32f62e32-732b-4646-85f0-45b8ea6544a6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Mar 13 11:06:00 crc kubenswrapper[4632]: I0313 11:06:00.538428 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-628ss" Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.408203 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-sbtn5_ef269b18-ea84-43c2-971c-e772149acbf6/console-operator/0.log" Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.410121 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" event={"ID":"ef269b18-ea84-43c2-971c-e772149acbf6","Type":"ContainerStarted","Data":"6d6064b7502063ece3533259b5cf853c2915d2ee3e3d73d55543f1192104d84d"} Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.410153 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.421185 4632 patch_prober.go:28] interesting pod/console-operator-58897d9998-sbtn5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.421237 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.426598 4632 generic.go:334] "Generic (PLEG): container finished" podID="49c520f1-fb05-48ca-8435-1985ce668451" containerID="35b32e739ccce4a6f84a62ef541fb840a3cf0ce2a60fb788f618073e6f79bd60" exitCode=0 Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.426677 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" event={"ID":"49c520f1-fb05-48ca-8435-1985ce668451","Type":"ContainerDied","Data":"35b32e739ccce4a6f84a62ef541fb840a3cf0ce2a60fb788f618073e6f79bd60"} Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.438579 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.438652 4632 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.438582 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.438721 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.442831 4632 generic.go:334] "Generic (PLEG): container finished" podID="48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd" containerID="224837f104bcdbc6545d62209161e349a9d07cdcaf5c66e47c1de75b3af4b369" exitCode=0 Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.442969 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" event={"ID":"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd","Type":"ContainerDied","Data":"224837f104bcdbc6545d62209161e349a9d07cdcaf5c66e47c1de75b3af4b369"} Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.450437 4632 generic.go:334] "Generic (PLEG): container finished" podID="68c5eb80-4214-42c5-a08d-de6012969621" containerID="579d286b9eb7e56fb8f1cb6d18127cc0ece5c920fbbbc7e2c67943e4800bb183" exitCode=0 Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.450815 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" event={"ID":"68c5eb80-4214-42c5-a08d-de6012969621","Type":"ContainerDied","Data":"579d286b9eb7e56fb8f1cb6d18127cc0ece5c920fbbbc7e2c67943e4800bb183"} Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.884693 4632 patch_prober.go:28] interesting pod/route-controller-manager-db6b8fbf8-pllt2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Mar 13 11:06:01 crc kubenswrapper[4632]: I0313 11:06:01.885125 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2" podUID="2f5d4f7c-4d7b-4347-bd38-d5fd29fed3f3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Mar 13 11:06:02 crc kubenswrapper[4632]: I0313 11:06:02.043332 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="1761ca69-46fd-4375-af60-22b3e77c19a2" containerName="galera" containerID="cri-o://8d6e3c3bf2cc94b4f346233606a1c5c55a2993e7644d8a78c77dc12972c98a9e" gracePeriod=23 Mar 13 11:06:02 crc kubenswrapper[4632]: I0313 11:06:02.076928 4632 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerName="galera" containerID="cri-o://cdf295326a62a01129c4f9b5741f57b3d80d103e3c5a6bf64f5cc1951034264f" gracePeriod=22 Mar 13 11:06:02 crc kubenswrapper[4632]: I0313 11:06:02.134098 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-85c677895b-thbc4" podUID="3fdb377f-5a78-4687-82e1-50718514290d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:06:02 crc kubenswrapper[4632]: I0313 11:06:02.463540 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-svhr5" event={"ID":"353e9ca9-cb3b-4c6e-b1ca-446611a12dca","Type":"ContainerStarted","Data":"1576ef1b44221dac835a1417f0e95e462cbd508e0e4f9c0f2281aeaa1ae366d8"} Mar 13 11:06:02 crc kubenswrapper[4632]: I0313 11:06:02.472236 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" event={"ID":"49c520f1-fb05-48ca-8435-1985ce668451","Type":"ContainerStarted","Data":"3b3b129c9e469a5415a2dda08053f620658cbc9d22814b96f2371d28d2669ac4"} Mar 13 11:06:02 crc kubenswrapper[4632]: I0313 11:06:02.473514 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" Mar 13 11:06:02 crc kubenswrapper[4632]: I0313 11:06:02.473591 4632 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zgxcd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Mar 13 11:06:02 crc kubenswrapper[4632]: I0313 11:06:02.473618 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podUID="49c520f1-fb05-48ca-8435-1985ce668451" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Mar 13 11:06:02 crc kubenswrapper[4632]: I0313 11:06:02.476038 4632 generic.go:334] "Generic (PLEG): container finished" podID="f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda" containerID="b79dde4b0109a751bfba6b9882a550b5aaf0de838fae99b2eeecdc581770755b" exitCode=0 Mar 13 11:06:02 crc kubenswrapper[4632]: I0313 11:06:02.476076 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" event={"ID":"f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda","Type":"ContainerDied","Data":"b79dde4b0109a751bfba6b9882a550b5aaf0de838fae99b2eeecdc581770755b"} Mar 13 11:06:02 crc kubenswrapper[4632]: I0313 11:06:02.481724 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" event={"ID":"68c5eb80-4214-42c5-a08d-de6012969621","Type":"ContainerStarted","Data":"1147eea75904622c793225e4c48cb3f7549325cc9fa40143c5548037a2e7beba"} Mar 13 11:06:02 crc kubenswrapper[4632]: I0313 11:06:02.484870 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" 
event={"ID":"a8ff14f9-e25c-4839-acab-a622f6f70f88","Type":"ContainerStarted","Data":"d85ac86ee31b16bfd2373d1e2caf130e6b8e409fe626bfe294b211f86639098c"} Mar 13 11:06:02 crc kubenswrapper[4632]: I0313 11:06:02.485389 4632 patch_prober.go:28] interesting pod/controller-manager-7469657588-kpf64 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Mar 13 11:06:02 crc kubenswrapper[4632]: I0313 11:06:02.485429 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" podUID="a8ff14f9-e25c-4839-acab-a622f6f70f88" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Mar 13 11:06:02 crc kubenswrapper[4632]: I0313 11:06:02.485394 4632 patch_prober.go:28] interesting pod/console-operator-58897d9998-sbtn5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Mar 13 11:06:02 crc kubenswrapper[4632]: I0313 11:06:02.485574 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" podUID="ef269b18-ea84-43c2-971c-e772149acbf6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Mar 13 11:06:02 crc kubenswrapper[4632]: E0313 11:06:02.496922 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdf295326a62a01129c4f9b5741f57b3d80d103e3c5a6bf64f5cc1951034264f" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 13 11:06:02 crc kubenswrapper[4632]: E0313 11:06:02.501287 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdf295326a62a01129c4f9b5741f57b3d80d103e3c5a6bf64f5cc1951034264f" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 13 11:06:02 crc kubenswrapper[4632]: E0313 11:06:02.505190 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdf295326a62a01129c4f9b5741f57b3d80d103e3c5a6bf64f5cc1951034264f" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 13 11:06:02 crc kubenswrapper[4632]: E0313 11:06:02.505243 4632 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerName="galera" Mar 13 11:06:03 crc kubenswrapper[4632]: I0313 11:06:03.501618 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" Mar 13 11:06:03 crc kubenswrapper[4632]: I0313 11:06:03.502334 4632 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" podUID="68c5eb80-4214-42c5-a08d-de6012969621" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": dial tcp 10.217.0.58:8081: connect: connection refused" Mar 13 11:06:03 crc kubenswrapper[4632]: I0313 11:06:03.501769 4632 patch_prober.go:28] interesting pod/controller-manager-7469657588-kpf64 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Mar 13 11:06:03 crc kubenswrapper[4632]: I0313 11:06:03.504004 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" podUID="a8ff14f9-e25c-4839-acab-a622f6f70f88" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Mar 13 11:06:03 crc kubenswrapper[4632]: I0313 11:06:03.501963 4632 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zgxcd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Mar 13 11:06:03 crc kubenswrapper[4632]: I0313 11:06:03.504060 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podUID="49c520f1-fb05-48ca-8435-1985ce668451" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Mar 13 11:06:03 crc kubenswrapper[4632]: I0313 11:06:03.590161 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openstack-operators/openstack-operator-index-2jqnk" podUID="7de02b7f-4e1c-4ba1-9659-c864e9080092" containerName="registry-server" probeResult="failure" output=< Mar 13 11:06:03 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:06:03 crc kubenswrapper[4632]: > Mar 13 11:06:03 crc kubenswrapper[4632]: E0313 11:06:03.737498 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d6e3c3bf2cc94b4f346233606a1c5c55a2993e7644d8a78c77dc12972c98a9e" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 13 11:06:03 crc kubenswrapper[4632]: E0313 11:06:03.739425 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d6e3c3bf2cc94b4f346233606a1c5c55a2993e7644d8a78c77dc12972c98a9e" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 13 11:06:03 crc kubenswrapper[4632]: E0313 11:06:03.740825 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d6e3c3bf2cc94b4f346233606a1c5c55a2993e7644d8a78c77dc12972c98a9e" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 13 11:06:03 crc kubenswrapper[4632]: E0313 11:06:03.740855 
4632 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="1761ca69-46fd-4375-af60-22b3e77c19a2" containerName="galera" Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.120181 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5678554f8b-n7dcv" Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.357512 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="d2c1c19b-95a5-4db1-8e54-36fe83704b25" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.438719 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.438785 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.438840 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.439165 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.439239 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.439508 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.439534 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.441762 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" 
containerStatusID={"Type":"cri-o","ID":"bdfa238d1dda3afead970f6c0c59d9c82cc9066974eef2637a5f643bcf655e99"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.441896 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" containerID="cri-o://bdfa238d1dda3afead970f6c0c59d9c82cc9066974eef2637a5f643bcf655e99" gracePeriod=30 Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.536913 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" event={"ID":"f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda","Type":"ContainerStarted","Data":"94c387b11711608204f9f44972b019fe580f00c8f5f04ef461b483111e42908d"} Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.537685 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.538666 4632 patch_prober.go:28] interesting pod/controller-manager-7469657588-kpf64 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.538714 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" podUID="a8ff14f9-e25c-4839-acab-a622f6f70f88" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.538986 4632 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zgxcd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.539031 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" podUID="49c520f1-fb05-48ca-8435-1985ce668451" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Mar 13 11:06:04 crc kubenswrapper[4632]: I0313 11:06:04.698171 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-lvlxj" Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.202503 4632 patch_prober.go:28] interesting pod/controller-manager-7469657588-kpf64 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.202975 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7469657588-kpf64" podUID="a8ff14f9-e25c-4839-acab-a622f6f70f88" 
containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.563952 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tztd9" event={"ID":"8f51973a-596d-40dc-9b5b-b2c95a60ea0c","Type":"ContainerStarted","Data":"e99ea45426a904ea957ea0ea2fbbe1c2c9717dbd8a660fe6124f64d95546a1bf"} Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.564014 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-tztd9" Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.570741 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-sk2l6_f660255f-8f78-4876-973d-db58f2ee7020/openshift-config-operator/1.log" Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.576009 4632 generic.go:334] "Generic (PLEG): container finished" podID="f660255f-8f78-4876-973d-db58f2ee7020" containerID="bdfa238d1dda3afead970f6c0c59d9c82cc9066974eef2637a5f643bcf655e99" exitCode=2 Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.576099 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" event={"ID":"f660255f-8f78-4876-973d-db58f2ee7020","Type":"ContainerDied","Data":"bdfa238d1dda3afead970f6c0c59d9c82cc9066974eef2637a5f643bcf655e99"} Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.576166 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" event={"ID":"f660255f-8f78-4876-973d-db58f2ee7020","Type":"ContainerStarted","Data":"dd3e92e888d4cb87e3f4eb1d1058f6b2c0167d20d2613589ae205bd1b8bf5ea0"} Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.576515 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.592899 4632 scope.go:117] "RemoveContainer" containerID="e98f0e8253db82d7fc1c628a628a0d9ea91c85c3796f3abe0d968983b3e782e2" Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.639955 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556666-pjsh7"] Mar 13 11:06:05 crc kubenswrapper[4632]: E0313 11:06:05.699170 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c07de1e-84e2-4dae-a3c3-ced19801c8c2" containerName="extract-utilities" Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.699212 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c07de1e-84e2-4dae-a3c3-ced19801c8c2" containerName="extract-utilities" Mar 13 11:06:05 crc kubenswrapper[4632]: E0313 11:06:05.699243 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c07de1e-84e2-4dae-a3c3-ced19801c8c2" containerName="extract-content" Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.699250 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c07de1e-84e2-4dae-a3c3-ced19801c8c2" containerName="extract-content" Mar 13 11:06:05 crc kubenswrapper[4632]: E0313 11:06:05.699263 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c07de1e-84e2-4dae-a3c3-ced19801c8c2" containerName="registry-server" Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.699270 4632 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1c07de1e-84e2-4dae-a3c3-ced19801c8c2" containerName="registry-server" Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.703395 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c07de1e-84e2-4dae-a3c3-ced19801c8c2" containerName="registry-server" Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.717454 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556666-pjsh7" Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.790549 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=< Mar 13 11:06:05 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:06:05 crc kubenswrapper[4632]: > Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.841601 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.842017 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.841602 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:06:05 crc kubenswrapper[4632]: I0313 11:06:05.906096 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9fsz\" (UniqueName: \"kubernetes.io/projected/c793856b-f941-4c9e-b70e-36b4844e4eac-kube-api-access-f9fsz\") pod \"auto-csr-approver-29556666-pjsh7\" (UID: \"c793856b-f941-4c9e-b70e-36b4844e4eac\") " pod="openshift-infra/auto-csr-approver-29556666-pjsh7" Mar 13 11:06:06 crc kubenswrapper[4632]: I0313 11:06:06.009361 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9fsz\" (UniqueName: \"kubernetes.io/projected/c793856b-f941-4c9e-b70e-36b4844e4eac-kube-api-access-f9fsz\") pod \"auto-csr-approver-29556666-pjsh7\" (UID: \"c793856b-f941-4c9e-b70e-36b4844e4eac\") " pod="openshift-infra/auto-csr-approver-29556666-pjsh7" Mar 13 11:06:06 crc kubenswrapper[4632]: I0313 11:06:06.161582 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9fsz\" (UniqueName: \"kubernetes.io/projected/c793856b-f941-4c9e-b70e-36b4844e4eac-kube-api-access-f9fsz\") pod \"auto-csr-approver-29556666-pjsh7\" (UID: \"c793856b-f941-4c9e-b70e-36b4844e4eac\") " pod="openshift-infra/auto-csr-approver-29556666-pjsh7" Mar 13 11:06:06 crc kubenswrapper[4632]: I0313 11:06:06.189347 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r5v5p" Mar 13 11:06:06 crc kubenswrapper[4632]: I0313 11:06:06.211821 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556666-pjsh7" Mar 13 11:06:06 crc kubenswrapper[4632]: I0313 11:06:06.303331 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Mar 13 11:06:06 crc kubenswrapper[4632]: [+]has-synced ok Mar 13 11:06:06 crc kubenswrapper[4632]: [-]process-running failed: reason withheld Mar 13 11:06:06 crc kubenswrapper[4632]: healthz check failed Mar 13 11:06:06 crc kubenswrapper[4632]: I0313 11:06:06.303390 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 11:06:06 crc kubenswrapper[4632]: I0313 11:06:06.492712 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556666-pjsh7"] Mar 13 11:06:06 crc kubenswrapper[4632]: I0313 11:06:06.555647 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-sbtn5" Mar 13 11:06:06 crc kubenswrapper[4632]: I0313 11:06:06.589705 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-sk2l6_f660255f-8f78-4876-973d-db58f2ee7020/openshift-config-operator/1.log" Mar 13 11:06:06 crc kubenswrapper[4632]: I0313 11:06:06.622418 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" event={"ID":"48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd","Type":"ContainerStarted","Data":"40af26cca0da5fdf2bc9e8a5f824f238d6ed7789f9bf66f1ba1f3a52d86ef473"} Mar 13 11:06:06 crc kubenswrapper[4632]: I0313 11:06:06.622825 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" Mar 13 11:06:06 crc kubenswrapper[4632]: I0313 11:06:06.623312 4632 patch_prober.go:28] interesting pod/oauth-openshift-75bb75cfd7-8sh2x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" start-of-body= Mar 13 11:06:06 crc kubenswrapper[4632]: I0313 11:06:06.623347 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" podUID="48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" Mar 13 11:06:06 crc kubenswrapper[4632]: I0313 11:06:06.625330 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zgxcd" Mar 13 11:06:06 crc kubenswrapper[4632]: I0313 11:06:06.710974 4632 generic.go:334] "Generic (PLEG): container finished" podID="1761ca69-46fd-4375-af60-22b3e77c19a2" containerID="8d6e3c3bf2cc94b4f346233606a1c5c55a2993e7644d8a78c77dc12972c98a9e" exitCode=0 Mar 13 11:06:06 crc kubenswrapper[4632]: I0313 11:06:06.711465 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"1761ca69-46fd-4375-af60-22b3e77c19a2","Type":"ContainerDied","Data":"8d6e3c3bf2cc94b4f346233606a1c5c55a2993e7644d8a78c77dc12972c98a9e"} Mar 13 11:06:07 crc kubenswrapper[4632]: I0313 11:06:07.603634 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" Mar 13 11:06:07 crc kubenswrapper[4632]: I0313 11:06:07.610133 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-wj9qs" Mar 13 11:06:07 crc kubenswrapper[4632]: I0313 11:06:07.720687 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="d2c1c19b-95a5-4db1-8e54-36fe83704b25" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 11:06:07 crc kubenswrapper[4632]: I0313 11:06:07.807403 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-t9vht_7b959a85-56a5-4296-9cf3-87741e1f9c39/router/0.log" Mar 13 11:06:07 crc kubenswrapper[4632]: I0313 11:06:07.807484 4632 generic.go:334] "Generic (PLEG): container finished" podID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerID="c0db1ffabe3d33862c8266179a821f8fd8c1a4906081849cc73b575a98544e3b" exitCode=137 Mar 13 11:06:07 crc kubenswrapper[4632]: I0313 11:06:07.808505 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-t9vht" event={"ID":"7b959a85-56a5-4296-9cf3-87741e1f9c39","Type":"ContainerDied","Data":"c0db1ffabe3d33862c8266179a821f8fd8c1a4906081849cc73b575a98544e3b"} Mar 13 11:06:07 crc kubenswrapper[4632]: I0313 11:06:07.809345 4632 patch_prober.go:28] interesting pod/oauth-openshift-75bb75cfd7-8sh2x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" start-of-body= Mar 13 11:06:07 crc kubenswrapper[4632]: I0313 11:06:07.809391 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" podUID="48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" Mar 13 11:06:08 crc kubenswrapper[4632]: I0313 11:06:08.184359 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82" podUID="f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": dial tcp 10.217.0.75:8081: connect: connection refused" Mar 13 11:06:08 crc kubenswrapper[4632]: I0313 11:06:08.867416 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1761ca69-46fd-4375-af60-22b3e77c19a2","Type":"ContainerStarted","Data":"ac8180dd063c8f549a7901dcd6b082d73376e180684216596e3f93393b60968e"} Mar 13 11:06:08 crc kubenswrapper[4632]: I0313 11:06:08.872530 4632 generic.go:334] "Generic (PLEG): container finished" podID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerID="3da76186915cfbbbe688750a6110b1e64143d37e61c44ef62a9740eabb32c983" exitCode=0 Mar 13 11:06:08 crc kubenswrapper[4632]: I0313 11:06:08.872569 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ac97dc03-9537-4f95-bb79-5bb60a99089d","Type":"ContainerDied","Data":"3da76186915cfbbbe688750a6110b1e64143d37e61c44ef62a9740eabb32c983"} Mar 13 11:06:09 crc kubenswrapper[4632]: I0313 11:06:09.363194 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-bkmbn" podUID="c33d0da9-5a04-42d6-80d3-2f558b4a90b0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:06:10 crc kubenswrapper[4632]: I0313 11:06:10.070530 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:06:10 crc kubenswrapper[4632]: E0313 11:06:10.072402 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:06:10 crc kubenswrapper[4632]: I0313 11:06:10.103145 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" podUID="7bab78c8-7dac-48dc-a426-ccd4ae00a428" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:06:10 crc kubenswrapper[4632]: I0313 11:06:10.103316 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" Mar 13 11:06:10 crc kubenswrapper[4632]: I0313 11:06:10.144360 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-jwrgq" Mar 13 11:06:10 crc kubenswrapper[4632]: I0313 11:06:10.364649 4632 patch_prober.go:28] interesting pod/oauth-openshift-75bb75cfd7-8sh2x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" start-of-body= Mar 13 11:06:10 crc kubenswrapper[4632]: I0313 11:06:10.364695 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x" podUID="48d2bc7e-c929-42c9-b3f2-9e78c7eac8cd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" Mar 13 11:06:10 crc kubenswrapper[4632]: I0313 11:06:10.439363 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Mar 13 11:06:10 crc kubenswrapper[4632]: I0313 11:06:10.439411 4632 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sk2l6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= 
Mar 13 11:06:10 crc kubenswrapper[4632]: I0313 11:06:10.439424 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Mar 13 11:06:10 crc kubenswrapper[4632]: I0313 11:06:10.439449 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6" podUID="f660255f-8f78-4876-973d-db58f2ee7020" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Mar 13 11:06:10 crc kubenswrapper[4632]: I0313 11:06:10.714398 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="d2c1c19b-95a5-4db1-8e54-36fe83704b25" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 11:06:10 crc kubenswrapper[4632]: I0313 11:06:10.714675 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0"
Mar 13 11:06:10 crc kubenswrapper[4632]: I0313 11:06:10.725026 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"67516bb124d863acdb93cbafff12001c1c53c2a821587b0e3e99f6135ee28e92"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted"
Mar 13 11:06:10 crc kubenswrapper[4632]: I0313 11:06:10.728663 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d2c1c19b-95a5-4db1-8e54-36fe83704b25" containerName="cinder-scheduler" containerID="cri-o://67516bb124d863acdb93cbafff12001c1c53c2a821587b0e3e99f6135ee28e92" gracePeriod=30
Mar 13 11:06:10 crc kubenswrapper[4632]: I0313 11:06:10.996017 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-t9vht_7b959a85-56a5-4296-9cf3-87741e1f9c39/router/0.log"
Mar 13 11:06:11 crc kubenswrapper[4632]: I0313 11:06:11.000096 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-t9vht" event={"ID":"7b959a85-56a5-4296-9cf3-87741e1f9c39","Type":"ContainerStarted","Data":"e68290d07d3694966f4a24ae2bc7a4e991ff7c78cb35b4090c838101dbb38ee0"}
Mar 13 11:06:11 crc kubenswrapper[4632]: I0313 11:06:11.190464 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-t9vht"
Mar 13 11:06:11 crc kubenswrapper[4632]: I0313 11:06:11.196705 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Mar 13 11:06:11 crc kubenswrapper[4632]: I0313 11:06:11.196777 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Mar 13 11:06:11 crc kubenswrapper[4632]: I0313 11:06:11.210601 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-2jqnk"
Mar 13 11:06:11 crc kubenswrapper[4632]: I0313 11:06:11.247275 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-2jqnk"
Mar 13 11:06:12 crc kubenswrapper[4632]: I0313 11:06:12.136915 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-db6b8fbf8-pllt2"
Mar 13 11:06:12 crc kubenswrapper[4632]: I0313 11:06:12.219222 4632 patch_prober.go:28] interesting pod/router-default-5444994796-t9vht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 11:06:12 crc kubenswrapper[4632]: [-]has-synced failed: reason withheld
Mar 13 11:06:12 crc kubenswrapper[4632]: [+]process-running ok
Mar 13 11:06:12 crc kubenswrapper[4632]: healthz check failed
Mar 13 11:06:12 crc kubenswrapper[4632]: I0313 11:06:12.219580 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t9vht" podUID="7b959a85-56a5-4296-9cf3-87741e1f9c39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 11:06:12 crc kubenswrapper[4632]: E0313 11:06:12.503794 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdf295326a62a01129c4f9b5741f57b3d80d103e3c5a6bf64f5cc1951034264f" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Mar 13 11:06:12 crc kubenswrapper[4632]: E0313 11:06:12.508245 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdf295326a62a01129c4f9b5741f57b3d80d103e3c5a6bf64f5cc1951034264f" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Mar 13 11:06:12 crc kubenswrapper[4632]: E0313 11:06:12.513473 4632 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdf295326a62a01129c4f9b5741f57b3d80d103e3c5a6bf64f5cc1951034264f" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Mar 13 11:06:12 crc kubenswrapper[4632]: E0313 11:06:12.513557 4632 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerName="galera"
Mar 13 11:06:13 crc kubenswrapper[4632]: I0313 11:06:13.042997 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac97dc03-9537-4f95-bb79-5bb60a99089d","Type":"ContainerStarted","Data":"b57f85647a9f24895343d16ffa69abe242612a8de43972c5cb53a02f9838e13b"}
Mar 13 11:06:13 crc kubenswrapper[4632]: I0313 11:06:13.198258 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-t9vht"
Mar 13 11:06:13 crc kubenswrapper[4632]: I0313 11:06:13.258568 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556666-pjsh7"]
Mar 13 11:06:13 crc kubenswrapper[4632]: W0313 11:06:13.302807 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc793856b_f941_4c9e_b70e_36b4844e4eac.slice/crio-c4d5e3a91207e5a09f63b2550c521edf15d369030662552b3e6ac28d2f654bea WatchSource:0}: Error finding container c4d5e3a91207e5a09f63b2550c521edf15d369030662552b3e6ac28d2f654bea: Status 404 returned error can't find the container with id c4d5e3a91207e5a09f63b2550c521edf15d369030662552b3e6ac28d2f654bea
Mar 13 11:06:13 crc kubenswrapper[4632]: I0313 11:06:13.507669 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sk2l6"
Mar 13 11:06:13 crc kubenswrapper[4632]: I0313 11:06:13.734391 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Mar 13 11:06:13 crc kubenswrapper[4632]: I0313 11:06:13.734433 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Mar 13 11:06:14 crc kubenswrapper[4632]: I0313 11:06:14.079872 4632 generic.go:334] "Generic (PLEG): container finished" podID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerID="cdf295326a62a01129c4f9b5741f57b3d80d103e3c5a6bf64f5cc1951034264f" exitCode=0
Mar 13 11:06:14 crc kubenswrapper[4632]: I0313 11:06:14.079978 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2cb2f546-c8c5-4ec9-aba8-d3782431de10","Type":"ContainerDied","Data":"cdf295326a62a01129c4f9b5741f57b3d80d103e3c5a6bf64f5cc1951034264f"}
Mar 13 11:06:14 crc kubenswrapper[4632]: I0313 11:06:14.081652 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556666-pjsh7" event={"ID":"c793856b-f941-4c9e-b70e-36b4844e4eac","Type":"ContainerStarted","Data":"c4d5e3a91207e5a09f63b2550c521edf15d369030662552b3e6ac28d2f654bea"}
Mar 13 11:06:14 crc kubenswrapper[4632]: I0313 11:06:14.083096 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-t9vht"
Mar 13 11:06:14 crc kubenswrapper[4632]: I0313 11:06:14.086235 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-t9vht"
Mar 13 11:06:14 crc kubenswrapper[4632]: I0313 11:06:14.389483 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-lvlxj"
Mar 13 11:06:14 crc kubenswrapper[4632]: I0313 11:06:14.810256 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9zbh8"
Mar 13 11:06:15 crc kubenswrapper[4632]: I0313 11:06:15.106576 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2cb2f546-c8c5-4ec9-aba8-d3782431de10","Type":"ContainerStarted","Data":"75b6983c07eb146a71cf122cfc436a920ef6367277c67e8ac65ff3bc2c650e0f"}
Mar 13 11:06:15 crc kubenswrapper[4632]: I0313 11:06:15.487274 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7469657588-kpf64"
Mar 13 11:06:15 crc kubenswrapper[4632]: I0313 11:06:15.510667 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 13 11:06:15 crc kubenswrapper[4632]: I0313 11:06:15.511282 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="ceilometer-notification-agent" containerID="cri-o://5d34565e3f3d53e4eb4eec7fc127b7d0ef95db5c894a8b9fbc65ec70d12e4d20" gracePeriod=30
Mar 13 11:06:15 crc kubenswrapper[4632]: I0313 11:06:15.511412 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="ceilometer-central-agent" containerID="cri-o://b57f85647a9f24895343d16ffa69abe242612a8de43972c5cb53a02f9838e13b" gracePeriod=30
Mar 13 11:06:15 crc kubenswrapper[4632]: I0313 11:06:15.511443 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="sg-core" containerID="cri-o://7784ac325dc1b12d740a758a07a8e9e03da012db50eef1bc62b207161880f530" gracePeriod=30
Mar 13 11:06:15 crc kubenswrapper[4632]: I0313 11:06:15.511412 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="proxy-httpd" containerID="cri-o://9a2b2ece3b9e850a4d4ebe5776040511a71de7bd0fba43340538aa166e80ade2" gracePeriod=30
Mar 13 11:06:15 crc kubenswrapper[4632]: I0313 11:06:15.789830 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=<
Mar 13 11:06:15 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 11:06:15 crc kubenswrapper[4632]: >
Mar 13 11:06:15 crc kubenswrapper[4632]: I0313 11:06:15.794637 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-tztd9"
Mar 13 11:06:16 crc kubenswrapper[4632]: I0313 11:06:16.152847 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556666-pjsh7" event={"ID":"c793856b-f941-4c9e-b70e-36b4844e4eac","Type":"ContainerStarted","Data":"ca3b5ee01f58147d02e068e963d2c27601cb82a563a21e34b84cdc61a03e2f80"}
Mar 13 11:06:16 crc kubenswrapper[4632]: I0313 11:06:16.160051 4632 generic.go:334] "Generic (PLEG): container finished" podID="d2c1c19b-95a5-4db1-8e54-36fe83704b25" containerID="67516bb124d863acdb93cbafff12001c1c53c2a821587b0e3e99f6135ee28e92" exitCode=0
Mar 13 11:06:16 crc kubenswrapper[4632]: I0313 11:06:16.160107 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d2c1c19b-95a5-4db1-8e54-36fe83704b25","Type":"ContainerDied","Data":"67516bb124d863acdb93cbafff12001c1c53c2a821587b0e3e99f6135ee28e92"}
Mar 13 11:06:16 crc kubenswrapper[4632]: I0313 11:06:16.174133 4632 generic.go:334] "Generic (PLEG): container finished" podID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerID="9a2b2ece3b9e850a4d4ebe5776040511a71de7bd0fba43340538aa166e80ade2" exitCode=0
Mar 13 11:06:16 crc kubenswrapper[4632]: I0313 11:06:16.174506 4632 generic.go:334] "Generic (PLEG): container finished" podID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerID="7784ac325dc1b12d740a758a07a8e9e03da012db50eef1bc62b207161880f530" exitCode=2
Mar 13 11:06:16 crc kubenswrapper[4632]: I0313 11:06:16.174405 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac97dc03-9537-4f95-bb79-5bb60a99089d","Type":"ContainerDied","Data":"9a2b2ece3b9e850a4d4ebe5776040511a71de7bd0fba43340538aa166e80ade2"}
Mar 13 11:06:16 crc kubenswrapper[4632]: I0313 11:06:16.174653 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac97dc03-9537-4f95-bb79-5bb60a99089d","Type":"ContainerDied","Data":"7784ac325dc1b12d740a758a07a8e9e03da012db50eef1bc62b207161880f530"}
Mar 13 11:06:16 crc kubenswrapper[4632]: I0313 11:06:16.174884 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556666-pjsh7" podStartSLOduration=14.206123141 podStartE2EDuration="15.17249465s" podCreationTimestamp="2026-03-13 11:06:01 +0000 UTC" firstStartedPulling="2026-03-13 11:06:13.314219236 +0000 UTC m=+3747.336749369" lastFinishedPulling="2026-03-13 11:06:14.280590745 +0000 UTC m=+3748.303120878" observedRunningTime="2026-03-13 11:06:16.170610543 +0000 UTC m=+3750.193140686" watchObservedRunningTime="2026-03-13 11:06:16.17249465 +0000 UTC m=+3750.195024793"
Mar 13 11:06:17 crc kubenswrapper[4632]: I0313 11:06:17.189388 4632 generic.go:334] "Generic (PLEG): container finished" podID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerID="5d34565e3f3d53e4eb4eec7fc127b7d0ef95db5c894a8b9fbc65ec70d12e4d20" exitCode=0
Mar 13 11:06:17 crc kubenswrapper[4632]: I0313 11:06:17.189478 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac97dc03-9537-4f95-bb79-5bb60a99089d","Type":"ContainerDied","Data":"5d34565e3f3d53e4eb4eec7fc127b7d0ef95db5c894a8b9fbc65ec70d12e4d20"}
Mar 13 11:06:18 crc kubenswrapper[4632]: I0313 11:06:18.242427 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-6nb82"
Mar 13 11:06:18 crc kubenswrapper[4632]: I0313 11:06:18.501620 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Mar 13 11:06:18 crc kubenswrapper[4632]: I0313 11:06:18.759774 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Mar 13 11:06:19 crc kubenswrapper[4632]: I0313 11:06:19.214637 4632 generic.go:334] "Generic (PLEG): container finished" podID="c793856b-f941-4c9e-b70e-36b4844e4eac" containerID="ca3b5ee01f58147d02e068e963d2c27601cb82a563a21e34b84cdc61a03e2f80" exitCode=0
Mar 13 11:06:19 crc kubenswrapper[4632]: I0313 11:06:19.214704 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556666-pjsh7" event={"ID":"c793856b-f941-4c9e-b70e-36b4844e4eac","Type":"ContainerDied","Data":"ca3b5ee01f58147d02e068e963d2c27601cb82a563a21e34b84cdc61a03e2f80"}
Mar 13 11:06:19 crc kubenswrapper[4632]: I0313 11:06:19.219288 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d2c1c19b-95a5-4db1-8e54-36fe83704b25","Type":"ContainerStarted","Data":"eebcdf889d3b8172d873374b42b5baa02f10654715a042e6a0eb5b0ebb82e252"}
Mar 13 11:06:20 crc kubenswrapper[4632]: I0313 11:06:20.326049 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Mar 13 11:06:20 crc kubenswrapper[4632]: I0313 11:06:20.376277 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-75bb75cfd7-8sh2x"
Mar 13 11:06:21 crc kubenswrapper[4632]: I0313 11:06:21.042783 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556666-pjsh7"
Mar 13 11:06:21 crc kubenswrapper[4632]: I0313 11:06:21.214096 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9fsz\" (UniqueName: \"kubernetes.io/projected/c793856b-f941-4c9e-b70e-36b4844e4eac-kube-api-access-f9fsz\") pod \"c793856b-f941-4c9e-b70e-36b4844e4eac\" (UID: \"c793856b-f941-4c9e-b70e-36b4844e4eac\") "
Mar 13 11:06:21 crc kubenswrapper[4632]: I0313 11:06:21.246319 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c793856b-f941-4c9e-b70e-36b4844e4eac-kube-api-access-f9fsz" (OuterVolumeSpecName: "kube-api-access-f9fsz") pod "c793856b-f941-4c9e-b70e-36b4844e4eac" (UID: "c793856b-f941-4c9e-b70e-36b4844e4eac"). InnerVolumeSpecName "kube-api-access-f9fsz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:06:21 crc kubenswrapper[4632]: I0313 11:06:21.248326 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556666-pjsh7"
Mar 13 11:06:21 crc kubenswrapper[4632]: I0313 11:06:21.249324 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556666-pjsh7" event={"ID":"c793856b-f941-4c9e-b70e-36b4844e4eac","Type":"ContainerDied","Data":"c4d5e3a91207e5a09f63b2550c521edf15d369030662552b3e6ac28d2f654bea"}
Mar 13 11:06:21 crc kubenswrapper[4632]: I0313 11:06:21.261308 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4d5e3a91207e5a09f63b2550c521edf15d369030662552b3e6ac28d2f654bea"
Mar 13 11:06:21 crc kubenswrapper[4632]: I0313 11:06:21.319138 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9fsz\" (UniqueName: \"kubernetes.io/projected/c793856b-f941-4c9e-b70e-36b4844e4eac-kube-api-access-f9fsz\") on node \"crc\" DevicePath \"\""
Mar 13 11:06:21 crc kubenswrapper[4632]: I0313 11:06:21.349629 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556660-dxrjl"]
Mar 13 11:06:21 crc kubenswrapper[4632]: I0313 11:06:21.363247 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 13 11:06:21 crc kubenswrapper[4632]: I0313 11:06:21.364159 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556660-dxrjl"]
Mar 13 11:06:21 crc kubenswrapper[4632]: E0313 11:06:21.500708 4632 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc793856b_f941_4c9e_b70e_36b4844e4eac.slice\": RecentStats: unable to find data in memory cache]"
Mar 13 11:06:22 crc kubenswrapper[4632]: I0313 11:06:22.057010 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae414ebe-e9fa-4c30-965a-e368234bbb18" path="/var/lib/kubelet/pods/ae414ebe-e9fa-4c30-965a-e368234bbb18/volumes"
Mar 13 11:06:22 crc kubenswrapper[4632]: I0313 11:06:22.476861 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Mar 13 11:06:22 crc kubenswrapper[4632]: I0313 11:06:22.476977 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Mar 13 11:06:22 crc kubenswrapper[4632]: I0313 11:06:22.612920 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Mar 13 11:06:23 crc kubenswrapper[4632]: I0313 11:06:23.044091 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582"
Mar 13 11:06:23 crc kubenswrapper[4632]: E0313 11:06:23.044674 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 11:06:23 crc kubenswrapper[4632]: I0313 11:06:23.285595 4632 generic.go:334] "Generic (PLEG): container finished" podID="a62e0eae-95dd-40a3-a489-80646fde4301" containerID="ec15c016ac8280363b8fb347025993466f5b7492f2d0ac470ef8fc423974c0e2" exitCode=1
Mar 13 11:06:23 crc kubenswrapper[4632]: I0313 11:06:23.285680 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"a62e0eae-95dd-40a3-a489-80646fde4301","Type":"ContainerDied","Data":"ec15c016ac8280363b8fb347025993466f5b7492f2d0ac470ef8fc423974c0e2"}
Mar 13 11:06:23 crc kubenswrapper[4632]: I0313 11:06:23.408370 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.396656 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.430119 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.518666 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"a62e0eae-95dd-40a3-a489-80646fde4301\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") "
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.518769 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a62e0eae-95dd-40a3-a489-80646fde4301-openstack-config\") pod \"a62e0eae-95dd-40a3-a489-80646fde4301\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") "
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.518794 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w5vp\" (UniqueName: \"kubernetes.io/projected/a62e0eae-95dd-40a3-a489-80646fde4301-kube-api-access-8w5vp\") pod \"a62e0eae-95dd-40a3-a489-80646fde4301\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") "
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.518819 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a62e0eae-95dd-40a3-a489-80646fde4301-test-operator-ephemeral-workdir\") pod \"a62e0eae-95dd-40a3-a489-80646fde4301\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") "
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.518915 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-ssh-key\") pod \"a62e0eae-95dd-40a3-a489-80646fde4301\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") "
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.518987 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a62e0eae-95dd-40a3-a489-80646fde4301-test-operator-ephemeral-temporary\") pod \"a62e0eae-95dd-40a3-a489-80646fde4301\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") "
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.519008 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a62e0eae-95dd-40a3-a489-80646fde4301-config-data\") pod \"a62e0eae-95dd-40a3-a489-80646fde4301\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") "
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.519051 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-ca-certs\") pod \"a62e0eae-95dd-40a3-a489-80646fde4301\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") "
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.519080 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-openstack-config-secret\") pod \"a62e0eae-95dd-40a3-a489-80646fde4301\" (UID: \"a62e0eae-95dd-40a3-a489-80646fde4301\") "
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.522070 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a62e0eae-95dd-40a3-a489-80646fde4301-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a62e0eae-95dd-40a3-a489-80646fde4301" (UID: "a62e0eae-95dd-40a3-a489-80646fde4301"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.523898 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a62e0eae-95dd-40a3-a489-80646fde4301-config-data" (OuterVolumeSpecName: "config-data") pod "a62e0eae-95dd-40a3-a489-80646fde4301" (UID: "a62e0eae-95dd-40a3-a489-80646fde4301"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.530445 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a62e0eae-95dd-40a3-a489-80646fde4301" (UID: "a62e0eae-95dd-40a3-a489-80646fde4301"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.536446 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a62e0eae-95dd-40a3-a489-80646fde4301-kube-api-access-8w5vp" (OuterVolumeSpecName: "kube-api-access-8w5vp") pod "a62e0eae-95dd-40a3-a489-80646fde4301" (UID: "a62e0eae-95dd-40a3-a489-80646fde4301"). InnerVolumeSpecName "kube-api-access-8w5vp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.576466 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a62e0eae-95dd-40a3-a489-80646fde4301" (UID: "a62e0eae-95dd-40a3-a489-80646fde4301"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.614371 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a62e0eae-95dd-40a3-a489-80646fde4301" (UID: "a62e0eae-95dd-40a3-a489-80646fde4301"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.621912 4632 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-ca-certs\") on node \"crc\" DevicePath \"\""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.625007 4632 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.625036 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w5vp\" (UniqueName: \"kubernetes.io/projected/a62e0eae-95dd-40a3-a489-80646fde4301-kube-api-access-8w5vp\") on node \"crc\" DevicePath \"\""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.625051 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-ssh-key\") on node \"crc\" DevicePath \"\""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.625061 4632 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a62e0eae-95dd-40a3-a489-80646fde4301-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.625071 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a62e0eae-95dd-40a3-a489-80646fde4301-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.647400 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a62e0eae-95dd-40a3-a489-80646fde4301-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a62e0eae-95dd-40a3-a489-80646fde4301" (UID: "a62e0eae-95dd-40a3-a489-80646fde4301"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.652842 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"]
Mar 13 11:06:25 crc kubenswrapper[4632]: E0313 11:06:25.656989 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62e0eae-95dd-40a3-a489-80646fde4301" containerName="tempest-tests-tempest-tests-runner"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.657230 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62e0eae-95dd-40a3-a489-80646fde4301" containerName="tempest-tests-tempest-tests-runner"
Mar 13 11:06:25 crc kubenswrapper[4632]: E0313 11:06:25.657877 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c793856b-f941-4c9e-b70e-36b4844e4eac" containerName="oc"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.658021 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c793856b-f941-4c9e-b70e-36b4844e4eac" containerName="oc"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.658463 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62e0eae-95dd-40a3-a489-80646fde4301" containerName="tempest-tests-tempest-tests-runner"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.658570 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="c793856b-f941-4c9e-b70e-36b4844e4eac" containerName="oc"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.666139 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a62e0eae-95dd-40a3-a489-80646fde4301-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a62e0eae-95dd-40a3-a489-80646fde4301" (UID: "a62e0eae-95dd-40a3-a489-80646fde4301"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.668161 4632 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.670397 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.679529 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a62e0eae-95dd-40a3-a489-80646fde4301" (UID: "a62e0eae-95dd-40a3-a489-80646fde4301"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.682030 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.682032 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.693083 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"]
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.726945 4632 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a62e0eae-95dd-40a3-a489-80646fde4301-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.727198 4632 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.727210 4632 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a62e0eae-95dd-40a3-a489-80646fde4301-openstack-config\") on node \"crc\" DevicePath \"\""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.727219 4632 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a62e0eae-95dd-40a3-a489-80646fde4301-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.730868 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=<
Mar 13 11:06:25 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 11:06:25 crc kubenswrapper[4632]: >
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.730961 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mm6fq"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.731847 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"0bbbe65ea71f36a37f33d902708fe700b15c322c14c94c121a3ca523a54d026b"} pod="openshift-marketplace/redhat-operators-mm6fq" containerMessage="Container registry-server failed startup probe, will be restarted"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.731880 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" containerID="cri-o://0bbbe65ea71f36a37f33d902708fe700b15c322c14c94c121a3ca523a54d026b" gracePeriod=30
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.829463 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.829792 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/611401cc-04fe-4276-82fa-a896182802d4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.829997 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/611401cc-04fe-4276-82fa-a896182802d4-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.830225 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.830415 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/611401cc-04fe-4276-82fa-a896182802d4-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.830499 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.830581 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.830605 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrwf2\" (UniqueName: \"kubernetes.io/projected/611401cc-04fe-4276-82fa-a896182802d4-kube-api-access-hrwf2\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing"
Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.830662 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/611401cc-04fe-4276-82fa-a896182802d4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") "
pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.932647 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.932883 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/611401cc-04fe-4276-82fa-a896182802d4-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.932992 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.933118 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.933187 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrwf2\" (UniqueName: \"kubernetes.io/projected/611401cc-04fe-4276-82fa-a896182802d4-kube-api-access-hrwf2\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.933297 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/611401cc-04fe-4276-82fa-a896182802d4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.933779 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.934030 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/611401cc-04fe-4276-82fa-a896182802d4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc 
kubenswrapper[4632]: I0313 11:06:25.934169 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/611401cc-04fe-4276-82fa-a896182802d4-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.934486 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/611401cc-04fe-4276-82fa-a896182802d4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.934525 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/611401cc-04fe-4276-82fa-a896182802d4-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.933722 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/611401cc-04fe-4276-82fa-a896182802d4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.934743 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.937734 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.940602 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.942534 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.951811 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/611401cc-04fe-4276-82fa-a896182802d4-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.958671 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrwf2\" (UniqueName: \"kubernetes.io/projected/611401cc-04fe-4276-82fa-a896182802d4-kube-api-access-hrwf2\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.977217 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:25 crc kubenswrapper[4632]: I0313 11:06:25.992064 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 11:06:26 crc kubenswrapper[4632]: I0313 11:06:26.313049 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"a62e0eae-95dd-40a3-a489-80646fde4301","Type":"ContainerDied","Data":"959458908fe1f2c8aa4edafce9f9395e573f668491b9554e12daf71db7b5cc6a"} Mar 13 11:06:26 crc kubenswrapper[4632]: I0313 11:06:26.313414 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="959458908fe1f2c8aa4edafce9f9395e573f668491b9554e12daf71db7b5cc6a" Mar 13 11:06:26 crc kubenswrapper[4632]: I0313 11:06:26.313106 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 13 11:06:26 crc kubenswrapper[4632]: I0313 11:06:26.639834 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Mar 13 11:06:27 crc kubenswrapper[4632]: I0313 11:06:27.324850 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"611401cc-04fe-4276-82fa-a896182802d4","Type":"ContainerStarted","Data":"95b1b1d6a519cb7b9bfef154cebb6e4b73104a8706f52af49a8997ffa20ebd91"} Mar 13 11:06:30 crc kubenswrapper[4632]: I0313 11:06:30.354242 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"611401cc-04fe-4276-82fa-a896182802d4","Type":"ContainerStarted","Data":"e5092a16adcd02c327c069b34afdd26aca8018f63ed747e3778a6c696a0e6a3c"} Mar 13 11:06:30 crc kubenswrapper[4632]: I0313 11:06:30.374643 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" podStartSLOduration=5.374631369 podStartE2EDuration="5.374631369s" podCreationTimestamp="2026-03-13 11:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:06:30.373131872 +0000 UTC m=+3764.395662015" watchObservedRunningTime="2026-03-13 11:06:30.374631369 +0000 UTC m=+3764.397161502" Mar 13 11:06:35 crc kubenswrapper[4632]: I0313 11:06:35.046301 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:06:35 crc kubenswrapper[4632]: E0313 11:06:35.047058 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:06:42 crc kubenswrapper[4632]: I0313 11:06:42.907764 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.232:3000/\": dial tcp 10.217.0.232:3000: connect: connection refused" Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.539568 4632 generic.go:334] "Generic (PLEG): container finished" podID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerID="b57f85647a9f24895343d16ffa69abe242612a8de43972c5cb53a02f9838e13b" exitCode=137 Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.540169 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac97dc03-9537-4f95-bb79-5bb60a99089d","Type":"ContainerDied","Data":"b57f85647a9f24895343d16ffa69abe242612a8de43972c5cb53a02f9838e13b"} Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.540217 4632 scope.go:117] "RemoveContainer" containerID="3da76186915cfbbbe688750a6110b1e64143d37e61c44ef62a9740eabb32c983" Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.755371 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.893150 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-ceilometer-tls-certs\") pod \"ac97dc03-9537-4f95-bb79-5bb60a99089d\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.893666 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22tlw\" (UniqueName: \"kubernetes.io/projected/ac97dc03-9537-4f95-bb79-5bb60a99089d-kube-api-access-22tlw\") pod \"ac97dc03-9537-4f95-bb79-5bb60a99089d\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.893761 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-scripts\") pod \"ac97dc03-9537-4f95-bb79-5bb60a99089d\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.893983 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac97dc03-9537-4f95-bb79-5bb60a99089d-run-httpd\") pod \"ac97dc03-9537-4f95-bb79-5bb60a99089d\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.894036 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-combined-ca-bundle\") pod \"ac97dc03-9537-4f95-bb79-5bb60a99089d\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.894238 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac97dc03-9537-4f95-bb79-5bb60a99089d-log-httpd\") pod \"ac97dc03-9537-4f95-bb79-5bb60a99089d\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.894300 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-sg-core-conf-yaml\") pod \"ac97dc03-9537-4f95-bb79-5bb60a99089d\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.894375 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-config-data\") pod \"ac97dc03-9537-4f95-bb79-5bb60a99089d\" (UID: \"ac97dc03-9537-4f95-bb79-5bb60a99089d\") " Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.896834 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac97dc03-9537-4f95-bb79-5bb60a99089d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ac97dc03-9537-4f95-bb79-5bb60a99089d" (UID: "ac97dc03-9537-4f95-bb79-5bb60a99089d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.897114 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac97dc03-9537-4f95-bb79-5bb60a99089d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ac97dc03-9537-4f95-bb79-5bb60a99089d" (UID: "ac97dc03-9537-4f95-bb79-5bb60a99089d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.910201 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac97dc03-9537-4f95-bb79-5bb60a99089d-kube-api-access-22tlw" (OuterVolumeSpecName: "kube-api-access-22tlw") pod "ac97dc03-9537-4f95-bb79-5bb60a99089d" (UID: "ac97dc03-9537-4f95-bb79-5bb60a99089d"). InnerVolumeSpecName "kube-api-access-22tlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.936274 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-scripts" (OuterVolumeSpecName: "scripts") pod "ac97dc03-9537-4f95-bb79-5bb60a99089d" (UID: "ac97dc03-9537-4f95-bb79-5bb60a99089d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.948311 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ac97dc03-9537-4f95-bb79-5bb60a99089d" (UID: "ac97dc03-9537-4f95-bb79-5bb60a99089d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.979113 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ac97dc03-9537-4f95-bb79-5bb60a99089d" (UID: "ac97dc03-9537-4f95-bb79-5bb60a99089d"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.997211 4632 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac97dc03-9537-4f95-bb79-5bb60a99089d-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.997316 4632 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.997328 4632 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.997339 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22tlw\" (UniqueName: \"kubernetes.io/projected/ac97dc03-9537-4f95-bb79-5bb60a99089d-kube-api-access-22tlw\") on node \"crc\" DevicePath \"\"" Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.997346 4632 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 11:06:46 crc kubenswrapper[4632]: I0313 11:06:46.997354 4632 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac97dc03-9537-4f95-bb79-5bb60a99089d-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.033515 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac97dc03-9537-4f95-bb79-5bb60a99089d" (UID: "ac97dc03-9537-4f95-bb79-5bb60a99089d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.044891 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:06:47 crc kubenswrapper[4632]: E0313 11:06:47.045374 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.099476 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.115140 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-config-data" (OuterVolumeSpecName: "config-data") pod "ac97dc03-9537-4f95-bb79-5bb60a99089d" (UID: "ac97dc03-9537-4f95-bb79-5bb60a99089d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.201123 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac97dc03-9537-4f95-bb79-5bb60a99089d-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.553555 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac97dc03-9537-4f95-bb79-5bb60a99089d","Type":"ContainerDied","Data":"6c35a65f59ec813bb19b2b3e4862d24780f1cf59570c0c358308767506eead20"} Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.553651 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.553668 4632 scope.go:117] "RemoveContainer" containerID="b57f85647a9f24895343d16ffa69abe242612a8de43972c5cb53a02f9838e13b" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.605723 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.617870 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.621967 4632 scope.go:117] "RemoveContainer" containerID="9a2b2ece3b9e850a4d4ebe5776040511a71de7bd0fba43340538aa166e80ade2" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.653690 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 11:06:47 crc kubenswrapper[4632]: E0313 11:06:47.654167 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="ceilometer-notification-agent" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.654187 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="ceilometer-notification-agent" Mar 13 11:06:47 crc kubenswrapper[4632]: E0313 11:06:47.654217 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="proxy-httpd" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.654226 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="proxy-httpd" Mar 13 11:06:47 crc kubenswrapper[4632]: E0313 11:06:47.654255 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="sg-core" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.654264 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="sg-core" Mar 13 11:06:47 crc kubenswrapper[4632]: E0313 11:06:47.654278 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="ceilometer-central-agent" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.654288 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="ceilometer-central-agent" Mar 13 11:06:47 crc kubenswrapper[4632]: E0313 11:06:47.654302 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="ceilometer-central-agent" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.654310 4632 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="ceilometer-central-agent" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.654525 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="ceilometer-central-agent" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.654543 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="proxy-httpd" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.654558 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="ceilometer-notification-agent" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.654586 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="sg-core" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.655021 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" containerName="ceilometer-central-agent" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.661402 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.671541 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.671998 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.672151 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.678082 4632 scope.go:117] "RemoveContainer" containerID="7784ac325dc1b12d740a758a07a8e9e03da012db50eef1bc62b207161880f530" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.682140 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.778836 4632 scope.go:117] "RemoveContainer" containerID="5d34565e3f3d53e4eb4eec7fc127b7d0ef95db5c894a8b9fbc65ec70d12e4d20" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.811501 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/046f071d-f091-4681-8a9b-06c7e7dc2192-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.811615 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btc6h\" (UniqueName: \"kubernetes.io/projected/046f071d-f091-4681-8a9b-06c7e7dc2192-kube-api-access-btc6h\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.811691 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/046f071d-f091-4681-8a9b-06c7e7dc2192-run-httpd\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.811850 4632 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046f071d-f091-4681-8a9b-06c7e7dc2192-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.811960 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/046f071d-f091-4681-8a9b-06c7e7dc2192-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.812072 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/046f071d-f091-4681-8a9b-06c7e7dc2192-log-httpd\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.812119 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046f071d-f091-4681-8a9b-06c7e7dc2192-config-data\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.812189 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/046f071d-f091-4681-8a9b-06c7e7dc2192-scripts\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.914667 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046f071d-f091-4681-8a9b-06c7e7dc2192-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.915795 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/046f071d-f091-4681-8a9b-06c7e7dc2192-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.915892 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/046f071d-f091-4681-8a9b-06c7e7dc2192-log-httpd\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.915971 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046f071d-f091-4681-8a9b-06c7e7dc2192-config-data\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.916667 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/046f071d-f091-4681-8a9b-06c7e7dc2192-scripts\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc 
kubenswrapper[4632]: I0313 11:06:47.916871 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/046f071d-f091-4681-8a9b-06c7e7dc2192-log-httpd\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.917354 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/046f071d-f091-4681-8a9b-06c7e7dc2192-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.917438 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btc6h\" (UniqueName: \"kubernetes.io/projected/046f071d-f091-4681-8a9b-06c7e7dc2192-kube-api-access-btc6h\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.917491 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/046f071d-f091-4681-8a9b-06c7e7dc2192-run-httpd\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.918082 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/046f071d-f091-4681-8a9b-06c7e7dc2192-run-httpd\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.922489 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/046f071d-f091-4681-8a9b-06c7e7dc2192-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.923322 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/046f071d-f091-4681-8a9b-06c7e7dc2192-scripts\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.924059 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046f071d-f091-4681-8a9b-06c7e7dc2192-config-data\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.924658 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046f071d-f091-4681-8a9b-06c7e7dc2192-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.934162 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/046f071d-f091-4681-8a9b-06c7e7dc2192-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:47 crc kubenswrapper[4632]: I0313 11:06:47.942345 4632 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btc6h\" (UniqueName: \"kubernetes.io/projected/046f071d-f091-4681-8a9b-06c7e7dc2192-kube-api-access-btc6h\") pod \"ceilometer-0\" (UID: \"046f071d-f091-4681-8a9b-06c7e7dc2192\") " pod="openstack/ceilometer-0" Mar 13 11:06:48 crc kubenswrapper[4632]: I0313 11:06:48.059100 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 11:06:48 crc kubenswrapper[4632]: I0313 11:06:48.062422 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac97dc03-9537-4f95-bb79-5bb60a99089d" path="/var/lib/kubelet/pods/ac97dc03-9537-4f95-bb79-5bb60a99089d/volumes" Mar 13 11:06:48 crc kubenswrapper[4632]: I0313 11:06:48.633850 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 11:06:48 crc kubenswrapper[4632]: W0313 11:06:48.643014 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod046f071d_f091_4681_8a9b_06c7e7dc2192.slice/crio-27050561388e6ddf9f76aa250d32d0fc503c15ed538344df77e2efb4d8e9e619 WatchSource:0}: Error finding container 27050561388e6ddf9f76aa250d32d0fc503c15ed538344df77e2efb4d8e9e619: Status 404 returned error can't find the container with id 27050561388e6ddf9f76aa250d32d0fc503c15ed538344df77e2efb4d8e9e619 Mar 13 11:06:49 crc kubenswrapper[4632]: I0313 11:06:49.574113 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"046f071d-f091-4681-8a9b-06c7e7dc2192","Type":"ContainerStarted","Data":"576756d63180fa10edf4bdedeab7fa16aa80d2e15a64d7a8dc5e32f747e2cd3a"} Mar 13 11:06:49 crc kubenswrapper[4632]: I0313 11:06:49.574713 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"046f071d-f091-4681-8a9b-06c7e7dc2192","Type":"ContainerStarted","Data":"02ad1c47d8e6b66a14651c47c09b8b2a59f4584678d570942aab1e3282149017"} Mar 13 11:06:49 crc kubenswrapper[4632]: I0313 11:06:49.574727 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"046f071d-f091-4681-8a9b-06c7e7dc2192","Type":"ContainerStarted","Data":"27050561388e6ddf9f76aa250d32d0fc503c15ed538344df77e2efb4d8e9e619"} Mar 13 11:06:50 crc kubenswrapper[4632]: I0313 11:06:50.584931 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"046f071d-f091-4681-8a9b-06c7e7dc2192","Type":"ContainerStarted","Data":"caef16b68b74ddf8de5706c4854903b7a02a9f270caee37fd1eb97735d2e12cf"} Mar 13 11:06:52 crc kubenswrapper[4632]: I0313 11:06:52.603707 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"046f071d-f091-4681-8a9b-06c7e7dc2192","Type":"ContainerStarted","Data":"dfdadecf41bcb95c18a4fe4bdcab7d2331d8cdd63461ec09255dc41b0025b9ee"} Mar 13 11:06:52 crc kubenswrapper[4632]: I0313 11:06:52.604221 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 13 11:06:52 crc kubenswrapper[4632]: I0313 11:06:52.630521 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.937397272 podStartE2EDuration="5.630503906s" podCreationTimestamp="2026-03-13 11:06:47 +0000 UTC" firstStartedPulling="2026-03-13 11:06:48.64578449 +0000 UTC m=+3782.668314623" lastFinishedPulling="2026-03-13 11:06:51.338891104 +0000 UTC m=+3785.361421257" observedRunningTime="2026-03-13 11:06:52.626818854 
+0000 UTC m=+3786.649348987" watchObservedRunningTime="2026-03-13 11:06:52.630503906 +0000 UTC m=+3786.653034039" Mar 13 11:06:56 crc kubenswrapper[4632]: I0313 11:06:56.668873 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mm6fq_269ac923-f4f9-43f2-934f-8b0f26f6c4af/registry-server/1.log" Mar 13 11:06:56 crc kubenswrapper[4632]: I0313 11:06:56.675654 4632 generic.go:334] "Generic (PLEG): container finished" podID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerID="0bbbe65ea71f36a37f33d902708fe700b15c322c14c94c121a3ca523a54d026b" exitCode=137 Mar 13 11:06:56 crc kubenswrapper[4632]: I0313 11:06:56.675710 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm6fq" event={"ID":"269ac923-f4f9-43f2-934f-8b0f26f6c4af","Type":"ContainerDied","Data":"0bbbe65ea71f36a37f33d902708fe700b15c322c14c94c121a3ca523a54d026b"} Mar 13 11:06:56 crc kubenswrapper[4632]: I0313 11:06:56.675772 4632 scope.go:117] "RemoveContainer" containerID="67b531d65834622b374c34e759c46150ba93cade0961705aa2b576c0c27e19d2" Mar 13 11:06:57 crc kubenswrapper[4632]: I0313 11:06:57.688344 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mm6fq_269ac923-f4f9-43f2-934f-8b0f26f6c4af/registry-server/1.log" Mar 13 11:06:57 crc kubenswrapper[4632]: I0313 11:06:57.689402 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm6fq" event={"ID":"269ac923-f4f9-43f2-934f-8b0f26f6c4af","Type":"ContainerStarted","Data":"261dd98ddf33ee923d82b99030fb5e045cfb4833509d7ef9ec05e635f3f13122"} Mar 13 11:06:58 crc kubenswrapper[4632]: I0313 11:06:58.054536 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:06:58 crc kubenswrapper[4632]: E0313 11:06:58.054895 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:07:04 crc kubenswrapper[4632]: I0313 11:07:04.671139 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mm6fq" Mar 13 11:07:04 crc kubenswrapper[4632]: I0313 11:07:04.671585 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mm6fq" Mar 13 11:07:05 crc kubenswrapper[4632]: I0313 11:07:05.720977 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=< Mar 13 11:07:05 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:07:05 crc kubenswrapper[4632]: > Mar 13 11:07:12 crc kubenswrapper[4632]: I0313 11:07:12.044982 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:07:12 crc kubenswrapper[4632]: E0313 11:07:12.045778 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:07:15 crc kubenswrapper[4632]: I0313 11:07:15.378361 4632 scope.go:117] "RemoveContainer" containerID="b3d4b9e8bcea3a6dbdeee6316ce9071df3a8c8906a4c416a00caede29a1de5ca" Mar 13 11:07:15 crc kubenswrapper[4632]: I0313 11:07:15.713488 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=< Mar 13 11:07:15 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:07:15 crc kubenswrapper[4632]: > Mar 13 11:07:18 crc kubenswrapper[4632]: I0313 11:07:18.075276 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 13 11:07:23 crc kubenswrapper[4632]: I0313 11:07:23.044923 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:07:23 crc kubenswrapper[4632]: E0313 11:07:23.046713 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:07:25 crc kubenswrapper[4632]: I0313 11:07:25.731298 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=< Mar 13 11:07:25 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:07:25 crc kubenswrapper[4632]: > Mar 13 11:07:35 crc kubenswrapper[4632]: I0313 11:07:35.728292 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" probeResult="failure" output=< Mar 13 11:07:35 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:07:35 crc kubenswrapper[4632]: > Mar 13 11:07:36 crc kubenswrapper[4632]: I0313 11:07:36.044352 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:07:36 crc kubenswrapper[4632]: E0313 11:07:36.044589 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:07:44 crc kubenswrapper[4632]: I0313 11:07:44.732070 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mm6fq" Mar 13 11:07:44 crc kubenswrapper[4632]: I0313 11:07:44.803341 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-mm6fq" Mar 13 11:07:45 crc kubenswrapper[4632]: I0313 11:07:45.035017 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mm6fq"] Mar 13 11:07:46 crc kubenswrapper[4632]: I0313 11:07:46.203484 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mm6fq" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" containerID="cri-o://261dd98ddf33ee923d82b99030fb5e045cfb4833509d7ef9ec05e635f3f13122" gracePeriod=2 Mar 13 11:07:47 crc kubenswrapper[4632]: I0313 11:07:47.216873 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mm6fq_269ac923-f4f9-43f2-934f-8b0f26f6c4af/registry-server/1.log" Mar 13 11:07:47 crc kubenswrapper[4632]: I0313 11:07:47.218536 4632 generic.go:334] "Generic (PLEG): container finished" podID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerID="261dd98ddf33ee923d82b99030fb5e045cfb4833509d7ef9ec05e635f3f13122" exitCode=0 Mar 13 11:07:47 crc kubenswrapper[4632]: I0313 11:07:47.218574 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm6fq" event={"ID":"269ac923-f4f9-43f2-934f-8b0f26f6c4af","Type":"ContainerDied","Data":"261dd98ddf33ee923d82b99030fb5e045cfb4833509d7ef9ec05e635f3f13122"} Mar 13 11:07:47 crc kubenswrapper[4632]: I0313 11:07:47.218624 4632 scope.go:117] "RemoveContainer" containerID="0bbbe65ea71f36a37f33d902708fe700b15c322c14c94c121a3ca523a54d026b" Mar 13 11:07:47 crc kubenswrapper[4632]: I0313 11:07:47.732008 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mm6fq" Mar 13 11:07:47 crc kubenswrapper[4632]: I0313 11:07:47.780923 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269ac923-f4f9-43f2-934f-8b0f26f6c4af-utilities\") pod \"269ac923-f4f9-43f2-934f-8b0f26f6c4af\" (UID: \"269ac923-f4f9-43f2-934f-8b0f26f6c4af\") " Mar 13 11:07:47 crc kubenswrapper[4632]: I0313 11:07:47.781002 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tfk7\" (UniqueName: \"kubernetes.io/projected/269ac923-f4f9-43f2-934f-8b0f26f6c4af-kube-api-access-9tfk7\") pod \"269ac923-f4f9-43f2-934f-8b0f26f6c4af\" (UID: \"269ac923-f4f9-43f2-934f-8b0f26f6c4af\") " Mar 13 11:07:47 crc kubenswrapper[4632]: I0313 11:07:47.781037 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269ac923-f4f9-43f2-934f-8b0f26f6c4af-catalog-content\") pod \"269ac923-f4f9-43f2-934f-8b0f26f6c4af\" (UID: \"269ac923-f4f9-43f2-934f-8b0f26f6c4af\") " Mar 13 11:07:47 crc kubenswrapper[4632]: I0313 11:07:47.782519 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/269ac923-f4f9-43f2-934f-8b0f26f6c4af-utilities" (OuterVolumeSpecName: "utilities") pod "269ac923-f4f9-43f2-934f-8b0f26f6c4af" (UID: "269ac923-f4f9-43f2-934f-8b0f26f6c4af"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:07:47 crc kubenswrapper[4632]: I0313 11:07:47.802566 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/269ac923-f4f9-43f2-934f-8b0f26f6c4af-kube-api-access-9tfk7" (OuterVolumeSpecName: "kube-api-access-9tfk7") pod "269ac923-f4f9-43f2-934f-8b0f26f6c4af" (UID: "269ac923-f4f9-43f2-934f-8b0f26f6c4af"). InnerVolumeSpecName "kube-api-access-9tfk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:07:47 crc kubenswrapper[4632]: I0313 11:07:47.883591 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269ac923-f4f9-43f2-934f-8b0f26f6c4af-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:07:47 crc kubenswrapper[4632]: I0313 11:07:47.883625 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tfk7\" (UniqueName: \"kubernetes.io/projected/269ac923-f4f9-43f2-934f-8b0f26f6c4af-kube-api-access-9tfk7\") on node \"crc\" DevicePath \"\"" Mar 13 11:07:47 crc kubenswrapper[4632]: I0313 11:07:47.910317 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/269ac923-f4f9-43f2-934f-8b0f26f6c4af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "269ac923-f4f9-43f2-934f-8b0f26f6c4af" (UID: "269ac923-f4f9-43f2-934f-8b0f26f6c4af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:07:47 crc kubenswrapper[4632]: I0313 11:07:47.985605 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269ac923-f4f9-43f2-934f-8b0f26f6c4af-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:07:48 crc kubenswrapper[4632]: I0313 11:07:48.058560 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:07:48 crc kubenswrapper[4632]: E0313 11:07:48.059278 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:07:48 crc kubenswrapper[4632]: I0313 11:07:48.264295 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm6fq" event={"ID":"269ac923-f4f9-43f2-934f-8b0f26f6c4af","Type":"ContainerDied","Data":"c53342fac7b10f1bef54be90bcf0e83cc2e423f561f4ea27cefd68e4947d5bb4"} Mar 13 11:07:48 crc kubenswrapper[4632]: I0313 11:07:48.266049 4632 scope.go:117] "RemoveContainer" containerID="261dd98ddf33ee923d82b99030fb5e045cfb4833509d7ef9ec05e635f3f13122" Mar 13 11:07:48 crc kubenswrapper[4632]: I0313 11:07:48.264621 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mm6fq" Mar 13 11:07:48 crc kubenswrapper[4632]: I0313 11:07:48.314251 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mm6fq"] Mar 13 11:07:48 crc kubenswrapper[4632]: I0313 11:07:48.323443 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mm6fq"] Mar 13 11:07:48 crc kubenswrapper[4632]: I0313 11:07:48.331446 4632 scope.go:117] "RemoveContainer" containerID="d03ceb30503b22a2ad94cc53347c3b0ae54c134bb2b9db1bd0c47dcfc27a8ece" Mar 13 11:07:48 crc kubenswrapper[4632]: I0313 11:07:48.360190 4632 scope.go:117] "RemoveContainer" containerID="93ddcb9911b3bbd33b20a1520c077d1ce20ed42dceb52f18631471d802d7e139" Mar 13 11:07:50 crc kubenswrapper[4632]: I0313 11:07:50.061709 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" path="/var/lib/kubelet/pods/269ac923-f4f9-43f2-934f-8b0f26f6c4af/volumes" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.014171 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59586ff4c9-s4xn7"] Mar 13 11:07:52 crc kubenswrapper[4632]: E0313 11:07:52.024304 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.024374 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" Mar 13 11:07:52 crc kubenswrapper[4632]: E0313 11:07:52.024435 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.024442 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" Mar 13 11:07:52 crc kubenswrapper[4632]: E0313 11:07:52.024452 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="extract-utilities" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.024460 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="extract-utilities" Mar 13 11:07:52 crc kubenswrapper[4632]: E0313 11:07:52.024486 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.024492 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" Mar 13 11:07:52 crc kubenswrapper[4632]: E0313 11:07:52.024511 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="extract-content" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.024519 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="extract-content" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.024864 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.024881 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" 
containerName="registry-server" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.025326 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="269ac923-f4f9-43f2-934f-8b0f26f6c4af" containerName="registry-server" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.031506 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.058919 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59586ff4c9-s4xn7"] Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.077066 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llcp8\" (UniqueName: \"kubernetes.io/projected/8b9495c7-c9ae-4a07-b216-a250d4cd274e-kube-api-access-llcp8\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.077369 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-config\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.077490 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-internal-tls-certs\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.077770 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-httpd-config\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.078495 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-public-tls-certs\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.078550 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-ovndb-tls-certs\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.078904 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-combined-ca-bundle\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.180641 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-httpd-config\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.180732 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-public-tls-certs\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.180759 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-ovndb-tls-certs\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.180858 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-combined-ca-bundle\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.180992 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llcp8\" (UniqueName: \"kubernetes.io/projected/8b9495c7-c9ae-4a07-b216-a250d4cd274e-kube-api-access-llcp8\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.181052 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-config\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.181077 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-internal-tls-certs\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.192895 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-ovndb-tls-certs\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.192954 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-public-tls-certs\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.193469 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-combined-ca-bundle\") pod \"neutron-59586ff4c9-s4xn7\" (UID: 
\"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.193572 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-httpd-config\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.193971 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-internal-tls-certs\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.199732 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-config\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.204021 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llcp8\" (UniqueName: \"kubernetes.io/projected/8b9495c7-c9ae-4a07-b216-a250d4cd274e-kube-api-access-llcp8\") pod \"neutron-59586ff4c9-s4xn7\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") " pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:52 crc kubenswrapper[4632]: I0313 11:07:52.355394 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:53 crc kubenswrapper[4632]: I0313 11:07:53.830914 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59586ff4c9-s4xn7"] Mar 13 11:07:54 crc kubenswrapper[4632]: I0313 11:07:54.323411 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59586ff4c9-s4xn7" event={"ID":"8b9495c7-c9ae-4a07-b216-a250d4cd274e","Type":"ContainerStarted","Data":"94fc75b5bf96292690ce359a5d4ce65dd30bc2b06b1aeb4d309bd6e1dcd7e70c"} Mar 13 11:07:54 crc kubenswrapper[4632]: I0313 11:07:54.324115 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59586ff4c9-s4xn7" event={"ID":"8b9495c7-c9ae-4a07-b216-a250d4cd274e","Type":"ContainerStarted","Data":"baa77c1d37fb9c8cc82676bdaaab769c07869647e902c21581eef67e591e5d68"} Mar 13 11:07:55 crc kubenswrapper[4632]: I0313 11:07:55.349608 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59586ff4c9-s4xn7" event={"ID":"8b9495c7-c9ae-4a07-b216-a250d4cd274e","Type":"ContainerStarted","Data":"82e1ea3147a5e24713a581b7fd1d1be6dc38543edaf91f1fa20ce5282f06b072"} Mar 13 11:07:55 crc kubenswrapper[4632]: I0313 11:07:55.349848 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:07:55 crc kubenswrapper[4632]: I0313 11:07:55.390100 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59586ff4c9-s4xn7" podStartSLOduration=4.390085623 podStartE2EDuration="4.390085623s" podCreationTimestamp="2026-03-13 11:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:07:55.387303574 +0000 UTC m=+3849.409833707" watchObservedRunningTime="2026-03-13 11:07:55.390085623 
+0000 UTC m=+3849.412615756" Mar 13 11:07:59 crc kubenswrapper[4632]: I0313 11:07:59.045497 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:07:59 crc kubenswrapper[4632]: E0313 11:07:59.046526 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:08:00 crc kubenswrapper[4632]: I0313 11:08:00.180279 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556668-bzw4c"] Mar 13 11:08:00 crc kubenswrapper[4632]: I0313 11:08:00.182177 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556668-bzw4c" Mar 13 11:08:00 crc kubenswrapper[4632]: I0313 11:08:00.195636 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556668-bzw4c"] Mar 13 11:08:00 crc kubenswrapper[4632]: I0313 11:08:00.199832 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:08:00 crc kubenswrapper[4632]: I0313 11:08:00.199834 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:08:00 crc kubenswrapper[4632]: I0313 11:08:00.200023 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:08:00 crc kubenswrapper[4632]: I0313 11:08:00.286630 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm97k\" (UniqueName: \"kubernetes.io/projected/4d08697f-ce87-4d33-823a-9bf5d2d0d801-kube-api-access-jm97k\") pod \"auto-csr-approver-29556668-bzw4c\" (UID: \"4d08697f-ce87-4d33-823a-9bf5d2d0d801\") " pod="openshift-infra/auto-csr-approver-29556668-bzw4c" Mar 13 11:08:00 crc kubenswrapper[4632]: I0313 11:08:00.388735 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm97k\" (UniqueName: \"kubernetes.io/projected/4d08697f-ce87-4d33-823a-9bf5d2d0d801-kube-api-access-jm97k\") pod \"auto-csr-approver-29556668-bzw4c\" (UID: \"4d08697f-ce87-4d33-823a-9bf5d2d0d801\") " pod="openshift-infra/auto-csr-approver-29556668-bzw4c" Mar 13 11:08:00 crc kubenswrapper[4632]: I0313 11:08:00.427825 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm97k\" (UniqueName: \"kubernetes.io/projected/4d08697f-ce87-4d33-823a-9bf5d2d0d801-kube-api-access-jm97k\") pod \"auto-csr-approver-29556668-bzw4c\" (UID: \"4d08697f-ce87-4d33-823a-9bf5d2d0d801\") " pod="openshift-infra/auto-csr-approver-29556668-bzw4c" Mar 13 11:08:00 crc kubenswrapper[4632]: I0313 11:08:00.513865 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556668-bzw4c" Mar 13 11:08:01 crc kubenswrapper[4632]: I0313 11:08:01.001572 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556668-bzw4c"] Mar 13 11:08:01 crc kubenswrapper[4632]: I0313 11:08:01.008320 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 11:08:01 crc kubenswrapper[4632]: I0313 11:08:01.408911 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556668-bzw4c" event={"ID":"4d08697f-ce87-4d33-823a-9bf5d2d0d801","Type":"ContainerStarted","Data":"f023d21fce137ea242abc1719cf5d0fd4ebe997c19eab16e9956070cc8c44339"} Mar 13 11:08:03 crc kubenswrapper[4632]: I0313 11:08:03.438166 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556668-bzw4c" event={"ID":"4d08697f-ce87-4d33-823a-9bf5d2d0d801","Type":"ContainerStarted","Data":"cdc6d57bdf1672f0d8614c97a39233ec4a346ac1edaab1837f60116110b310ef"} Mar 13 11:08:03 crc kubenswrapper[4632]: I0313 11:08:03.471779 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556668-bzw4c" podStartSLOduration=2.415084248 podStartE2EDuration="3.471759343s" podCreationTimestamp="2026-03-13 11:08:00 +0000 UTC" firstStartedPulling="2026-03-13 11:08:01.004722876 +0000 UTC m=+3855.027253009" lastFinishedPulling="2026-03-13 11:08:02.061397971 +0000 UTC m=+3856.083928104" observedRunningTime="2026-03-13 11:08:03.457736996 +0000 UTC m=+3857.480267199" watchObservedRunningTime="2026-03-13 11:08:03.471759343 +0000 UTC m=+3857.494289476" Mar 13 11:08:04 crc kubenswrapper[4632]: I0313 11:08:04.455986 4632 generic.go:334] "Generic (PLEG): container finished" podID="4d08697f-ce87-4d33-823a-9bf5d2d0d801" containerID="cdc6d57bdf1672f0d8614c97a39233ec4a346ac1edaab1837f60116110b310ef" exitCode=0 Mar 13 11:08:04 crc kubenswrapper[4632]: I0313 11:08:04.456297 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556668-bzw4c" event={"ID":"4d08697f-ce87-4d33-823a-9bf5d2d0d801","Type":"ContainerDied","Data":"cdc6d57bdf1672f0d8614c97a39233ec4a346ac1edaab1837f60116110b310ef"} Mar 13 11:08:05 crc kubenswrapper[4632]: I0313 11:08:05.890756 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556668-bzw4c" Mar 13 11:08:06 crc kubenswrapper[4632]: I0313 11:08:06.007494 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm97k\" (UniqueName: \"kubernetes.io/projected/4d08697f-ce87-4d33-823a-9bf5d2d0d801-kube-api-access-jm97k\") pod \"4d08697f-ce87-4d33-823a-9bf5d2d0d801\" (UID: \"4d08697f-ce87-4d33-823a-9bf5d2d0d801\") " Mar 13 11:08:06 crc kubenswrapper[4632]: I0313 11:08:06.016270 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d08697f-ce87-4d33-823a-9bf5d2d0d801-kube-api-access-jm97k" (OuterVolumeSpecName: "kube-api-access-jm97k") pod "4d08697f-ce87-4d33-823a-9bf5d2d0d801" (UID: "4d08697f-ce87-4d33-823a-9bf5d2d0d801"). InnerVolumeSpecName "kube-api-access-jm97k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:08:06 crc kubenswrapper[4632]: I0313 11:08:06.109875 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm97k\" (UniqueName: \"kubernetes.io/projected/4d08697f-ce87-4d33-823a-9bf5d2d0d801-kube-api-access-jm97k\") on node \"crc\" DevicePath \"\"" Mar 13 11:08:06 crc kubenswrapper[4632]: I0313 11:08:06.481404 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556668-bzw4c" event={"ID":"4d08697f-ce87-4d33-823a-9bf5d2d0d801","Type":"ContainerDied","Data":"f023d21fce137ea242abc1719cf5d0fd4ebe997c19eab16e9956070cc8c44339"} Mar 13 11:08:06 crc kubenswrapper[4632]: I0313 11:08:06.481504 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556668-bzw4c" Mar 13 11:08:06 crc kubenswrapper[4632]: I0313 11:08:06.482498 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f023d21fce137ea242abc1719cf5d0fd4ebe997c19eab16e9956070cc8c44339" Mar 13 11:08:06 crc kubenswrapper[4632]: I0313 11:08:06.593988 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556662-pw9tk"] Mar 13 11:08:06 crc kubenswrapper[4632]: I0313 11:08:06.601177 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556662-pw9tk"] Mar 13 11:08:08 crc kubenswrapper[4632]: I0313 11:08:08.060738 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62b0f696-9e5c-4535-a181-fa2f4b645711" path="/var/lib/kubelet/pods/62b0f696-9e5c-4535-a181-fa2f4b645711/volumes" Mar 13 11:08:11 crc kubenswrapper[4632]: I0313 11:08:11.044560 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:08:11 crc kubenswrapper[4632]: E0313 11:08:11.045490 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:08:15 crc kubenswrapper[4632]: I0313 11:08:15.658751 4632 scope.go:117] "RemoveContainer" containerID="69d080c6683237a330690584133c6005521df29f2dcf4c21ed9a518e4de4e991" Mar 13 11:08:22 crc kubenswrapper[4632]: I0313 11:08:22.368142 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-59586ff4c9-s4xn7" Mar 13 11:08:22 crc kubenswrapper[4632]: I0313 11:08:22.475209 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6588559b77-6f4bf"] Mar 13 11:08:22 crc kubenswrapper[4632]: I0313 11:08:22.476069 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6588559b77-6f4bf" podUID="79498b99-6b5c-4a95-8558-5d615fc7abba" containerName="neutron-api" containerID="cri-o://a37056b823559676b78bbdad36e07fb68a02ab13bf670546d16508926857a154" gracePeriod=30 Mar 13 11:08:22 crc kubenswrapper[4632]: I0313 11:08:22.476235 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6588559b77-6f4bf" podUID="79498b99-6b5c-4a95-8558-5d615fc7abba" containerName="neutron-httpd" 
containerID="cri-o://2e4dbe726a115e20d5697b52cbd987856c78465356a65ffaf180382482e42ad0" gracePeriod=30 Mar 13 11:08:23 crc kubenswrapper[4632]: I0313 11:08:23.677970 4632 generic.go:334] "Generic (PLEG): container finished" podID="79498b99-6b5c-4a95-8558-5d615fc7abba" containerID="2e4dbe726a115e20d5697b52cbd987856c78465356a65ffaf180382482e42ad0" exitCode=0 Mar 13 11:08:23 crc kubenswrapper[4632]: I0313 11:08:23.678079 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6588559b77-6f4bf" event={"ID":"79498b99-6b5c-4a95-8558-5d615fc7abba","Type":"ContainerDied","Data":"2e4dbe726a115e20d5697b52cbd987856c78465356a65ffaf180382482e42ad0"} Mar 13 11:08:25 crc kubenswrapper[4632]: I0313 11:08:25.043787 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:08:25 crc kubenswrapper[4632]: E0313 11:08:25.044554 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:08:25 crc kubenswrapper[4632]: I0313 11:08:25.115149 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6588559b77-6f4bf" podUID="79498b99-6b5c-4a95-8558-5d615fc7abba" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.174:9696/\": dial tcp 10.217.0.174:9696: connect: connection refused" Mar 13 11:08:34 crc kubenswrapper[4632]: I0313 11:08:34.812638 4632 generic.go:334] "Generic (PLEG): container finished" podID="79498b99-6b5c-4a95-8558-5d615fc7abba" containerID="a37056b823559676b78bbdad36e07fb68a02ab13bf670546d16508926857a154" exitCode=0 Mar 13 11:08:34 crc kubenswrapper[4632]: I0313 11:08:34.813214 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6588559b77-6f4bf" event={"ID":"79498b99-6b5c-4a95-8558-5d615fc7abba","Type":"ContainerDied","Data":"a37056b823559676b78bbdad36e07fb68a02ab13bf670546d16508926857a154"} Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.153483 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6588559b77-6f4bf" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.341199 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-ovndb-tls-certs\") pod \"79498b99-6b5c-4a95-8558-5d615fc7abba\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.341254 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-public-tls-certs\") pod \"79498b99-6b5c-4a95-8558-5d615fc7abba\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.341283 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g98kg\" (UniqueName: \"kubernetes.io/projected/79498b99-6b5c-4a95-8558-5d615fc7abba-kube-api-access-g98kg\") pod \"79498b99-6b5c-4a95-8558-5d615fc7abba\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.341340 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-config\") pod \"79498b99-6b5c-4a95-8558-5d615fc7abba\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.341407 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-httpd-config\") pod \"79498b99-6b5c-4a95-8558-5d615fc7abba\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.341472 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-combined-ca-bundle\") pod \"79498b99-6b5c-4a95-8558-5d615fc7abba\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.341572 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-internal-tls-certs\") pod \"79498b99-6b5c-4a95-8558-5d615fc7abba\" (UID: \"79498b99-6b5c-4a95-8558-5d615fc7abba\") " Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.355744 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79498b99-6b5c-4a95-8558-5d615fc7abba-kube-api-access-g98kg" (OuterVolumeSpecName: "kube-api-access-g98kg") pod "79498b99-6b5c-4a95-8558-5d615fc7abba" (UID: "79498b99-6b5c-4a95-8558-5d615fc7abba"). InnerVolumeSpecName "kube-api-access-g98kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.356283 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "79498b99-6b5c-4a95-8558-5d615fc7abba" (UID: "79498b99-6b5c-4a95-8558-5d615fc7abba"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.426130 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "79498b99-6b5c-4a95-8558-5d615fc7abba" (UID: "79498b99-6b5c-4a95-8558-5d615fc7abba"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.428131 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79498b99-6b5c-4a95-8558-5d615fc7abba" (UID: "79498b99-6b5c-4a95-8558-5d615fc7abba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.434970 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-config" (OuterVolumeSpecName: "config") pod "79498b99-6b5c-4a95-8558-5d615fc7abba" (UID: "79498b99-6b5c-4a95-8558-5d615fc7abba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.440095 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "79498b99-6b5c-4a95-8558-5d615fc7abba" (UID: "79498b99-6b5c-4a95-8558-5d615fc7abba"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.445879 4632 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.445927 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.445963 4632 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.446004 4632 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.446017 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g98kg\" (UniqueName: \"kubernetes.io/projected/79498b99-6b5c-4a95-8558-5d615fc7abba-kube-api-access-g98kg\") on node \"crc\" DevicePath \"\"" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.446032 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-config\") on node \"crc\" DevicePath \"\"" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.448728 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "79498b99-6b5c-4a95-8558-5d615fc7abba" (UID: "79498b99-6b5c-4a95-8558-5d615fc7abba"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.547679 4632 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79498b99-6b5c-4a95-8558-5d615fc7abba-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.826148 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6588559b77-6f4bf" event={"ID":"79498b99-6b5c-4a95-8558-5d615fc7abba","Type":"ContainerDied","Data":"cc3aa5e44b0dc25bbbe479e7210125c65a55be4449da25f59fdeef0322a73ed3"} Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.826209 4632 scope.go:117] "RemoveContainer" containerID="2e4dbe726a115e20d5697b52cbd987856c78465356a65ffaf180382482e42ad0" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.826334 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6588559b77-6f4bf" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.877082 4632 scope.go:117] "RemoveContainer" containerID="a37056b823559676b78bbdad36e07fb68a02ab13bf670546d16508926857a154" Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.891302 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6588559b77-6f4bf"] Mar 13 11:08:35 crc kubenswrapper[4632]: I0313 11:08:35.901998 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6588559b77-6f4bf"] Mar 13 11:08:36 crc kubenswrapper[4632]: I0313 11:08:36.056002 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79498b99-6b5c-4a95-8558-5d615fc7abba" path="/var/lib/kubelet/pods/79498b99-6b5c-4a95-8558-5d615fc7abba/volumes" Mar 13 11:08:38 crc kubenswrapper[4632]: I0313 11:08:38.058175 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:08:38 crc kubenswrapper[4632]: E0313 11:08:38.059129 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:08:50 crc kubenswrapper[4632]: I0313 11:08:50.046425 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:08:50 crc kubenswrapper[4632]: E0313 11:08:50.047434 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:09:01 crc kubenswrapper[4632]: I0313 11:09:01.044724 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:09:01 crc 
kubenswrapper[4632]: E0313 11:09:01.046037 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:09:15 crc kubenswrapper[4632]: I0313 11:09:15.044419 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:09:15 crc kubenswrapper[4632]: E0313 11:09:15.045653 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:09:27 crc kubenswrapper[4632]: I0313 11:09:27.044444 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:09:27 crc kubenswrapper[4632]: E0313 11:09:27.045019 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:09:38 crc kubenswrapper[4632]: I0313 11:09:38.051423 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:09:38 crc kubenswrapper[4632]: E0313 11:09:38.052465 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:09:52 crc kubenswrapper[4632]: I0313 11:09:52.044878 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:09:52 crc kubenswrapper[4632]: E0313 11:09:52.046274 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:10:00 crc kubenswrapper[4632]: I0313 11:10:00.151532 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556670-fqvf6"] Mar 13 11:10:00 crc kubenswrapper[4632]: E0313 11:10:00.152381 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79498b99-6b5c-4a95-8558-5d615fc7abba" containerName="neutron-httpd" Mar 13 11:10:00 crc 
kubenswrapper[4632]: I0313 11:10:00.152394 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="79498b99-6b5c-4a95-8558-5d615fc7abba" containerName="neutron-httpd" Mar 13 11:10:00 crc kubenswrapper[4632]: E0313 11:10:00.152409 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79498b99-6b5c-4a95-8558-5d615fc7abba" containerName="neutron-api" Mar 13 11:10:00 crc kubenswrapper[4632]: I0313 11:10:00.152427 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="79498b99-6b5c-4a95-8558-5d615fc7abba" containerName="neutron-api" Mar 13 11:10:00 crc kubenswrapper[4632]: E0313 11:10:00.152458 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d08697f-ce87-4d33-823a-9bf5d2d0d801" containerName="oc" Mar 13 11:10:00 crc kubenswrapper[4632]: I0313 11:10:00.152466 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d08697f-ce87-4d33-823a-9bf5d2d0d801" containerName="oc" Mar 13 11:10:00 crc kubenswrapper[4632]: I0313 11:10:00.152637 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="79498b99-6b5c-4a95-8558-5d615fc7abba" containerName="neutron-api" Mar 13 11:10:00 crc kubenswrapper[4632]: I0313 11:10:00.152656 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d08697f-ce87-4d33-823a-9bf5d2d0d801" containerName="oc" Mar 13 11:10:00 crc kubenswrapper[4632]: I0313 11:10:00.152671 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="79498b99-6b5c-4a95-8558-5d615fc7abba" containerName="neutron-httpd" Mar 13 11:10:00 crc kubenswrapper[4632]: I0313 11:10:00.153323 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556670-fqvf6" Mar 13 11:10:00 crc kubenswrapper[4632]: I0313 11:10:00.155362 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:10:00 crc kubenswrapper[4632]: I0313 11:10:00.155674 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:10:00 crc kubenswrapper[4632]: I0313 11:10:00.156503 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:10:00 crc kubenswrapper[4632]: I0313 11:10:00.168686 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556670-fqvf6"] Mar 13 11:10:00 crc kubenswrapper[4632]: I0313 11:10:00.290489 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgtnj\" (UniqueName: \"kubernetes.io/projected/565a5983-3957-42c2-b7d4-47d26e00aec8-kube-api-access-bgtnj\") pod \"auto-csr-approver-29556670-fqvf6\" (UID: \"565a5983-3957-42c2-b7d4-47d26e00aec8\") " pod="openshift-infra/auto-csr-approver-29556670-fqvf6" Mar 13 11:10:00 crc kubenswrapper[4632]: I0313 11:10:00.393258 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgtnj\" (UniqueName: \"kubernetes.io/projected/565a5983-3957-42c2-b7d4-47d26e00aec8-kube-api-access-bgtnj\") pod \"auto-csr-approver-29556670-fqvf6\" (UID: \"565a5983-3957-42c2-b7d4-47d26e00aec8\") " pod="openshift-infra/auto-csr-approver-29556670-fqvf6" Mar 13 11:10:00 crc kubenswrapper[4632]: I0313 11:10:00.420762 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgtnj\" (UniqueName: \"kubernetes.io/projected/565a5983-3957-42c2-b7d4-47d26e00aec8-kube-api-access-bgtnj\") pod 
\"auto-csr-approver-29556670-fqvf6\" (UID: \"565a5983-3957-42c2-b7d4-47d26e00aec8\") " pod="openshift-infra/auto-csr-approver-29556670-fqvf6" Mar 13 11:10:00 crc kubenswrapper[4632]: I0313 11:10:00.474517 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556670-fqvf6" Mar 13 11:10:01 crc kubenswrapper[4632]: I0313 11:10:00.999858 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556670-fqvf6"] Mar 13 11:10:01 crc kubenswrapper[4632]: I0313 11:10:01.764580 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556670-fqvf6" event={"ID":"565a5983-3957-42c2-b7d4-47d26e00aec8","Type":"ContainerStarted","Data":"c51ebc91b28459b756985f47316bb878f71da3f694f51215dc5d353dc6155f3f"} Mar 13 11:10:02 crc kubenswrapper[4632]: I0313 11:10:02.776911 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556670-fqvf6" event={"ID":"565a5983-3957-42c2-b7d4-47d26e00aec8","Type":"ContainerStarted","Data":"c0218119e7ac388fadab5a0e90f8eec2d8161ed6d34eaec2b46cb615f7e41508"} Mar 13 11:10:02 crc kubenswrapper[4632]: I0313 11:10:02.804183 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556670-fqvf6" podStartSLOduration=1.695292517 podStartE2EDuration="2.804157199s" podCreationTimestamp="2026-03-13 11:10:00 +0000 UTC" firstStartedPulling="2026-03-13 11:10:01.012255691 +0000 UTC m=+3975.034785834" lastFinishedPulling="2026-03-13 11:10:02.121120373 +0000 UTC m=+3976.143650516" observedRunningTime="2026-03-13 11:10:02.793553699 +0000 UTC m=+3976.816083832" watchObservedRunningTime="2026-03-13 11:10:02.804157199 +0000 UTC m=+3976.826687332" Mar 13 11:10:03 crc kubenswrapper[4632]: I0313 11:10:03.792513 4632 generic.go:334] "Generic (PLEG): container finished" podID="565a5983-3957-42c2-b7d4-47d26e00aec8" containerID="c0218119e7ac388fadab5a0e90f8eec2d8161ed6d34eaec2b46cb615f7e41508" exitCode=0 Mar 13 11:10:03 crc kubenswrapper[4632]: I0313 11:10:03.792622 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556670-fqvf6" event={"ID":"565a5983-3957-42c2-b7d4-47d26e00aec8","Type":"ContainerDied","Data":"c0218119e7ac388fadab5a0e90f8eec2d8161ed6d34eaec2b46cb615f7e41508"} Mar 13 11:10:05 crc kubenswrapper[4632]: I0313 11:10:05.254909 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556670-fqvf6" Mar 13 11:10:05 crc kubenswrapper[4632]: I0313 11:10:05.286318 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgtnj\" (UniqueName: \"kubernetes.io/projected/565a5983-3957-42c2-b7d4-47d26e00aec8-kube-api-access-bgtnj\") pod \"565a5983-3957-42c2-b7d4-47d26e00aec8\" (UID: \"565a5983-3957-42c2-b7d4-47d26e00aec8\") " Mar 13 11:10:05 crc kubenswrapper[4632]: I0313 11:10:05.293895 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565a5983-3957-42c2-b7d4-47d26e00aec8-kube-api-access-bgtnj" (OuterVolumeSpecName: "kube-api-access-bgtnj") pod "565a5983-3957-42c2-b7d4-47d26e00aec8" (UID: "565a5983-3957-42c2-b7d4-47d26e00aec8"). InnerVolumeSpecName "kube-api-access-bgtnj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:10:05 crc kubenswrapper[4632]: I0313 11:10:05.388674 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgtnj\" (UniqueName: \"kubernetes.io/projected/565a5983-3957-42c2-b7d4-47d26e00aec8-kube-api-access-bgtnj\") on node \"crc\" DevicePath \"\"" Mar 13 11:10:05 crc kubenswrapper[4632]: I0313 11:10:05.819159 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556670-fqvf6" event={"ID":"565a5983-3957-42c2-b7d4-47d26e00aec8","Type":"ContainerDied","Data":"c51ebc91b28459b756985f47316bb878f71da3f694f51215dc5d353dc6155f3f"} Mar 13 11:10:05 crc kubenswrapper[4632]: I0313 11:10:05.819216 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c51ebc91b28459b756985f47316bb878f71da3f694f51215dc5d353dc6155f3f" Mar 13 11:10:05 crc kubenswrapper[4632]: I0313 11:10:05.819292 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556670-fqvf6" Mar 13 11:10:05 crc kubenswrapper[4632]: I0313 11:10:05.865243 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556664-vgmtg"] Mar 13 11:10:05 crc kubenswrapper[4632]: I0313 11:10:05.874688 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556664-vgmtg"] Mar 13 11:10:06 crc kubenswrapper[4632]: I0313 11:10:06.077799 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ecca46c-1e06-43be-bacc-eae4a1a474b7" path="/var/lib/kubelet/pods/6ecca46c-1e06-43be-bacc-eae4a1a474b7/volumes" Mar 13 11:10:07 crc kubenswrapper[4632]: I0313 11:10:07.044180 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:10:07 crc kubenswrapper[4632]: E0313 11:10:07.044760 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:10:15 crc kubenswrapper[4632]: I0313 11:10:15.927543 4632 scope.go:117] "RemoveContainer" containerID="5e0a7ac81434eac7eff8520645fc1fc30caa50af82d06bce9d4415863d0b9aa2" Mar 13 11:10:20 crc kubenswrapper[4632]: I0313 11:10:20.045974 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:10:20 crc kubenswrapper[4632]: E0313 11:10:20.046652 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:10:31 crc kubenswrapper[4632]: I0313 11:10:31.044478 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:10:31 crc kubenswrapper[4632]: E0313 11:10:31.045132 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:10:42 crc kubenswrapper[4632]: I0313 11:10:42.047283 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582" Mar 13 11:10:43 crc kubenswrapper[4632]: I0313 11:10:43.320216 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"908510815c251e300c3555d9b7458818dfba317c0679f487df71c717e5c832f9"} Mar 13 11:11:16 crc kubenswrapper[4632]: I0313 11:11:16.030034 4632 scope.go:117] "RemoveContainer" containerID="c4e602b48052cce5414a1759fd3d99f56ebde469321edc3b351de56e308a589e" Mar 13 11:11:16 crc kubenswrapper[4632]: I0313 11:11:16.098915 4632 scope.go:117] "RemoveContainer" containerID="e7435d3cb1416970cf6b4162802419aa1bcf01c76e855132a1393d7b353e8c78" Mar 13 11:11:16 crc kubenswrapper[4632]: I0313 11:11:16.137372 4632 scope.go:117] "RemoveContainer" containerID="c772f3e31ea57b90bb99ca7ef746eba3e104a41a227d8c675887f2261f06ab48" Mar 13 11:12:00 crc kubenswrapper[4632]: I0313 11:12:00.151480 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556672-sd9zg"] Mar 13 11:12:00 crc kubenswrapper[4632]: E0313 11:12:00.152258 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565a5983-3957-42c2-b7d4-47d26e00aec8" containerName="oc" Mar 13 11:12:00 crc kubenswrapper[4632]: I0313 11:12:00.152270 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="565a5983-3957-42c2-b7d4-47d26e00aec8" containerName="oc" Mar 13 11:12:00 crc kubenswrapper[4632]: I0313 11:12:00.152489 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="565a5983-3957-42c2-b7d4-47d26e00aec8" containerName="oc" Mar 13 11:12:00 crc kubenswrapper[4632]: I0313 11:12:00.153082 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556672-sd9zg"
Mar 13 11:12:00 crc kubenswrapper[4632]: I0313 11:12:00.155693 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 11:12:00 crc kubenswrapper[4632]: I0313 11:12:00.155860 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 11:12:00 crc kubenswrapper[4632]: I0313 11:12:00.161771 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 11:12:00 crc kubenswrapper[4632]: I0313 11:12:00.181031 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556672-sd9zg"]
Mar 13 11:12:00 crc kubenswrapper[4632]: I0313 11:12:00.238198 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jtfs\" (UniqueName: \"kubernetes.io/projected/bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8-kube-api-access-5jtfs\") pod \"auto-csr-approver-29556672-sd9zg\" (UID: \"bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8\") " pod="openshift-infra/auto-csr-approver-29556672-sd9zg"
Mar 13 11:12:00 crc kubenswrapper[4632]: I0313 11:12:00.340347 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jtfs\" (UniqueName: \"kubernetes.io/projected/bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8-kube-api-access-5jtfs\") pod \"auto-csr-approver-29556672-sd9zg\" (UID: \"bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8\") " pod="openshift-infra/auto-csr-approver-29556672-sd9zg"
Mar 13 11:12:00 crc kubenswrapper[4632]: I0313 11:12:00.360904 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jtfs\" (UniqueName: \"kubernetes.io/projected/bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8-kube-api-access-5jtfs\") pod \"auto-csr-approver-29556672-sd9zg\" (UID: \"bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8\") " pod="openshift-infra/auto-csr-approver-29556672-sd9zg"
Mar 13 11:12:00 crc kubenswrapper[4632]: I0313 11:12:00.485175 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556672-sd9zg"
Mar 13 11:12:01 crc kubenswrapper[4632]: I0313 11:12:01.170987 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556672-sd9zg"]
Mar 13 11:12:01 crc kubenswrapper[4632]: I0313 11:12:01.342864 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556672-sd9zg" event={"ID":"bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8","Type":"ContainerStarted","Data":"f2dda7430b6fcdef405e402c434627e812eece3b35eaae4c925ef5572f23bfe1"}
Mar 13 11:12:03 crc kubenswrapper[4632]: I0313 11:12:03.363799 4632 generic.go:334] "Generic (PLEG): container finished" podID="bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8" containerID="5f687cba4c29fe06e8932802cf25f9e44ab270587540acfd69f26e45b584a52b" exitCode=0
Mar 13 11:12:03 crc kubenswrapper[4632]: I0313 11:12:03.363844 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556672-sd9zg" event={"ID":"bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8","Type":"ContainerDied","Data":"5f687cba4c29fe06e8932802cf25f9e44ab270587540acfd69f26e45b584a52b"}
Mar 13 11:12:04 crc kubenswrapper[4632]: I0313 11:12:04.766053 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556672-sd9zg"
Mar 13 11:12:04 crc kubenswrapper[4632]: I0313 11:12:04.848559 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jtfs\" (UniqueName: \"kubernetes.io/projected/bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8-kube-api-access-5jtfs\") pod \"bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8\" (UID: \"bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8\") "
Mar 13 11:12:04 crc kubenswrapper[4632]: I0313 11:12:04.856000 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8-kube-api-access-5jtfs" (OuterVolumeSpecName: "kube-api-access-5jtfs") pod "bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8" (UID: "bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8"). InnerVolumeSpecName "kube-api-access-5jtfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:12:04 crc kubenswrapper[4632]: I0313 11:12:04.951349 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jtfs\" (UniqueName: \"kubernetes.io/projected/bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8-kube-api-access-5jtfs\") on node \"crc\" DevicePath \"\""
Mar 13 11:12:05 crc kubenswrapper[4632]: I0313 11:12:05.388125 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556672-sd9zg" event={"ID":"bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8","Type":"ContainerDied","Data":"f2dda7430b6fcdef405e402c434627e812eece3b35eaae4c925ef5572f23bfe1"}
Mar 13 11:12:05 crc kubenswrapper[4632]: I0313 11:12:05.388468 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2dda7430b6fcdef405e402c434627e812eece3b35eaae4c925ef5572f23bfe1"
Mar 13 11:12:05 crc kubenswrapper[4632]: I0313 11:12:05.388276 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556672-sd9zg"
Mar 13 11:12:05 crc kubenswrapper[4632]: I0313 11:12:05.842913 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556666-pjsh7"]
Mar 13 11:12:05 crc kubenswrapper[4632]: I0313 11:12:05.855726 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556666-pjsh7"]
Mar 13 11:12:06 crc kubenswrapper[4632]: I0313 11:12:06.059076 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c793856b-f941-4c9e-b70e-36b4844e4eac" path="/var/lib/kubelet/pods/c793856b-f941-4c9e-b70e-36b4844e4eac/volumes"
Mar 13 11:12:16 crc kubenswrapper[4632]: I0313 11:12:16.201883 4632 scope.go:117] "RemoveContainer" containerID="ca3b5ee01f58147d02e068e963d2c27601cb82a563a21e34b84cdc61a03e2f80"
Mar 13 11:13:10 crc kubenswrapper[4632]: I0313 11:13:10.460797 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 11:13:10 crc kubenswrapper[4632]: I0313 11:13:10.461458 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 11:13:40 crc kubenswrapper[4632]: I0313 11:13:40.461663 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 11:13:40 crc kubenswrapper[4632]: I0313 11:13:40.462353 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 11:14:00 crc kubenswrapper[4632]: I0313 11:14:00.146652 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556674-j6lkl"]
Mar 13 11:14:00 crc kubenswrapper[4632]: E0313 11:14:00.147643 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8" containerName="oc"
Mar 13 11:14:00 crc kubenswrapper[4632]: I0313 11:14:00.147659 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8" containerName="oc"
Mar 13 11:14:00 crc kubenswrapper[4632]: I0313 11:14:00.147931 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8" containerName="oc"
Mar 13 11:14:00 crc kubenswrapper[4632]: I0313 11:14:00.148688 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556674-j6lkl"
Mar 13 11:14:00 crc kubenswrapper[4632]: I0313 11:14:00.151389 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 11:14:00 crc kubenswrapper[4632]: I0313 11:14:00.151803 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 11:14:00 crc kubenswrapper[4632]: I0313 11:14:00.151989 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 11:14:00 crc kubenswrapper[4632]: I0313 11:14:00.202787 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556674-j6lkl"]
Mar 13 11:14:00 crc kubenswrapper[4632]: I0313 11:14:00.250314 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dm8b\" (UniqueName: \"kubernetes.io/projected/2607e7bb-5f81-48cf-945a-6dee68b60040-kube-api-access-8dm8b\") pod \"auto-csr-approver-29556674-j6lkl\" (UID: \"2607e7bb-5f81-48cf-945a-6dee68b60040\") " pod="openshift-infra/auto-csr-approver-29556674-j6lkl"
Mar 13 11:14:00 crc kubenswrapper[4632]: I0313 11:14:00.352821 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dm8b\" (UniqueName: \"kubernetes.io/projected/2607e7bb-5f81-48cf-945a-6dee68b60040-kube-api-access-8dm8b\") pod \"auto-csr-approver-29556674-j6lkl\" (UID: \"2607e7bb-5f81-48cf-945a-6dee68b60040\") " pod="openshift-infra/auto-csr-approver-29556674-j6lkl"
Mar 13 11:14:00 crc kubenswrapper[4632]: I0313 11:14:00.378197 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dm8b\" (UniqueName: \"kubernetes.io/projected/2607e7bb-5f81-48cf-945a-6dee68b60040-kube-api-access-8dm8b\") pod \"auto-csr-approver-29556674-j6lkl\" (UID: \"2607e7bb-5f81-48cf-945a-6dee68b60040\") " pod="openshift-infra/auto-csr-approver-29556674-j6lkl"
Mar 13 11:14:00 crc kubenswrapper[4632]: I0313 11:14:00.467814 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556674-j6lkl"
Mar 13 11:14:00 crc kubenswrapper[4632]: I0313 11:14:00.978198 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556674-j6lkl"]
Mar 13 11:14:00 crc kubenswrapper[4632]: W0313 11:14:00.993126 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2607e7bb_5f81_48cf_945a_6dee68b60040.slice/crio-64817e1f3ef5c3a97ad60dd28f54df1da870ccd9ca8dc11cc50f926ec7e8a21a WatchSource:0}: Error finding container 64817e1f3ef5c3a97ad60dd28f54df1da870ccd9ca8dc11cc50f926ec7e8a21a: Status 404 returned error can't find the container with id 64817e1f3ef5c3a97ad60dd28f54df1da870ccd9ca8dc11cc50f926ec7e8a21a
Mar 13 11:14:00 crc kubenswrapper[4632]: I0313 11:14:00.994751 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 11:14:01 crc kubenswrapper[4632]: I0313 11:14:01.270801 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556674-j6lkl" event={"ID":"2607e7bb-5f81-48cf-945a-6dee68b60040","Type":"ContainerStarted","Data":"64817e1f3ef5c3a97ad60dd28f54df1da870ccd9ca8dc11cc50f926ec7e8a21a"}
Mar 13 11:14:02 crc kubenswrapper[4632]: I0313 11:14:02.280712 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556674-j6lkl" event={"ID":"2607e7bb-5f81-48cf-945a-6dee68b60040","Type":"ContainerStarted","Data":"54b5c748d72d1466e81773f53e5a61ff5e546e63d80706f8982cd195e971c601"}
Mar 13 11:14:02 crc kubenswrapper[4632]: I0313 11:14:02.309493 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556674-j6lkl" podStartSLOduration=1.408531875 podStartE2EDuration="2.309470612s" podCreationTimestamp="2026-03-13 11:14:00 +0000 UTC" firstStartedPulling="2026-03-13 11:14:00.994491349 +0000 UTC m=+4215.017021482" lastFinishedPulling="2026-03-13 11:14:01.895430076 +0000 UTC m=+4215.917960219" observedRunningTime="2026-03-13 11:14:02.29923714 +0000 UTC m=+4216.321767303" watchObservedRunningTime="2026-03-13 11:14:02.309470612 +0000 UTC m=+4216.332000745"
Mar 13 11:14:03 crc kubenswrapper[4632]: I0313 11:14:03.300111 4632 generic.go:334] "Generic (PLEG): container finished" podID="2607e7bb-5f81-48cf-945a-6dee68b60040" containerID="54b5c748d72d1466e81773f53e5a61ff5e546e63d80706f8982cd195e971c601" exitCode=0
Mar 13 11:14:03 crc kubenswrapper[4632]: I0313 11:14:03.300288 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556674-j6lkl" event={"ID":"2607e7bb-5f81-48cf-945a-6dee68b60040","Type":"ContainerDied","Data":"54b5c748d72d1466e81773f53e5a61ff5e546e63d80706f8982cd195e971c601"}
Mar 13 11:14:04 crc kubenswrapper[4632]: I0313 11:14:04.694312 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556674-j6lkl"
Mar 13 11:14:04 crc kubenswrapper[4632]: I0313 11:14:04.729206 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dm8b\" (UniqueName: \"kubernetes.io/projected/2607e7bb-5f81-48cf-945a-6dee68b60040-kube-api-access-8dm8b\") pod \"2607e7bb-5f81-48cf-945a-6dee68b60040\" (UID: \"2607e7bb-5f81-48cf-945a-6dee68b60040\") "
Mar 13 11:14:04 crc kubenswrapper[4632]: I0313 11:14:04.735190 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2607e7bb-5f81-48cf-945a-6dee68b60040-kube-api-access-8dm8b" (OuterVolumeSpecName: "kube-api-access-8dm8b") pod "2607e7bb-5f81-48cf-945a-6dee68b60040" (UID: "2607e7bb-5f81-48cf-945a-6dee68b60040"). InnerVolumeSpecName "kube-api-access-8dm8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:14:04 crc kubenswrapper[4632]: I0313 11:14:04.831538 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dm8b\" (UniqueName: \"kubernetes.io/projected/2607e7bb-5f81-48cf-945a-6dee68b60040-kube-api-access-8dm8b\") on node \"crc\" DevicePath \"\""
Mar 13 11:14:05 crc kubenswrapper[4632]: I0313 11:14:05.320031 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556674-j6lkl" event={"ID":"2607e7bb-5f81-48cf-945a-6dee68b60040","Type":"ContainerDied","Data":"64817e1f3ef5c3a97ad60dd28f54df1da870ccd9ca8dc11cc50f926ec7e8a21a"}
Mar 13 11:14:05 crc kubenswrapper[4632]: I0313 11:14:05.320084 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64817e1f3ef5c3a97ad60dd28f54df1da870ccd9ca8dc11cc50f926ec7e8a21a"
Mar 13 11:14:05 crc kubenswrapper[4632]: I0313 11:14:05.320153 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556674-j6lkl"
Mar 13 11:14:05 crc kubenswrapper[4632]: I0313 11:14:05.419066 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556668-bzw4c"]
Mar 13 11:14:05 crc kubenswrapper[4632]: I0313 11:14:05.439381 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556668-bzw4c"]
Mar 13 11:14:06 crc kubenswrapper[4632]: I0313 11:14:06.062692 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d08697f-ce87-4d33-823a-9bf5d2d0d801" path="/var/lib/kubelet/pods/4d08697f-ce87-4d33-823a-9bf5d2d0d801/volumes"
Mar 13 11:14:10 crc kubenswrapper[4632]: I0313 11:14:10.460894 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 11:14:10 crc kubenswrapper[4632]: I0313 11:14:10.461381 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 11:14:10 crc kubenswrapper[4632]: I0313 11:14:10.461441 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb"
Mar 13 11:14:10 crc kubenswrapper[4632]: I0313 11:14:10.462265 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"908510815c251e300c3555d9b7458818dfba317c0679f487df71c717e5c832f9"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 13 11:14:10 crc kubenswrapper[4632]: I0313 11:14:10.462329 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://908510815c251e300c3555d9b7458818dfba317c0679f487df71c717e5c832f9" gracePeriod=600
Mar 13 11:14:11 crc kubenswrapper[4632]: I0313 11:14:11.381216 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="908510815c251e300c3555d9b7458818dfba317c0679f487df71c717e5c832f9" exitCode=0
Mar 13 11:14:11 crc kubenswrapper[4632]: I0313 11:14:11.381377 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"908510815c251e300c3555d9b7458818dfba317c0679f487df71c717e5c832f9"}
Mar 13 11:14:11 crc kubenswrapper[4632]: I0313 11:14:11.381649 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a"}
Mar 13 11:14:11 crc kubenswrapper[4632]: I0313 11:14:11.381673 4632 scope.go:117] "RemoveContainer" containerID="8a4edabf825a9fe82d4af3664ef24831617306062afee5f4494f20215f557582"
Mar 13 11:14:16 crc kubenswrapper[4632]: I0313 11:14:16.327415 4632 scope.go:117] "RemoveContainer" containerID="cdc6d57bdf1672f0d8614c97a39233ec4a346ac1edaab1837f60116110b310ef"
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.171126 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"]
Mar 13 11:15:00 crc kubenswrapper[4632]: E0313 11:15:00.172046 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2607e7bb-5f81-48cf-945a-6dee68b60040" containerName="oc"
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.172062 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="2607e7bb-5f81-48cf-945a-6dee68b60040" containerName="oc"
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.172338 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="2607e7bb-5f81-48cf-945a-6dee68b60040" containerName="oc"
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.173162 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.175474 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.175985 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.205249 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"]
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.292814 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9481bb7b-d00a-4ee1-b711-7b90d97907c1-config-volume\") pod \"collect-profiles-29556675-n64w8\" (UID: \"9481bb7b-d00a-4ee1-b711-7b90d97907c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.292879 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dhhd\" (UniqueName: \"kubernetes.io/projected/9481bb7b-d00a-4ee1-b711-7b90d97907c1-kube-api-access-6dhhd\") pod \"collect-profiles-29556675-n64w8\" (UID: \"9481bb7b-d00a-4ee1-b711-7b90d97907c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.293271 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9481bb7b-d00a-4ee1-b711-7b90d97907c1-secret-volume\") pod \"collect-profiles-29556675-n64w8\" (UID: \"9481bb7b-d00a-4ee1-b711-7b90d97907c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.395337 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9481bb7b-d00a-4ee1-b711-7b90d97907c1-secret-volume\") pod \"collect-profiles-29556675-n64w8\" (UID: \"9481bb7b-d00a-4ee1-b711-7b90d97907c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.395482 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9481bb7b-d00a-4ee1-b711-7b90d97907c1-config-volume\") pod \"collect-profiles-29556675-n64w8\" (UID: \"9481bb7b-d00a-4ee1-b711-7b90d97907c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.395532 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dhhd\" (UniqueName: \"kubernetes.io/projected/9481bb7b-d00a-4ee1-b711-7b90d97907c1-kube-api-access-6dhhd\") pod \"collect-profiles-29556675-n64w8\" (UID: \"9481bb7b-d00a-4ee1-b711-7b90d97907c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.396715 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9481bb7b-d00a-4ee1-b711-7b90d97907c1-config-volume\") pod \"collect-profiles-29556675-n64w8\" (UID: \"9481bb7b-d00a-4ee1-b711-7b90d97907c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.402990 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9481bb7b-d00a-4ee1-b711-7b90d97907c1-secret-volume\") pod \"collect-profiles-29556675-n64w8\" (UID: \"9481bb7b-d00a-4ee1-b711-7b90d97907c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.416469 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dhhd\" (UniqueName: \"kubernetes.io/projected/9481bb7b-d00a-4ee1-b711-7b90d97907c1-kube-api-access-6dhhd\") pod \"collect-profiles-29556675-n64w8\" (UID: \"9481bb7b-d00a-4ee1-b711-7b90d97907c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"
Mar 13 11:15:00 crc kubenswrapper[4632]: I0313 11:15:00.492017 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"
Mar 13 11:15:01 crc kubenswrapper[4632]: I0313 11:15:00.999850 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"]
Mar 13 11:15:01 crc kubenswrapper[4632]: I0313 11:15:01.124106 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8" event={"ID":"9481bb7b-d00a-4ee1-b711-7b90d97907c1","Type":"ContainerStarted","Data":"bb607a5fcbbc599f445e9f5e35641c5968cf1efefa8cb36e1b8cbe3421faa466"}
Mar 13 11:15:02 crc kubenswrapper[4632]: I0313 11:15:02.136801 4632 generic.go:334] "Generic (PLEG): container finished" podID="9481bb7b-d00a-4ee1-b711-7b90d97907c1" containerID="95360e41112a84b3ea4b235c3e7fd03654d6110fccc446520298cff419091ae2" exitCode=0
Mar 13 11:15:02 crc kubenswrapper[4632]: I0313 11:15:02.136879 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8" event={"ID":"9481bb7b-d00a-4ee1-b711-7b90d97907c1","Type":"ContainerDied","Data":"95360e41112a84b3ea4b235c3e7fd03654d6110fccc446520298cff419091ae2"}
Mar 13 11:15:03 crc kubenswrapper[4632]: I0313 11:15:03.512605 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"
Mar 13 11:15:03 crc kubenswrapper[4632]: I0313 11:15:03.558819 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9481bb7b-d00a-4ee1-b711-7b90d97907c1-secret-volume\") pod \"9481bb7b-d00a-4ee1-b711-7b90d97907c1\" (UID: \"9481bb7b-d00a-4ee1-b711-7b90d97907c1\") "
Mar 13 11:15:03 crc kubenswrapper[4632]: I0313 11:15:03.559135 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dhhd\" (UniqueName: \"kubernetes.io/projected/9481bb7b-d00a-4ee1-b711-7b90d97907c1-kube-api-access-6dhhd\") pod \"9481bb7b-d00a-4ee1-b711-7b90d97907c1\" (UID: \"9481bb7b-d00a-4ee1-b711-7b90d97907c1\") "
Mar 13 11:15:03 crc kubenswrapper[4632]: I0313 11:15:03.559244 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9481bb7b-d00a-4ee1-b711-7b90d97907c1-config-volume\") pod \"9481bb7b-d00a-4ee1-b711-7b90d97907c1\" (UID: \"9481bb7b-d00a-4ee1-b711-7b90d97907c1\") "
Mar 13 11:15:03 crc kubenswrapper[4632]: I0313 11:15:03.560419 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9481bb7b-d00a-4ee1-b711-7b90d97907c1-config-volume" (OuterVolumeSpecName: "config-volume") pod "9481bb7b-d00a-4ee1-b711-7b90d97907c1" (UID: "9481bb7b-d00a-4ee1-b711-7b90d97907c1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:15:03 crc kubenswrapper[4632]: I0313 11:15:03.568145 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9481bb7b-d00a-4ee1-b711-7b90d97907c1-kube-api-access-6dhhd" (OuterVolumeSpecName: "kube-api-access-6dhhd") pod "9481bb7b-d00a-4ee1-b711-7b90d97907c1" (UID: "9481bb7b-d00a-4ee1-b711-7b90d97907c1"). InnerVolumeSpecName "kube-api-access-6dhhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:15:03 crc kubenswrapper[4632]: I0313 11:15:03.570993 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9481bb7b-d00a-4ee1-b711-7b90d97907c1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9481bb7b-d00a-4ee1-b711-7b90d97907c1" (UID: "9481bb7b-d00a-4ee1-b711-7b90d97907c1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:15:03 crc kubenswrapper[4632]: I0313 11:15:03.662337 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dhhd\" (UniqueName: \"kubernetes.io/projected/9481bb7b-d00a-4ee1-b711-7b90d97907c1-kube-api-access-6dhhd\") on node \"crc\" DevicePath \"\""
Mar 13 11:15:03 crc kubenswrapper[4632]: I0313 11:15:03.662633 4632 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9481bb7b-d00a-4ee1-b711-7b90d97907c1-config-volume\") on node \"crc\" DevicePath \"\""
Mar 13 11:15:03 crc kubenswrapper[4632]: I0313 11:15:03.662745 4632 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9481bb7b-d00a-4ee1-b711-7b90d97907c1-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 13 11:15:04 crc kubenswrapper[4632]: I0313 11:15:04.181497 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8" event={"ID":"9481bb7b-d00a-4ee1-b711-7b90d97907c1","Type":"ContainerDied","Data":"bb607a5fcbbc599f445e9f5e35641c5968cf1efefa8cb36e1b8cbe3421faa466"}
Mar 13 11:15:04 crc kubenswrapper[4632]: I0313 11:15:04.181897 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb607a5fcbbc599f445e9f5e35641c5968cf1efefa8cb36e1b8cbe3421faa466"
Mar 13 11:15:04 crc kubenswrapper[4632]: I0313 11:15:04.181607 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"
Mar 13 11:15:04 crc kubenswrapper[4632]: I0313 11:15:04.618291 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7"]
Mar 13 11:15:04 crc kubenswrapper[4632]: I0313 11:15:04.629686 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556630-kpbz7"]
Mar 13 11:15:06 crc kubenswrapper[4632]: I0313 11:15:06.056326 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e912f5a7-eb85-4d19-9703-6cd7ff46c810" path="/var/lib/kubelet/pods/e912f5a7-eb85-4d19-9703-6cd7ff46c810/volumes"
Mar 13 11:15:16 crc kubenswrapper[4632]: I0313 11:15:16.875549 4632 scope.go:117] "RemoveContainer" containerID="020084ff22e9c174abe1865969844a8ece77dd4c848ac5f03af6af51bccf8643"
Mar 13 11:15:28 crc kubenswrapper[4632]: I0313 11:15:28.907947 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fjcb8"]
Mar 13 11:15:28 crc kubenswrapper[4632]: E0313 11:15:28.908719 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9481bb7b-d00a-4ee1-b711-7b90d97907c1" containerName="collect-profiles"
Mar 13 11:15:28 crc kubenswrapper[4632]: I0313 11:15:28.908730 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="9481bb7b-d00a-4ee1-b711-7b90d97907c1" containerName="collect-profiles"
Mar 13 11:15:28 crc kubenswrapper[4632]: I0313 11:15:28.908994 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="9481bb7b-d00a-4ee1-b711-7b90d97907c1" containerName="collect-profiles"
Mar 13 11:15:28 crc kubenswrapper[4632]: I0313 11:15:28.910261 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:28 crc kubenswrapper[4632]: I0313 11:15:28.932394 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fjcb8"]
Mar 13 11:15:28 crc kubenswrapper[4632]: I0313 11:15:28.998462 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b80498f-6567-4384-8312-3eec23afb96f-utilities\") pod \"redhat-marketplace-fjcb8\" (UID: \"6b80498f-6567-4384-8312-3eec23afb96f\") " pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:28 crc kubenswrapper[4632]: I0313 11:15:28.998531 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4v48\" (UniqueName: \"kubernetes.io/projected/6b80498f-6567-4384-8312-3eec23afb96f-kube-api-access-q4v48\") pod \"redhat-marketplace-fjcb8\" (UID: \"6b80498f-6567-4384-8312-3eec23afb96f\") " pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:28 crc kubenswrapper[4632]: I0313 11:15:28.998551 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b80498f-6567-4384-8312-3eec23afb96f-catalog-content\") pod \"redhat-marketplace-fjcb8\" (UID: \"6b80498f-6567-4384-8312-3eec23afb96f\") " pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:29 crc kubenswrapper[4632]: I0313 11:15:29.100524 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b80498f-6567-4384-8312-3eec23afb96f-utilities\") pod \"redhat-marketplace-fjcb8\" (UID: \"6b80498f-6567-4384-8312-3eec23afb96f\") " pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:29 crc kubenswrapper[4632]: I0313 11:15:29.100998 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4v48\" (UniqueName: \"kubernetes.io/projected/6b80498f-6567-4384-8312-3eec23afb96f-kube-api-access-q4v48\") pod \"redhat-marketplace-fjcb8\" (UID: \"6b80498f-6567-4384-8312-3eec23afb96f\") " pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:29 crc kubenswrapper[4632]: I0313 11:15:29.101030 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b80498f-6567-4384-8312-3eec23afb96f-catalog-content\") pod \"redhat-marketplace-fjcb8\" (UID: \"6b80498f-6567-4384-8312-3eec23afb96f\") " pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:29 crc kubenswrapper[4632]: I0313 11:15:29.101405 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b80498f-6567-4384-8312-3eec23afb96f-utilities\") pod \"redhat-marketplace-fjcb8\" (UID: \"6b80498f-6567-4384-8312-3eec23afb96f\") " pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:29 crc kubenswrapper[4632]: I0313 11:15:29.101667 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b80498f-6567-4384-8312-3eec23afb96f-catalog-content\") pod \"redhat-marketplace-fjcb8\" (UID: \"6b80498f-6567-4384-8312-3eec23afb96f\") " pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:29 crc kubenswrapper[4632]: I0313 11:15:29.125463 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4v48\" (UniqueName: \"kubernetes.io/projected/6b80498f-6567-4384-8312-3eec23afb96f-kube-api-access-q4v48\") pod \"redhat-marketplace-fjcb8\" (UID: \"6b80498f-6567-4384-8312-3eec23afb96f\") " pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:29 crc kubenswrapper[4632]: I0313 11:15:29.227435 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:29 crc kubenswrapper[4632]: I0313 11:15:29.730328 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fjcb8"]
Mar 13 11:15:30 crc kubenswrapper[4632]: I0313 11:15:30.467626 4632 generic.go:334] "Generic (PLEG): container finished" podID="6b80498f-6567-4384-8312-3eec23afb96f" containerID="e4c2cf5dbf98373fe62cd97ac31b0a9d4d0cfebb456d4b4a21937b1cdd3aa690" exitCode=0
Mar 13 11:15:30 crc kubenswrapper[4632]: I0313 11:15:30.467738 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fjcb8" event={"ID":"6b80498f-6567-4384-8312-3eec23afb96f","Type":"ContainerDied","Data":"e4c2cf5dbf98373fe62cd97ac31b0a9d4d0cfebb456d4b4a21937b1cdd3aa690"}
Mar 13 11:15:30 crc kubenswrapper[4632]: I0313 11:15:30.468072 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fjcb8" event={"ID":"6b80498f-6567-4384-8312-3eec23afb96f","Type":"ContainerStarted","Data":"d0a5afdd3947f0bc783bf017602e397a2d152c6ec52ad447d9ddbb2f47b390d3"}
Mar 13 11:15:31 crc kubenswrapper[4632]: I0313 11:15:31.494340 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fjcb8" event={"ID":"6b80498f-6567-4384-8312-3eec23afb96f","Type":"ContainerStarted","Data":"b5f67696ef58459c1f88ef05dc6509bd9cb7f8b1c9e52e4e87ca38283e2b4063"}
Mar 13 11:15:32 crc kubenswrapper[4632]: I0313 11:15:32.504415 4632 generic.go:334] "Generic (PLEG): container finished" podID="6b80498f-6567-4384-8312-3eec23afb96f" containerID="b5f67696ef58459c1f88ef05dc6509bd9cb7f8b1c9e52e4e87ca38283e2b4063" exitCode=0
Mar 13 11:15:32 crc kubenswrapper[4632]: I0313 11:15:32.504636 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fjcb8" event={"ID":"6b80498f-6567-4384-8312-3eec23afb96f","Type":"ContainerDied","Data":"b5f67696ef58459c1f88ef05dc6509bd9cb7f8b1c9e52e4e87ca38283e2b4063"}
Mar 13 11:15:33 crc kubenswrapper[4632]: I0313 11:15:33.515777 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fjcb8" event={"ID":"6b80498f-6567-4384-8312-3eec23afb96f","Type":"ContainerStarted","Data":"cae11f82bcd03a8e15a974a2d286987da044e8a582b503a6c527777766946f93"}
Mar 13 11:15:33 crc kubenswrapper[4632]: I0313 11:15:33.541754 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fjcb8" podStartSLOduration=3.001395962 podStartE2EDuration="5.541726037s" podCreationTimestamp="2026-03-13 11:15:28 +0000 UTC" firstStartedPulling="2026-03-13 11:15:30.470684129 +0000 UTC m=+4304.493214262" lastFinishedPulling="2026-03-13 11:15:33.011014204 +0000 UTC m=+4307.033544337" observedRunningTime="2026-03-13 11:15:33.536003915 +0000 UTC m=+4307.558534048" watchObservedRunningTime="2026-03-13 11:15:33.541726037 +0000 UTC m=+4307.564256170"
Mar 13 11:15:39 crc kubenswrapper[4632]: I0313 11:15:39.228689 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:39 crc kubenswrapper[4632]: I0313 11:15:39.229184 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:39 crc kubenswrapper[4632]: I0313 11:15:39.367784 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:40 crc kubenswrapper[4632]: I0313 11:15:40.304072 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:40 crc kubenswrapper[4632]: I0313 11:15:40.360319 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fjcb8"]
Mar 13 11:15:41 crc kubenswrapper[4632]: I0313 11:15:41.593733 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fjcb8" podUID="6b80498f-6567-4384-8312-3eec23afb96f" containerName="registry-server" containerID="cri-o://cae11f82bcd03a8e15a974a2d286987da044e8a582b503a6c527777766946f93" gracePeriod=2
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.469378 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.604285 4632 generic.go:334] "Generic (PLEG): container finished" podID="6b80498f-6567-4384-8312-3eec23afb96f" containerID="cae11f82bcd03a8e15a974a2d286987da044e8a582b503a6c527777766946f93" exitCode=0
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.604352 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fjcb8"
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.604376 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fjcb8" event={"ID":"6b80498f-6567-4384-8312-3eec23afb96f","Type":"ContainerDied","Data":"cae11f82bcd03a8e15a974a2d286987da044e8a582b503a6c527777766946f93"}
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.604775 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fjcb8" event={"ID":"6b80498f-6567-4384-8312-3eec23afb96f","Type":"ContainerDied","Data":"d0a5afdd3947f0bc783bf017602e397a2d152c6ec52ad447d9ddbb2f47b390d3"}
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.604805 4632 scope.go:117] "RemoveContainer" containerID="cae11f82bcd03a8e15a974a2d286987da044e8a582b503a6c527777766946f93"
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.625380 4632 scope.go:117] "RemoveContainer" containerID="b5f67696ef58459c1f88ef05dc6509bd9cb7f8b1c9e52e4e87ca38283e2b4063"
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.646651 4632 scope.go:117] "RemoveContainer" containerID="e4c2cf5dbf98373fe62cd97ac31b0a9d4d0cfebb456d4b4a21937b1cdd3aa690"
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.669191 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b80498f-6567-4384-8312-3eec23afb96f-catalog-content\") pod \"6b80498f-6567-4384-8312-3eec23afb96f\" (UID: \"6b80498f-6567-4384-8312-3eec23afb96f\") "
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.669268 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4v48\" (UniqueName: \"kubernetes.io/projected/6b80498f-6567-4384-8312-3eec23afb96f-kube-api-access-q4v48\") pod \"6b80498f-6567-4384-8312-3eec23afb96f\" (UID: \"6b80498f-6567-4384-8312-3eec23afb96f\") "
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.669305 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b80498f-6567-4384-8312-3eec23afb96f-utilities\") pod \"6b80498f-6567-4384-8312-3eec23afb96f\" (UID: \"6b80498f-6567-4384-8312-3eec23afb96f\") "
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.671207 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b80498f-6567-4384-8312-3eec23afb96f-utilities" (OuterVolumeSpecName: "utilities") pod "6b80498f-6567-4384-8312-3eec23afb96f" (UID: "6b80498f-6567-4384-8312-3eec23afb96f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.678072 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b80498f-6567-4384-8312-3eec23afb96f-kube-api-access-q4v48" (OuterVolumeSpecName: "kube-api-access-q4v48") pod "6b80498f-6567-4384-8312-3eec23afb96f" (UID: "6b80498f-6567-4384-8312-3eec23afb96f"). InnerVolumeSpecName "kube-api-access-q4v48". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.704981 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b80498f-6567-4384-8312-3eec23afb96f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b80498f-6567-4384-8312-3eec23afb96f" (UID: "6b80498f-6567-4384-8312-3eec23afb96f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.745662 4632 scope.go:117] "RemoveContainer" containerID="cae11f82bcd03a8e15a974a2d286987da044e8a582b503a6c527777766946f93"
Mar 13 11:15:42 crc kubenswrapper[4632]: E0313 11:15:42.747020 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cae11f82bcd03a8e15a974a2d286987da044e8a582b503a6c527777766946f93\": container with ID starting with cae11f82bcd03a8e15a974a2d286987da044e8a582b503a6c527777766946f93 not found: ID does not exist" containerID="cae11f82bcd03a8e15a974a2d286987da044e8a582b503a6c527777766946f93"
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.747082 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cae11f82bcd03a8e15a974a2d286987da044e8a582b503a6c527777766946f93"} err="failed to get container status \"cae11f82bcd03a8e15a974a2d286987da044e8a582b503a6c527777766946f93\": rpc error: code = NotFound desc = could not find container \"cae11f82bcd03a8e15a974a2d286987da044e8a582b503a6c527777766946f93\": container with ID starting with cae11f82bcd03a8e15a974a2d286987da044e8a582b503a6c527777766946f93 not found: ID does not exist"
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.747105 4632 scope.go:117] "RemoveContainer" containerID="b5f67696ef58459c1f88ef05dc6509bd9cb7f8b1c9e52e4e87ca38283e2b4063"
Mar 13 11:15:42 crc kubenswrapper[4632]: E0313 11:15:42.747749 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5f67696ef58459c1f88ef05dc6509bd9cb7f8b1c9e52e4e87ca38283e2b4063\": container with ID starting with b5f67696ef58459c1f88ef05dc6509bd9cb7f8b1c9e52e4e87ca38283e2b4063 not found: ID does not exist" containerID="b5f67696ef58459c1f88ef05dc6509bd9cb7f8b1c9e52e4e87ca38283e2b4063"
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.747807 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f67696ef58459c1f88ef05dc6509bd9cb7f8b1c9e52e4e87ca38283e2b4063"} err="failed to get container status \"b5f67696ef58459c1f88ef05dc6509bd9cb7f8b1c9e52e4e87ca38283e2b4063\": rpc error: code = NotFound desc = could not find container \"b5f67696ef58459c1f88ef05dc6509bd9cb7f8b1c9e52e4e87ca38283e2b4063\": container with ID starting with b5f67696ef58459c1f88ef05dc6509bd9cb7f8b1c9e52e4e87ca38283e2b4063 not found: ID does not exist"
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.747839 4632 scope.go:117] "RemoveContainer" containerID="e4c2cf5dbf98373fe62cd97ac31b0a9d4d0cfebb456d4b4a21937b1cdd3aa690"
Mar 13 11:15:42 crc kubenswrapper[4632]: E0313 11:15:42.748539 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4c2cf5dbf98373fe62cd97ac31b0a9d4d0cfebb456d4b4a21937b1cdd3aa690\": container with ID starting with e4c2cf5dbf98373fe62cd97ac31b0a9d4d0cfebb456d4b4a21937b1cdd3aa690 not found: ID does not exist" containerID="e4c2cf5dbf98373fe62cd97ac31b0a9d4d0cfebb456d4b4a21937b1cdd3aa690"
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.748592 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4c2cf5dbf98373fe62cd97ac31b0a9d4d0cfebb456d4b4a21937b1cdd3aa690"} err="failed to get container status \"e4c2cf5dbf98373fe62cd97ac31b0a9d4d0cfebb456d4b4a21937b1cdd3aa690\": rpc error: code = NotFound desc = could not find container \"e4c2cf5dbf98373fe62cd97ac31b0a9d4d0cfebb456d4b4a21937b1cdd3aa690\": container with ID starting with e4c2cf5dbf98373fe62cd97ac31b0a9d4d0cfebb456d4b4a21937b1cdd3aa690 not found: ID does not exist"
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.771175 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b80498f-6567-4384-8312-3eec23afb96f-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.771209 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4v48\" (UniqueName: \"kubernetes.io/projected/6b80498f-6567-4384-8312-3eec23afb96f-kube-api-access-q4v48\") on node \"crc\" DevicePath \"\""
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.771221 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b80498f-6567-4384-8312-3eec23afb96f-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.939500 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fjcb8"]
Mar 13 11:15:42 crc kubenswrapper[4632]: I0313 11:15:42.947400 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fjcb8"]
Mar 13 11:15:44 crc kubenswrapper[4632]: I0313 11:15:44.060785 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b80498f-6567-4384-8312-3eec23afb96f" path="/var/lib/kubelet/pods/6b80498f-6567-4384-8312-3eec23afb96f/volumes"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.035565 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mkx4r"]
Mar 13 11:15:51 crc kubenswrapper[4632]: E0313 11:15:51.038583 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b80498f-6567-4384-8312-3eec23afb96f" containerName="extract-content"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.038684 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b80498f-6567-4384-8312-3eec23afb96f" containerName="extract-content"
Mar 13 11:15:51 crc kubenswrapper[4632]: E0313 11:15:51.038776 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b80498f-6567-4384-8312-3eec23afb96f" containerName="extract-utilities"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.038869 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b80498f-6567-4384-8312-3eec23afb96f" containerName="extract-utilities"
Mar 13 11:15:51 crc kubenswrapper[4632]: E0313 11:15:51.039064 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b80498f-6567-4384-8312-3eec23afb96f" containerName="registry-server"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.039080 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b80498f-6567-4384-8312-3eec23afb96f" containerName="registry-server"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.039564 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b80498f-6567-4384-8312-3eec23afb96f" containerName="registry-server"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.041423 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkx4r"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.080806 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkx4r"]
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.154817 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43baafa6-f011-4a57-a843-9ff515c2d27c-catalog-content\") pod \"certified-operators-mkx4r\" (UID: \"43baafa6-f011-4a57-a843-9ff515c2d27c\") " pod="openshift-marketplace/certified-operators-mkx4r"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.155226 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gj6h\" (UniqueName: \"kubernetes.io/projected/43baafa6-f011-4a57-a843-9ff515c2d27c-kube-api-access-5gj6h\") pod \"certified-operators-mkx4r\" (UID: \"43baafa6-f011-4a57-a843-9ff515c2d27c\") " pod="openshift-marketplace/certified-operators-mkx4r"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.155317 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43baafa6-f011-4a57-a843-9ff515c2d27c-utilities\") pod \"certified-operators-mkx4r\" (UID: \"43baafa6-f011-4a57-a843-9ff515c2d27c\") " pod="openshift-marketplace/certified-operators-mkx4r"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.257494 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43baafa6-f011-4a57-a843-9ff515c2d27c-catalog-content\") pod \"certified-operators-mkx4r\" (UID: \"43baafa6-f011-4a57-a843-9ff515c2d27c\") " pod="openshift-marketplace/certified-operators-mkx4r"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.257562 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gj6h\" (UniqueName: \"kubernetes.io/projected/43baafa6-f011-4a57-a843-9ff515c2d27c-kube-api-access-5gj6h\") pod \"certified-operators-mkx4r\" (UID: \"43baafa6-f011-4a57-a843-9ff515c2d27c\") " pod="openshift-marketplace/certified-operators-mkx4r"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.257605 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43baafa6-f011-4a57-a843-9ff515c2d27c-utilities\") pod \"certified-operators-mkx4r\" (UID: \"43baafa6-f011-4a57-a843-9ff515c2d27c\") " pod="openshift-marketplace/certified-operators-mkx4r"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.258145 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43baafa6-f011-4a57-a843-9ff515c2d27c-utilities\") pod \"certified-operators-mkx4r\" (UID: \"43baafa6-f011-4a57-a843-9ff515c2d27c\") " pod="openshift-marketplace/certified-operators-mkx4r"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.258204 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43baafa6-f011-4a57-a843-9ff515c2d27c-catalog-content\") pod \"certified-operators-mkx4r\" (UID: \"43baafa6-f011-4a57-a843-9ff515c2d27c\") " pod="openshift-marketplace/certified-operators-mkx4r"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.278329 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gj6h\" (UniqueName: \"kubernetes.io/projected/43baafa6-f011-4a57-a843-9ff515c2d27c-kube-api-access-5gj6h\") pod \"certified-operators-mkx4r\" (UID: \"43baafa6-f011-4a57-a843-9ff515c2d27c\") " pod="openshift-marketplace/certified-operators-mkx4r"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.375595 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkx4r"
Mar 13 11:15:51 crc kubenswrapper[4632]: I0313 11:15:51.944068 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkx4r"]
Mar 13 11:15:52 crc kubenswrapper[4632]: I0313 11:15:52.717575 4632 generic.go:334] "Generic (PLEG): container finished" podID="43baafa6-f011-4a57-a843-9ff515c2d27c" containerID="0a8143ecbbf23166aa2635bf482dd844c2de6e91412ef57334b5958839b8171f" exitCode=0
Mar 13 11:15:52 crc kubenswrapper[4632]: I0313 11:15:52.717632 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkx4r" event={"ID":"43baafa6-f011-4a57-a843-9ff515c2d27c","Type":"ContainerDied","Data":"0a8143ecbbf23166aa2635bf482dd844c2de6e91412ef57334b5958839b8171f"}
Mar 13 11:15:52 crc kubenswrapper[4632]: I0313 11:15:52.718601 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkx4r" event={"ID":"43baafa6-f011-4a57-a843-9ff515c2d27c","Type":"ContainerStarted","Data":"2d6c4a914964242a2013f8a205bc41526ff8fae16e636ee7d8cd1ac5a0677874"}
Mar 13 11:15:54 crc kubenswrapper[4632]: I0313 11:15:54.739525 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkx4r" event={"ID":"43baafa6-f011-4a57-a843-9ff515c2d27c","Type":"ContainerStarted","Data":"eeb3c7988fe06aac21b41b63a9bb8fc4ac2e8f50f48b82caf52b00ff344abc9c"}
Mar 13 11:15:56 crc kubenswrapper[4632]: I0313 11:15:56.761779 4632 generic.go:334] "Generic (PLEG): container finished" podID="43baafa6-f011-4a57-a843-9ff515c2d27c" containerID="eeb3c7988fe06aac21b41b63a9bb8fc4ac2e8f50f48b82caf52b00ff344abc9c" exitCode=0
Mar 13 11:15:56 crc kubenswrapper[4632]: I0313 11:15:56.761888 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkx4r" event={"ID":"43baafa6-f011-4a57-a843-9ff515c2d27c","Type":"ContainerDied","Data":"eeb3c7988fe06aac21b41b63a9bb8fc4ac2e8f50f48b82caf52b00ff344abc9c"}
Mar 13 11:15:57 crc kubenswrapper[4632]: I0313 11:15:57.775592 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkx4r" event={"ID":"43baafa6-f011-4a57-a843-9ff515c2d27c","Type":"ContainerStarted","Data":"5068b71db4a67bf4fba229474a62974c266237a6aa5af74f3af86b3451d483dc"}
Mar 13 11:15:57 crc kubenswrapper[4632]: I0313 11:15:57.798344 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mkx4r" podStartSLOduration=2.334089454 podStartE2EDuration="6.798327702s" podCreationTimestamp="2026-03-13 11:15:51 +0000 UTC" firstStartedPulling="2026-03-13 11:15:52.72134613 +0000 UTC m=+4326.743876273" lastFinishedPulling="2026-03-13 11:15:57.185584388 +0000 UTC m=+4331.208114521" observedRunningTime="2026-03-13 11:15:57.793474231 +0000 UTC m=+4331.816004374" watchObservedRunningTime="2026-03-13 11:15:57.798327702 +0000 UTC m=+4331.820857835"
Mar 13 11:16:00 crc kubenswrapper[4632]: I0313 11:16:00.142032 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556676-8qz49"]
Mar 13 11:16:00 crc kubenswrapper[4632]: I0313 11:16:00.144405 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556676-8qz49"
Mar 13 11:16:00 crc kubenswrapper[4632]: I0313 11:16:00.147013 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 11:16:00 crc kubenswrapper[4632]: I0313 11:16:00.147720 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 11:16:00 crc kubenswrapper[4632]: I0313 11:16:00.148228 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 11:16:00 crc kubenswrapper[4632]: I0313 11:16:00.156549 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556676-8qz49"]
Mar 13 11:16:00 crc kubenswrapper[4632]: I0313 11:16:00.295139 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xznxs\" (UniqueName: \"kubernetes.io/projected/f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911-kube-api-access-xznxs\") pod \"auto-csr-approver-29556676-8qz49\" (UID: \"f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911\") " pod="openshift-infra/auto-csr-approver-29556676-8qz49"
Mar 13 11:16:00 crc kubenswrapper[4632]: I0313 11:16:00.396504 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xznxs\" (UniqueName: \"kubernetes.io/projected/f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911-kube-api-access-xznxs\") pod \"auto-csr-approver-29556676-8qz49\" (UID: \"f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911\") " pod="openshift-infra/auto-csr-approver-29556676-8qz49"
Mar 13 11:16:00 crc kubenswrapper[4632]: I0313 11:16:00.418274 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xznxs\" (UniqueName: \"kubernetes.io/projected/f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911-kube-api-access-xznxs\") pod \"auto-csr-approver-29556676-8qz49\" (UID: \"f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911\") " pod="openshift-infra/auto-csr-approver-29556676-8qz49"
Mar 13 11:16:00 crc kubenswrapper[4632]: I0313 11:16:00.464705 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556676-8qz49"
Mar 13 11:16:01 crc kubenswrapper[4632]: I0313 11:16:01.036198 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556676-8qz49"]
Mar 13 11:16:01 crc kubenswrapper[4632]: W0313 11:16:01.038089 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf97cdaa5_f90d_4cd0_9b62_7bb0c3b41911.slice/crio-4d598b65671a365f65cf6a6e23cbf30862f2afbcc820e1c2e386503e7ec46797 WatchSource:0}: Error finding container 4d598b65671a365f65cf6a6e23cbf30862f2afbcc820e1c2e386503e7ec46797: Status 404 returned error can't find the container with id 4d598b65671a365f65cf6a6e23cbf30862f2afbcc820e1c2e386503e7ec46797
Mar 13 11:16:01 crc kubenswrapper[4632]: I0313 11:16:01.376604 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mkx4r"
Mar 13 11:16:01 crc kubenswrapper[4632]: I0313 11:16:01.376709 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mkx4r"
Mar 13 11:16:01 crc kubenswrapper[4632]: I0313 11:16:01.820074 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556676-8qz49" event={"ID":"f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911","Type":"ContainerStarted","Data":"4d598b65671a365f65cf6a6e23cbf30862f2afbcc820e1c2e386503e7ec46797"}
Mar 13 11:16:02 crc kubenswrapper[4632]: I0313 11:16:02.449072 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mkx4r" podUID="43baafa6-f011-4a57-a843-9ff515c2d27c" containerName="registry-server" probeResult="failure" output=<
Mar 13 11:16:02 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 11:16:02 crc kubenswrapper[4632]: >
Mar 13 11:16:02 crc kubenswrapper[4632]: I0313 11:16:02.832561 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556676-8qz49" event={"ID":"f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911","Type":"ContainerStarted","Data":"1913657413fffb5d7b6f0c5a32e25db59682a49e52d39a0adc600808b4a0def3"}
Mar 13 11:16:02 crc kubenswrapper[4632]: I0313 11:16:02.857335 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556676-8qz49" podStartSLOduration=2.003172086 podStartE2EDuration="2.857307759s" podCreationTimestamp="2026-03-13 11:16:00 +0000 UTC" firstStartedPulling="2026-03-13 11:16:01.039824941 +0000 UTC m=+4335.062355074" lastFinishedPulling="2026-03-13 11:16:01.893960614 +0000 UTC m=+4335.916490747" observedRunningTime="2026-03-13 11:16:02.849153578 +0000 UTC m=+4336.871683721" watchObservedRunningTime="2026-03-13 11:16:02.857307759 +0000 UTC m=+4336.879837902"
Mar 13 11:16:03 crc kubenswrapper[4632]: I0313 11:16:03.746965 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-92vd7"]
Mar 13 11:16:03 crc kubenswrapper[4632]: I0313 11:16:03.752532 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-92vd7"
Mar 13 11:16:03 crc kubenswrapper[4632]: I0313 11:16:03.781207 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-92vd7"]
Mar 13 11:16:03 crc kubenswrapper[4632]: I0313 11:16:03.845583 4632 generic.go:334] "Generic (PLEG): container finished" podID="f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911" containerID="1913657413fffb5d7b6f0c5a32e25db59682a49e52d39a0adc600808b4a0def3" exitCode=0
Mar 13 11:16:03 crc kubenswrapper[4632]: I0313 11:16:03.845638 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556676-8qz49" event={"ID":"f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911","Type":"ContainerDied","Data":"1913657413fffb5d7b6f0c5a32e25db59682a49e52d39a0adc600808b4a0def3"}
Mar 13 11:16:03 crc kubenswrapper[4632]: I0313 11:16:03.873632 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-utilities\") pod \"community-operators-92vd7\" (UID: \"c7c05421-6c9d-4a5b-b77e-278ff5610dbb\") " pod="openshift-marketplace/community-operators-92vd7"
Mar 13 11:16:03 crc kubenswrapper[4632]: I0313 11:16:03.873716 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-catalog-content\") pod \"community-operators-92vd7\" (UID: \"c7c05421-6c9d-4a5b-b77e-278ff5610dbb\") " pod="openshift-marketplace/community-operators-92vd7"
Mar 13 11:16:03 crc kubenswrapper[4632]: I0313 11:16:03.873763 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbxvh\" (UniqueName: \"kubernetes.io/projected/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-kube-api-access-fbxvh\") pod \"community-operators-92vd7\" (UID: \"c7c05421-6c9d-4a5b-b77e-278ff5610dbb\") " pod="openshift-marketplace/community-operators-92vd7"
Mar 13 11:16:03 crc kubenswrapper[4632]: I0313 11:16:03.974968 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-catalog-content\") pod \"community-operators-92vd7\" (UID: \"c7c05421-6c9d-4a5b-b77e-278ff5610dbb\") " pod="openshift-marketplace/community-operators-92vd7"
Mar 13 11:16:03 crc kubenswrapper[4632]: I0313 11:16:03.975044 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbxvh\" (UniqueName: \"kubernetes.io/projected/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-kube-api-access-fbxvh\") pod \"community-operators-92vd7\" (UID: \"c7c05421-6c9d-4a5b-b77e-278ff5610dbb\") " pod="openshift-marketplace/community-operators-92vd7"
Mar 13 11:16:03 crc kubenswrapper[4632]: I0313 11:16:03.975142 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-utilities\") pod \"community-operators-92vd7\" (UID: \"c7c05421-6c9d-4a5b-b77e-278ff5610dbb\") " pod="openshift-marketplace/community-operators-92vd7"
Mar 13 11:16:03 crc kubenswrapper[4632]: I0313 11:16:03.975498 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-catalog-content\") pod \"community-operators-92vd7\"
(UID: \"c7c05421-6c9d-4a5b-b77e-278ff5610dbb\") " pod="openshift-marketplace/community-operators-92vd7" Mar 13 11:16:03 crc kubenswrapper[4632]: I0313 11:16:03.975572 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-utilities\") pod \"community-operators-92vd7\" (UID: \"c7c05421-6c9d-4a5b-b77e-278ff5610dbb\") " pod="openshift-marketplace/community-operators-92vd7" Mar 13 11:16:03 crc kubenswrapper[4632]: I0313 11:16:03.997813 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbxvh\" (UniqueName: \"kubernetes.io/projected/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-kube-api-access-fbxvh\") pod \"community-operators-92vd7\" (UID: \"c7c05421-6c9d-4a5b-b77e-278ff5610dbb\") " pod="openshift-marketplace/community-operators-92vd7" Mar 13 11:16:04 crc kubenswrapper[4632]: I0313 11:16:04.081149 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-92vd7" Mar 13 11:16:04 crc kubenswrapper[4632]: I0313 11:16:04.614338 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-92vd7"] Mar 13 11:16:04 crc kubenswrapper[4632]: I0313 11:16:04.854290 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92vd7" event={"ID":"c7c05421-6c9d-4a5b-b77e-278ff5610dbb","Type":"ContainerStarted","Data":"2292a7de70f1a922a86c5aac2e3477eea585fc2a4541f7fdab875f746948a354"} Mar 13 11:16:04 crc kubenswrapper[4632]: I0313 11:16:04.854578 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92vd7" event={"ID":"c7c05421-6c9d-4a5b-b77e-278ff5610dbb","Type":"ContainerStarted","Data":"b93e7fe938f5ed8d7dd19d8e5f95e035e0510382f4433390808953a862971f44"} Mar 13 11:16:05 crc kubenswrapper[4632]: I0313 11:16:05.246751 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556676-8qz49" Mar 13 11:16:05 crc kubenswrapper[4632]: I0313 11:16:05.300129 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xznxs\" (UniqueName: \"kubernetes.io/projected/f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911-kube-api-access-xznxs\") pod \"f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911\" (UID: \"f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911\") " Mar 13 11:16:05 crc kubenswrapper[4632]: I0313 11:16:05.308764 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911-kube-api-access-xznxs" (OuterVolumeSpecName: "kube-api-access-xznxs") pod "f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911" (UID: "f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911"). InnerVolumeSpecName "kube-api-access-xznxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:16:05 crc kubenswrapper[4632]: I0313 11:16:05.402703 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xznxs\" (UniqueName: \"kubernetes.io/projected/f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911-kube-api-access-xznxs\") on node \"crc\" DevicePath \"\"" Mar 13 11:16:05 crc kubenswrapper[4632]: I0313 11:16:05.866794 4632 generic.go:334] "Generic (PLEG): container finished" podID="c7c05421-6c9d-4a5b-b77e-278ff5610dbb" containerID="2292a7de70f1a922a86c5aac2e3477eea585fc2a4541f7fdab875f746948a354" exitCode=0 Mar 13 11:16:05 crc kubenswrapper[4632]: I0313 11:16:05.868288 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92vd7" event={"ID":"c7c05421-6c9d-4a5b-b77e-278ff5610dbb","Type":"ContainerDied","Data":"2292a7de70f1a922a86c5aac2e3477eea585fc2a4541f7fdab875f746948a354"} Mar 13 11:16:05 crc kubenswrapper[4632]: I0313 11:16:05.872453 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556676-8qz49" event={"ID":"f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911","Type":"ContainerDied","Data":"4d598b65671a365f65cf6a6e23cbf30862f2afbcc820e1c2e386503e7ec46797"} Mar 13 11:16:05 crc kubenswrapper[4632]: I0313 11:16:05.872513 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d598b65671a365f65cf6a6e23cbf30862f2afbcc820e1c2e386503e7ec46797" Mar 13 11:16:05 crc kubenswrapper[4632]: I0313 11:16:05.872617 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556676-8qz49" Mar 13 11:16:05 crc kubenswrapper[4632]: I0313 11:16:05.948229 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556670-fqvf6"] Mar 13 11:16:05 crc kubenswrapper[4632]: I0313 11:16:05.957555 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556670-fqvf6"] Mar 13 11:16:06 crc kubenswrapper[4632]: I0313 11:16:06.061354 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="565a5983-3957-42c2-b7d4-47d26e00aec8" path="/var/lib/kubelet/pods/565a5983-3957-42c2-b7d4-47d26e00aec8/volumes" Mar 13 11:16:06 crc kubenswrapper[4632]: I0313 11:16:06.883204 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92vd7" event={"ID":"c7c05421-6c9d-4a5b-b77e-278ff5610dbb","Type":"ContainerStarted","Data":"863e77cb00d2ce8145dde64bc045043b8b85335c157a29f91e5e18fe7f269448"} Mar 13 11:16:08 crc kubenswrapper[4632]: I0313 11:16:08.904496 4632 generic.go:334] "Generic (PLEG): container finished" podID="c7c05421-6c9d-4a5b-b77e-278ff5610dbb" containerID="863e77cb00d2ce8145dde64bc045043b8b85335c157a29f91e5e18fe7f269448" exitCode=0 Mar 13 11:16:08 crc kubenswrapper[4632]: I0313 11:16:08.904589 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92vd7" event={"ID":"c7c05421-6c9d-4a5b-b77e-278ff5610dbb","Type":"ContainerDied","Data":"863e77cb00d2ce8145dde64bc045043b8b85335c157a29f91e5e18fe7f269448"} Mar 13 11:16:09 crc kubenswrapper[4632]: I0313 11:16:09.916231 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92vd7" event={"ID":"c7c05421-6c9d-4a5b-b77e-278ff5610dbb","Type":"ContainerStarted","Data":"4716208f34ca87ca08b3d99ecbc5950cf92ba98ce62d017396c4d72582dffff5"} Mar 13 11:16:09 crc kubenswrapper[4632]: I0313 11:16:09.940092 4632 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-92vd7" podStartSLOduration=3.262218417 podStartE2EDuration="6.940073442s" podCreationTimestamp="2026-03-13 11:16:03 +0000 UTC" firstStartedPulling="2026-03-13 11:16:05.868924052 +0000 UTC m=+4339.891454205" lastFinishedPulling="2026-03-13 11:16:09.546779077 +0000 UTC m=+4343.569309230" observedRunningTime="2026-03-13 11:16:09.938754169 +0000 UTC m=+4343.961284312" watchObservedRunningTime="2026-03-13 11:16:09.940073442 +0000 UTC m=+4343.962603575" Mar 13 11:16:10 crc kubenswrapper[4632]: I0313 11:16:10.461163 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:16:10 crc kubenswrapper[4632]: I0313 11:16:10.461216 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:16:12 crc kubenswrapper[4632]: I0313 11:16:12.455830 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mkx4r" podUID="43baafa6-f011-4a57-a843-9ff515c2d27c" containerName="registry-server" probeResult="failure" output=< Mar 13 11:16:12 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:16:12 crc kubenswrapper[4632]: > Mar 13 11:16:14 crc kubenswrapper[4632]: I0313 11:16:14.081867 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-92vd7" Mar 13 11:16:14 crc kubenswrapper[4632]: I0313 11:16:14.083094 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-92vd7" Mar 13 11:16:15 crc kubenswrapper[4632]: I0313 11:16:15.144266 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-92vd7" podUID="c7c05421-6c9d-4a5b-b77e-278ff5610dbb" containerName="registry-server" probeResult="failure" output=< Mar 13 11:16:15 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:16:15 crc kubenswrapper[4632]: > Mar 13 11:16:16 crc kubenswrapper[4632]: I0313 11:16:16.988738 4632 scope.go:117] "RemoveContainer" containerID="c0218119e7ac388fadab5a0e90f8eec2d8161ed6d34eaec2b46cb615f7e41508" Mar 13 11:16:21 crc kubenswrapper[4632]: I0313 11:16:21.606664 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mkx4r" Mar 13 11:16:21 crc kubenswrapper[4632]: I0313 11:16:21.664970 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mkx4r" Mar 13 11:16:22 crc kubenswrapper[4632]: I0313 11:16:22.241770 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkx4r"] Mar 13 11:16:23 crc kubenswrapper[4632]: I0313 11:16:23.041891 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mkx4r" podUID="43baafa6-f011-4a57-a843-9ff515c2d27c" containerName="registry-server" 
containerID="cri-o://5068b71db4a67bf4fba229474a62974c266237a6aa5af74f3af86b3451d483dc" gracePeriod=2 Mar 13 11:16:23 crc kubenswrapper[4632]: I0313 11:16:23.780494 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkx4r" Mar 13 11:16:23 crc kubenswrapper[4632]: I0313 11:16:23.799235 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43baafa6-f011-4a57-a843-9ff515c2d27c-utilities\") pod \"43baafa6-f011-4a57-a843-9ff515c2d27c\" (UID: \"43baafa6-f011-4a57-a843-9ff515c2d27c\") " Mar 13 11:16:23 crc kubenswrapper[4632]: I0313 11:16:23.799288 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43baafa6-f011-4a57-a843-9ff515c2d27c-catalog-content\") pod \"43baafa6-f011-4a57-a843-9ff515c2d27c\" (UID: \"43baafa6-f011-4a57-a843-9ff515c2d27c\") " Mar 13 11:16:23 crc kubenswrapper[4632]: I0313 11:16:23.799496 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gj6h\" (UniqueName: \"kubernetes.io/projected/43baafa6-f011-4a57-a843-9ff515c2d27c-kube-api-access-5gj6h\") pod \"43baafa6-f011-4a57-a843-9ff515c2d27c\" (UID: \"43baafa6-f011-4a57-a843-9ff515c2d27c\") " Mar 13 11:16:23 crc kubenswrapper[4632]: I0313 11:16:23.801083 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43baafa6-f011-4a57-a843-9ff515c2d27c-utilities" (OuterVolumeSpecName: "utilities") pod "43baafa6-f011-4a57-a843-9ff515c2d27c" (UID: "43baafa6-f011-4a57-a843-9ff515c2d27c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:16:23 crc kubenswrapper[4632]: I0313 11:16:23.836266 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43baafa6-f011-4a57-a843-9ff515c2d27c-kube-api-access-5gj6h" (OuterVolumeSpecName: "kube-api-access-5gj6h") pod "43baafa6-f011-4a57-a843-9ff515c2d27c" (UID: "43baafa6-f011-4a57-a843-9ff515c2d27c"). InnerVolumeSpecName "kube-api-access-5gj6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:16:23 crc kubenswrapper[4632]: I0313 11:16:23.903918 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43baafa6-f011-4a57-a843-9ff515c2d27c-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:16:23 crc kubenswrapper[4632]: I0313 11:16:23.903997 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gj6h\" (UniqueName: \"kubernetes.io/projected/43baafa6-f011-4a57-a843-9ff515c2d27c-kube-api-access-5gj6h\") on node \"crc\" DevicePath \"\"" Mar 13 11:16:23 crc kubenswrapper[4632]: I0313 11:16:23.916770 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43baafa6-f011-4a57-a843-9ff515c2d27c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43baafa6-f011-4a57-a843-9ff515c2d27c" (UID: "43baafa6-f011-4a57-a843-9ff515c2d27c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.006252 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43baafa6-f011-4a57-a843-9ff515c2d27c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.051509 4632 generic.go:334] "Generic (PLEG): container finished" podID="43baafa6-f011-4a57-a843-9ff515c2d27c" containerID="5068b71db4a67bf4fba229474a62974c266237a6aa5af74f3af86b3451d483dc" exitCode=0 Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.051600 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkx4r" Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.054280 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkx4r" event={"ID":"43baafa6-f011-4a57-a843-9ff515c2d27c","Type":"ContainerDied","Data":"5068b71db4a67bf4fba229474a62974c266237a6aa5af74f3af86b3451d483dc"} Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.054328 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkx4r" event={"ID":"43baafa6-f011-4a57-a843-9ff515c2d27c","Type":"ContainerDied","Data":"2d6c4a914964242a2013f8a205bc41526ff8fae16e636ee7d8cd1ac5a0677874"} Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.054349 4632 scope.go:117] "RemoveContainer" containerID="5068b71db4a67bf4fba229474a62974c266237a6aa5af74f3af86b3451d483dc" Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.106701 4632 scope.go:117] "RemoveContainer" containerID="eeb3c7988fe06aac21b41b63a9bb8fc4ac2e8f50f48b82caf52b00ff344abc9c" Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.110590 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkx4r"] Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.116650 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mkx4r"] Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.142605 4632 scope.go:117] "RemoveContainer" containerID="0a8143ecbbf23166aa2635bf482dd844c2de6e91412ef57334b5958839b8171f" Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.148060 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-92vd7" Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.207352 4632 scope.go:117] "RemoveContainer" containerID="5068b71db4a67bf4fba229474a62974c266237a6aa5af74f3af86b3451d483dc" Mar 13 11:16:24 crc kubenswrapper[4632]: E0313 11:16:24.208178 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5068b71db4a67bf4fba229474a62974c266237a6aa5af74f3af86b3451d483dc\": container with ID starting with 5068b71db4a67bf4fba229474a62974c266237a6aa5af74f3af86b3451d483dc not found: ID does not exist" containerID="5068b71db4a67bf4fba229474a62974c266237a6aa5af74f3af86b3451d483dc" Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.208363 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5068b71db4a67bf4fba229474a62974c266237a6aa5af74f3af86b3451d483dc"} err="failed to get container status \"5068b71db4a67bf4fba229474a62974c266237a6aa5af74f3af86b3451d483dc\": rpc error: code = NotFound desc = could not find container 
\"5068b71db4a67bf4fba229474a62974c266237a6aa5af74f3af86b3451d483dc\": container with ID starting with 5068b71db4a67bf4fba229474a62974c266237a6aa5af74f3af86b3451d483dc not found: ID does not exist" Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.208444 4632 scope.go:117] "RemoveContainer" containerID="eeb3c7988fe06aac21b41b63a9bb8fc4ac2e8f50f48b82caf52b00ff344abc9c" Mar 13 11:16:24 crc kubenswrapper[4632]: E0313 11:16:24.210229 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeb3c7988fe06aac21b41b63a9bb8fc4ac2e8f50f48b82caf52b00ff344abc9c\": container with ID starting with eeb3c7988fe06aac21b41b63a9bb8fc4ac2e8f50f48b82caf52b00ff344abc9c not found: ID does not exist" containerID="eeb3c7988fe06aac21b41b63a9bb8fc4ac2e8f50f48b82caf52b00ff344abc9c" Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.210346 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeb3c7988fe06aac21b41b63a9bb8fc4ac2e8f50f48b82caf52b00ff344abc9c"} err="failed to get container status \"eeb3c7988fe06aac21b41b63a9bb8fc4ac2e8f50f48b82caf52b00ff344abc9c\": rpc error: code = NotFound desc = could not find container \"eeb3c7988fe06aac21b41b63a9bb8fc4ac2e8f50f48b82caf52b00ff344abc9c\": container with ID starting with eeb3c7988fe06aac21b41b63a9bb8fc4ac2e8f50f48b82caf52b00ff344abc9c not found: ID does not exist" Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.210443 4632 scope.go:117] "RemoveContainer" containerID="0a8143ecbbf23166aa2635bf482dd844c2de6e91412ef57334b5958839b8171f" Mar 13 11:16:24 crc kubenswrapper[4632]: E0313 11:16:24.210781 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a8143ecbbf23166aa2635bf482dd844c2de6e91412ef57334b5958839b8171f\": container with ID starting with 0a8143ecbbf23166aa2635bf482dd844c2de6e91412ef57334b5958839b8171f not found: ID does not exist" containerID="0a8143ecbbf23166aa2635bf482dd844c2de6e91412ef57334b5958839b8171f" Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.210869 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a8143ecbbf23166aa2635bf482dd844c2de6e91412ef57334b5958839b8171f"} err="failed to get container status \"0a8143ecbbf23166aa2635bf482dd844c2de6e91412ef57334b5958839b8171f\": rpc error: code = NotFound desc = could not find container \"0a8143ecbbf23166aa2635bf482dd844c2de6e91412ef57334b5958839b8171f\": container with ID starting with 0a8143ecbbf23166aa2635bf482dd844c2de6e91412ef57334b5958839b8171f not found: ID does not exist" Mar 13 11:16:24 crc kubenswrapper[4632]: I0313 11:16:24.241766 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-92vd7" Mar 13 11:16:26 crc kubenswrapper[4632]: I0313 11:16:26.075046 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43baafa6-f011-4a57-a843-9ff515c2d27c" path="/var/lib/kubelet/pods/43baafa6-f011-4a57-a843-9ff515c2d27c/volumes" Mar 13 11:16:26 crc kubenswrapper[4632]: I0313 11:16:26.444005 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-92vd7"] Mar 13 11:16:26 crc kubenswrapper[4632]: I0313 11:16:26.444501 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-92vd7" podUID="c7c05421-6c9d-4a5b-b77e-278ff5610dbb" containerName="registry-server" 
containerID="cri-o://4716208f34ca87ca08b3d99ecbc5950cf92ba98ce62d017396c4d72582dffff5" gracePeriod=2 Mar 13 11:16:26 crc kubenswrapper[4632]: I0313 11:16:26.996098 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-92vd7" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.093821 4632 generic.go:334] "Generic (PLEG): container finished" podID="c7c05421-6c9d-4a5b-b77e-278ff5610dbb" containerID="4716208f34ca87ca08b3d99ecbc5950cf92ba98ce62d017396c4d72582dffff5" exitCode=0 Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.093872 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-92vd7" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.093872 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92vd7" event={"ID":"c7c05421-6c9d-4a5b-b77e-278ff5610dbb","Type":"ContainerDied","Data":"4716208f34ca87ca08b3d99ecbc5950cf92ba98ce62d017396c4d72582dffff5"} Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.094047 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92vd7" event={"ID":"c7c05421-6c9d-4a5b-b77e-278ff5610dbb","Type":"ContainerDied","Data":"b93e7fe938f5ed8d7dd19d8e5f95e035e0510382f4433390808953a862971f44"} Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.094075 4632 scope.go:117] "RemoveContainer" containerID="4716208f34ca87ca08b3d99ecbc5950cf92ba98ce62d017396c4d72582dffff5" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.117309 4632 scope.go:117] "RemoveContainer" containerID="863e77cb00d2ce8145dde64bc045043b8b85335c157a29f91e5e18fe7f269448" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.134629 4632 scope.go:117] "RemoveContainer" containerID="2292a7de70f1a922a86c5aac2e3477eea585fc2a4541f7fdab875f746948a354" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.180260 4632 scope.go:117] "RemoveContainer" containerID="4716208f34ca87ca08b3d99ecbc5950cf92ba98ce62d017396c4d72582dffff5" Mar 13 11:16:27 crc kubenswrapper[4632]: E0313 11:16:27.180782 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4716208f34ca87ca08b3d99ecbc5950cf92ba98ce62d017396c4d72582dffff5\": container with ID starting with 4716208f34ca87ca08b3d99ecbc5950cf92ba98ce62d017396c4d72582dffff5 not found: ID does not exist" containerID="4716208f34ca87ca08b3d99ecbc5950cf92ba98ce62d017396c4d72582dffff5" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.180832 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4716208f34ca87ca08b3d99ecbc5950cf92ba98ce62d017396c4d72582dffff5"} err="failed to get container status \"4716208f34ca87ca08b3d99ecbc5950cf92ba98ce62d017396c4d72582dffff5\": rpc error: code = NotFound desc = could not find container \"4716208f34ca87ca08b3d99ecbc5950cf92ba98ce62d017396c4d72582dffff5\": container with ID starting with 4716208f34ca87ca08b3d99ecbc5950cf92ba98ce62d017396c4d72582dffff5 not found: ID does not exist" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.180863 4632 scope.go:117] "RemoveContainer" containerID="863e77cb00d2ce8145dde64bc045043b8b85335c157a29f91e5e18fe7f269448" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.181655 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbxvh\" (UniqueName: 
\"kubernetes.io/projected/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-kube-api-access-fbxvh\") pod \"c7c05421-6c9d-4a5b-b77e-278ff5610dbb\" (UID: \"c7c05421-6c9d-4a5b-b77e-278ff5610dbb\") " Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.181875 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-utilities\") pod \"c7c05421-6c9d-4a5b-b77e-278ff5610dbb\" (UID: \"c7c05421-6c9d-4a5b-b77e-278ff5610dbb\") " Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.182011 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-catalog-content\") pod \"c7c05421-6c9d-4a5b-b77e-278ff5610dbb\" (UID: \"c7c05421-6c9d-4a5b-b77e-278ff5610dbb\") " Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.183697 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-utilities" (OuterVolumeSpecName: "utilities") pod "c7c05421-6c9d-4a5b-b77e-278ff5610dbb" (UID: "c7c05421-6c9d-4a5b-b77e-278ff5610dbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:16:27 crc kubenswrapper[4632]: E0313 11:16:27.184001 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"863e77cb00d2ce8145dde64bc045043b8b85335c157a29f91e5e18fe7f269448\": container with ID starting with 863e77cb00d2ce8145dde64bc045043b8b85335c157a29f91e5e18fe7f269448 not found: ID does not exist" containerID="863e77cb00d2ce8145dde64bc045043b8b85335c157a29f91e5e18fe7f269448" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.184033 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"863e77cb00d2ce8145dde64bc045043b8b85335c157a29f91e5e18fe7f269448"} err="failed to get container status \"863e77cb00d2ce8145dde64bc045043b8b85335c157a29f91e5e18fe7f269448\": rpc error: code = NotFound desc = could not find container \"863e77cb00d2ce8145dde64bc045043b8b85335c157a29f91e5e18fe7f269448\": container with ID starting with 863e77cb00d2ce8145dde64bc045043b8b85335c157a29f91e5e18fe7f269448 not found: ID does not exist" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.184060 4632 scope.go:117] "RemoveContainer" containerID="2292a7de70f1a922a86c5aac2e3477eea585fc2a4541f7fdab875f746948a354" Mar 13 11:16:27 crc kubenswrapper[4632]: E0313 11:16:27.184573 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2292a7de70f1a922a86c5aac2e3477eea585fc2a4541f7fdab875f746948a354\": container with ID starting with 2292a7de70f1a922a86c5aac2e3477eea585fc2a4541f7fdab875f746948a354 not found: ID does not exist" containerID="2292a7de70f1a922a86c5aac2e3477eea585fc2a4541f7fdab875f746948a354" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.184654 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2292a7de70f1a922a86c5aac2e3477eea585fc2a4541f7fdab875f746948a354"} err="failed to get container status \"2292a7de70f1a922a86c5aac2e3477eea585fc2a4541f7fdab875f746948a354\": rpc error: code = NotFound desc = could not find container \"2292a7de70f1a922a86c5aac2e3477eea585fc2a4541f7fdab875f746948a354\": container with ID starting with 
2292a7de70f1a922a86c5aac2e3477eea585fc2a4541f7fdab875f746948a354 not found: ID does not exist" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.191291 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-kube-api-access-fbxvh" (OuterVolumeSpecName: "kube-api-access-fbxvh") pod "c7c05421-6c9d-4a5b-b77e-278ff5610dbb" (UID: "c7c05421-6c9d-4a5b-b77e-278ff5610dbb"). InnerVolumeSpecName "kube-api-access-fbxvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.237182 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7c05421-6c9d-4a5b-b77e-278ff5610dbb" (UID: "c7c05421-6c9d-4a5b-b77e-278ff5610dbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.285101 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbxvh\" (UniqueName: \"kubernetes.io/projected/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-kube-api-access-fbxvh\") on node \"crc\" DevicePath \"\"" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.285173 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.285186 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c05421-6c9d-4a5b-b77e-278ff5610dbb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.450439 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-92vd7"] Mar 13 11:16:27 crc kubenswrapper[4632]: I0313 11:16:27.466870 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-92vd7"] Mar 13 11:16:28 crc kubenswrapper[4632]: I0313 11:16:28.058380 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7c05421-6c9d-4a5b-b77e-278ff5610dbb" path="/var/lib/kubelet/pods/c7c05421-6c9d-4a5b-b77e-278ff5610dbb/volumes" Mar 13 11:16:40 crc kubenswrapper[4632]: I0313 11:16:40.460858 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:16:40 crc kubenswrapper[4632]: I0313 11:16:40.462112 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:17:10 crc kubenswrapper[4632]: I0313 11:17:10.460620 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:17:10 crc kubenswrapper[4632]: 
I0313 11:17:10.461199 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:17:10 crc kubenswrapper[4632]: I0313 11:17:10.461247 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 11:17:10 crc kubenswrapper[4632]: I0313 11:17:10.462212 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 11:17:10 crc kubenswrapper[4632]: I0313 11:17:10.462272 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" gracePeriod=600 Mar 13 11:17:10 crc kubenswrapper[4632]: I0313 11:17:10.678270 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" exitCode=0 Mar 13 11:17:10 crc kubenswrapper[4632]: I0313 11:17:10.678330 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a"} Mar 13 11:17:10 crc kubenswrapper[4632]: I0313 11:17:10.678403 4632 scope.go:117] "RemoveContainer" containerID="908510815c251e300c3555d9b7458818dfba317c0679f487df71c717e5c832f9" Mar 13 11:17:11 crc kubenswrapper[4632]: E0313 11:17:11.273265 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:17:11 crc kubenswrapper[4632]: I0313 11:17:11.701645 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:17:11 crc kubenswrapper[4632]: E0313 11:17:11.701892 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:17:25 crc kubenswrapper[4632]: I0313 11:17:25.044688 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:17:25 crc kubenswrapper[4632]: E0313 
11:17:25.045541 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:17:36 crc kubenswrapper[4632]: I0313 11:17:36.044471 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:17:36 crc kubenswrapper[4632]: E0313 11:17:36.045252 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:17:38 crc kubenswrapper[4632]: I0313 11:17:38.719889 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gcqr2"] Mar 13 11:17:38 crc kubenswrapper[4632]: E0313 11:17:38.720719 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c05421-6c9d-4a5b-b77e-278ff5610dbb" containerName="extract-utilities" Mar 13 11:17:38 crc kubenswrapper[4632]: I0313 11:17:38.720736 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c05421-6c9d-4a5b-b77e-278ff5610dbb" containerName="extract-utilities" Mar 13 11:17:38 crc kubenswrapper[4632]: E0313 11:17:38.720755 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43baafa6-f011-4a57-a843-9ff515c2d27c" containerName="extract-content" Mar 13 11:17:38 crc kubenswrapper[4632]: I0313 11:17:38.720763 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="43baafa6-f011-4a57-a843-9ff515c2d27c" containerName="extract-content" Mar 13 11:17:38 crc kubenswrapper[4632]: E0313 11:17:38.720790 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911" containerName="oc" Mar 13 11:17:38 crc kubenswrapper[4632]: I0313 11:17:38.720799 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911" containerName="oc" Mar 13 11:17:38 crc kubenswrapper[4632]: E0313 11:17:38.720817 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c05421-6c9d-4a5b-b77e-278ff5610dbb" containerName="extract-content" Mar 13 11:17:38 crc kubenswrapper[4632]: I0313 11:17:38.720825 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c05421-6c9d-4a5b-b77e-278ff5610dbb" containerName="extract-content" Mar 13 11:17:38 crc kubenswrapper[4632]: E0313 11:17:38.720838 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43baafa6-f011-4a57-a843-9ff515c2d27c" containerName="registry-server" Mar 13 11:17:38 crc kubenswrapper[4632]: I0313 11:17:38.720846 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="43baafa6-f011-4a57-a843-9ff515c2d27c" containerName="registry-server" Mar 13 11:17:38 crc kubenswrapper[4632]: E0313 11:17:38.720862 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c05421-6c9d-4a5b-b77e-278ff5610dbb" containerName="registry-server" Mar 13 11:17:38 crc kubenswrapper[4632]: I0313 11:17:38.720870 4632 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c7c05421-6c9d-4a5b-b77e-278ff5610dbb" containerName="registry-server" Mar 13 11:17:38 crc kubenswrapper[4632]: E0313 11:17:38.720885 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43baafa6-f011-4a57-a843-9ff515c2d27c" containerName="extract-utilities" Mar 13 11:17:38 crc kubenswrapper[4632]: I0313 11:17:38.720893 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="43baafa6-f011-4a57-a843-9ff515c2d27c" containerName="extract-utilities" Mar 13 11:17:38 crc kubenswrapper[4632]: I0313 11:17:38.721210 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7c05421-6c9d-4a5b-b77e-278ff5610dbb" containerName="registry-server" Mar 13 11:17:38 crc kubenswrapper[4632]: I0313 11:17:38.721232 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911" containerName="oc" Mar 13 11:17:38 crc kubenswrapper[4632]: I0313 11:17:38.721258 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="43baafa6-f011-4a57-a843-9ff515c2d27c" containerName="registry-server" Mar 13 11:17:38 crc kubenswrapper[4632]: I0313 11:17:38.723579 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:17:38 crc kubenswrapper[4632]: I0313 11:17:38.761430 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gcqr2"] Mar 13 11:17:38 crc kubenswrapper[4632]: I0313 11:17:38.902073 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27c3c199-cc34-438a-ac59-4555ee7c5a1d-catalog-content\") pod \"redhat-operators-gcqr2\" (UID: \"27c3c199-cc34-438a-ac59-4555ee7c5a1d\") " pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:17:38 crc kubenswrapper[4632]: I0313 11:17:38.902135 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x27fk\" (UniqueName: \"kubernetes.io/projected/27c3c199-cc34-438a-ac59-4555ee7c5a1d-kube-api-access-x27fk\") pod \"redhat-operators-gcqr2\" (UID: \"27c3c199-cc34-438a-ac59-4555ee7c5a1d\") " pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:17:38 crc kubenswrapper[4632]: I0313 11:17:38.902162 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27c3c199-cc34-438a-ac59-4555ee7c5a1d-utilities\") pod \"redhat-operators-gcqr2\" (UID: \"27c3c199-cc34-438a-ac59-4555ee7c5a1d\") " pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:17:39 crc kubenswrapper[4632]: I0313 11:17:39.004379 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x27fk\" (UniqueName: \"kubernetes.io/projected/27c3c199-cc34-438a-ac59-4555ee7c5a1d-kube-api-access-x27fk\") pod \"redhat-operators-gcqr2\" (UID: \"27c3c199-cc34-438a-ac59-4555ee7c5a1d\") " pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:17:39 crc kubenswrapper[4632]: I0313 11:17:39.004448 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27c3c199-cc34-438a-ac59-4555ee7c5a1d-utilities\") pod \"redhat-operators-gcqr2\" (UID: \"27c3c199-cc34-438a-ac59-4555ee7c5a1d\") " pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:17:39 crc kubenswrapper[4632]: I0313 11:17:39.004607 4632 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27c3c199-cc34-438a-ac59-4555ee7c5a1d-catalog-content\") pod \"redhat-operators-gcqr2\" (UID: \"27c3c199-cc34-438a-ac59-4555ee7c5a1d\") " pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:17:39 crc kubenswrapper[4632]: I0313 11:17:39.004982 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27c3c199-cc34-438a-ac59-4555ee7c5a1d-utilities\") pod \"redhat-operators-gcqr2\" (UID: \"27c3c199-cc34-438a-ac59-4555ee7c5a1d\") " pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:17:39 crc kubenswrapper[4632]: I0313 11:17:39.005049 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27c3c199-cc34-438a-ac59-4555ee7c5a1d-catalog-content\") pod \"redhat-operators-gcqr2\" (UID: \"27c3c199-cc34-438a-ac59-4555ee7c5a1d\") " pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:17:39 crc kubenswrapper[4632]: I0313 11:17:39.027877 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x27fk\" (UniqueName: \"kubernetes.io/projected/27c3c199-cc34-438a-ac59-4555ee7c5a1d-kube-api-access-x27fk\") pod \"redhat-operators-gcqr2\" (UID: \"27c3c199-cc34-438a-ac59-4555ee7c5a1d\") " pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:17:39 crc kubenswrapper[4632]: I0313 11:17:39.055411 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:17:39 crc kubenswrapper[4632]: I0313 11:17:39.707862 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gcqr2"] Mar 13 11:17:40 crc kubenswrapper[4632]: I0313 11:17:40.065657 4632 generic.go:334] "Generic (PLEG): container finished" podID="27c3c199-cc34-438a-ac59-4555ee7c5a1d" containerID="8fc46f8a96264e119d9565fb1071d38b9eca7de4da0002eb50dc0db108fea360" exitCode=0 Mar 13 11:17:40 crc kubenswrapper[4632]: I0313 11:17:40.065987 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcqr2" event={"ID":"27c3c199-cc34-438a-ac59-4555ee7c5a1d","Type":"ContainerDied","Data":"8fc46f8a96264e119d9565fb1071d38b9eca7de4da0002eb50dc0db108fea360"} Mar 13 11:17:40 crc kubenswrapper[4632]: I0313 11:17:40.066047 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcqr2" event={"ID":"27c3c199-cc34-438a-ac59-4555ee7c5a1d","Type":"ContainerStarted","Data":"507ec2a71889920853d24af4b68882a43ebcc431eef7fce060f7fd2fe2c9a9ee"} Mar 13 11:17:42 crc kubenswrapper[4632]: I0313 11:17:42.123464 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcqr2" event={"ID":"27c3c199-cc34-438a-ac59-4555ee7c5a1d","Type":"ContainerStarted","Data":"f9fc93bd191406b3e302e5390ddb104f245922f0c823cd3aa32e935c54c7057f"} Mar 13 11:17:47 crc kubenswrapper[4632]: I0313 11:17:47.045101 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:17:47 crc kubenswrapper[4632]: E0313 11:17:47.046157 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:17:47 crc kubenswrapper[4632]: I0313 11:17:47.180408 4632 generic.go:334] "Generic (PLEG): container finished" podID="27c3c199-cc34-438a-ac59-4555ee7c5a1d" containerID="f9fc93bd191406b3e302e5390ddb104f245922f0c823cd3aa32e935c54c7057f" exitCode=0 Mar 13 11:17:47 crc kubenswrapper[4632]: I0313 11:17:47.180490 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcqr2" event={"ID":"27c3c199-cc34-438a-ac59-4555ee7c5a1d","Type":"ContainerDied","Data":"f9fc93bd191406b3e302e5390ddb104f245922f0c823cd3aa32e935c54c7057f"} Mar 13 11:17:48 crc kubenswrapper[4632]: I0313 11:17:48.193261 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcqr2" event={"ID":"27c3c199-cc34-438a-ac59-4555ee7c5a1d","Type":"ContainerStarted","Data":"098f5138f07f45ec1bf0ad95c6647f82c5303be4fdf81f24941c5f5d332834fa"} Mar 13 11:17:48 crc kubenswrapper[4632]: I0313 11:17:48.267497 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gcqr2" podStartSLOduration=2.674656681 podStartE2EDuration="10.267479921s" podCreationTimestamp="2026-03-13 11:17:38 +0000 UTC" firstStartedPulling="2026-03-13 11:17:40.071423264 +0000 UTC m=+4434.093953397" lastFinishedPulling="2026-03-13 11:17:47.664246504 +0000 UTC m=+4441.686776637" observedRunningTime="2026-03-13 11:17:48.265262626 +0000 UTC m=+4442.287792779" watchObservedRunningTime="2026-03-13 11:17:48.267479921 +0000 UTC m=+4442.290010054" Mar 13 11:17:49 crc kubenswrapper[4632]: I0313 11:17:49.056533 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:17:49 crc kubenswrapper[4632]: I0313 11:17:49.057116 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:17:50 crc kubenswrapper[4632]: I0313 11:17:50.129759 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gcqr2" podUID="27c3c199-cc34-438a-ac59-4555ee7c5a1d" containerName="registry-server" probeResult="failure" output=< Mar 13 11:17:50 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:17:50 crc kubenswrapper[4632]: > Mar 13 11:18:00 crc kubenswrapper[4632]: I0313 11:18:00.114076 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gcqr2" podUID="27c3c199-cc34-438a-ac59-4555ee7c5a1d" containerName="registry-server" probeResult="failure" output=< Mar 13 11:18:00 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:18:00 crc kubenswrapper[4632]: > Mar 13 11:18:00 crc kubenswrapper[4632]: I0313 11:18:00.164443 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556678-kxcj9"] Mar 13 11:18:00 crc kubenswrapper[4632]: I0313 11:18:00.166383 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556678-kxcj9" Mar 13 11:18:00 crc kubenswrapper[4632]: I0313 11:18:00.169665 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:18:00 crc kubenswrapper[4632]: I0313 11:18:00.169726 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:18:00 crc kubenswrapper[4632]: I0313 11:18:00.170117 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:18:00 crc kubenswrapper[4632]: I0313 11:18:00.185991 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556678-kxcj9"] Mar 13 11:18:00 crc kubenswrapper[4632]: I0313 11:18:00.225932 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd7bd\" (UniqueName: \"kubernetes.io/projected/feb87e78-c7fb-4997-869e-1e652f57ffe9-kube-api-access-zd7bd\") pod \"auto-csr-approver-29556678-kxcj9\" (UID: \"feb87e78-c7fb-4997-869e-1e652f57ffe9\") " pod="openshift-infra/auto-csr-approver-29556678-kxcj9" Mar 13 11:18:00 crc kubenswrapper[4632]: I0313 11:18:00.328325 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd7bd\" (UniqueName: \"kubernetes.io/projected/feb87e78-c7fb-4997-869e-1e652f57ffe9-kube-api-access-zd7bd\") pod \"auto-csr-approver-29556678-kxcj9\" (UID: \"feb87e78-c7fb-4997-869e-1e652f57ffe9\") " pod="openshift-infra/auto-csr-approver-29556678-kxcj9" Mar 13 11:18:00 crc kubenswrapper[4632]: I0313 11:18:00.349595 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd7bd\" (UniqueName: \"kubernetes.io/projected/feb87e78-c7fb-4997-869e-1e652f57ffe9-kube-api-access-zd7bd\") pod \"auto-csr-approver-29556678-kxcj9\" (UID: \"feb87e78-c7fb-4997-869e-1e652f57ffe9\") " pod="openshift-infra/auto-csr-approver-29556678-kxcj9" Mar 13 11:18:00 crc kubenswrapper[4632]: I0313 11:18:00.486028 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556678-kxcj9" Mar 13 11:18:01 crc kubenswrapper[4632]: I0313 11:18:01.054410 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:18:01 crc kubenswrapper[4632]: E0313 11:18:01.055032 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:18:01 crc kubenswrapper[4632]: I0313 11:18:01.433806 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556678-kxcj9"] Mar 13 11:18:02 crc kubenswrapper[4632]: I0313 11:18:02.340333 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556678-kxcj9" event={"ID":"feb87e78-c7fb-4997-869e-1e652f57ffe9","Type":"ContainerStarted","Data":"ca5c093b3e400764932de5c65149b0fc21dfec697c6ff3f3c089cc7c7701ab47"} Mar 13 11:18:03 crc kubenswrapper[4632]: I0313 11:18:03.352051 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556678-kxcj9" event={"ID":"feb87e78-c7fb-4997-869e-1e652f57ffe9","Type":"ContainerStarted","Data":"d5beb81ff52cba139334670e53dbdbf15336383f8949e10d3fb4d56429b9cd89"} Mar 13 11:18:05 crc kubenswrapper[4632]: I0313 11:18:05.381260 4632 generic.go:334] "Generic (PLEG): container finished" podID="feb87e78-c7fb-4997-869e-1e652f57ffe9" containerID="d5beb81ff52cba139334670e53dbdbf15336383f8949e10d3fb4d56429b9cd89" exitCode=0 Mar 13 11:18:05 crc kubenswrapper[4632]: I0313 11:18:05.381330 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556678-kxcj9" event={"ID":"feb87e78-c7fb-4997-869e-1e652f57ffe9","Type":"ContainerDied","Data":"d5beb81ff52cba139334670e53dbdbf15336383f8949e10d3fb4d56429b9cd89"} Mar 13 11:18:06 crc kubenswrapper[4632]: I0313 11:18:06.814234 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556678-kxcj9" Mar 13 11:18:06 crc kubenswrapper[4632]: I0313 11:18:06.904516 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd7bd\" (UniqueName: \"kubernetes.io/projected/feb87e78-c7fb-4997-869e-1e652f57ffe9-kube-api-access-zd7bd\") pod \"feb87e78-c7fb-4997-869e-1e652f57ffe9\" (UID: \"feb87e78-c7fb-4997-869e-1e652f57ffe9\") " Mar 13 11:18:06 crc kubenswrapper[4632]: I0313 11:18:06.911387 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb87e78-c7fb-4997-869e-1e652f57ffe9-kube-api-access-zd7bd" (OuterVolumeSpecName: "kube-api-access-zd7bd") pod "feb87e78-c7fb-4997-869e-1e652f57ffe9" (UID: "feb87e78-c7fb-4997-869e-1e652f57ffe9"). InnerVolumeSpecName "kube-api-access-zd7bd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:18:07 crc kubenswrapper[4632]: I0313 11:18:07.007215 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd7bd\" (UniqueName: \"kubernetes.io/projected/feb87e78-c7fb-4997-869e-1e652f57ffe9-kube-api-access-zd7bd\") on node \"crc\" DevicePath \"\"" Mar 13 11:18:07 crc kubenswrapper[4632]: I0313 11:18:07.407672 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556678-kxcj9" event={"ID":"feb87e78-c7fb-4997-869e-1e652f57ffe9","Type":"ContainerDied","Data":"ca5c093b3e400764932de5c65149b0fc21dfec697c6ff3f3c089cc7c7701ab47"} Mar 13 11:18:07 crc kubenswrapper[4632]: I0313 11:18:07.407740 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca5c093b3e400764932de5c65149b0fc21dfec697c6ff3f3c089cc7c7701ab47" Mar 13 11:18:07 crc kubenswrapper[4632]: I0313 11:18:07.407825 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556678-kxcj9" Mar 13 11:18:07 crc kubenswrapper[4632]: I0313 11:18:07.491457 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556672-sd9zg"] Mar 13 11:18:07 crc kubenswrapper[4632]: I0313 11:18:07.499373 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556672-sd9zg"] Mar 13 11:18:08 crc kubenswrapper[4632]: I0313 11:18:08.067499 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8" path="/var/lib/kubelet/pods/bc83f2bf-bfdf-4426-a5a6-d1299a5b4da8/volumes" Mar 13 11:18:10 crc kubenswrapper[4632]: I0313 11:18:10.105072 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gcqr2" podUID="27c3c199-cc34-438a-ac59-4555ee7c5a1d" containerName="registry-server" probeResult="failure" output=< Mar 13 11:18:10 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:18:10 crc kubenswrapper[4632]: > Mar 13 11:18:12 crc kubenswrapper[4632]: I0313 11:18:12.044843 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:18:12 crc kubenswrapper[4632]: E0313 11:18:12.046054 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:18:17 crc kubenswrapper[4632]: I0313 11:18:17.365087 4632 scope.go:117] "RemoveContainer" containerID="5f687cba4c29fe06e8932802cf25f9e44ab270587540acfd69f26e45b584a52b" Mar 13 11:18:20 crc kubenswrapper[4632]: I0313 11:18:20.142155 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gcqr2" podUID="27c3c199-cc34-438a-ac59-4555ee7c5a1d" containerName="registry-server" probeResult="failure" output=< Mar 13 11:18:20 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:18:20 crc kubenswrapper[4632]: > Mar 13 11:18:23 crc kubenswrapper[4632]: I0313 11:18:23.045817 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:18:23 
crc kubenswrapper[4632]: E0313 11:18:23.046655 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:18:29 crc kubenswrapper[4632]: I0313 11:18:29.128259 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:18:29 crc kubenswrapper[4632]: I0313 11:18:29.199063 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:18:29 crc kubenswrapper[4632]: I0313 11:18:29.382138 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gcqr2"] Mar 13 11:18:30 crc kubenswrapper[4632]: I0313 11:18:30.706155 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gcqr2" podUID="27c3c199-cc34-438a-ac59-4555ee7c5a1d" containerName="registry-server" containerID="cri-o://098f5138f07f45ec1bf0ad95c6647f82c5303be4fdf81f24941c5f5d332834fa" gracePeriod=2 Mar 13 11:18:31 crc kubenswrapper[4632]: I0313 11:18:31.729568 4632 generic.go:334] "Generic (PLEG): container finished" podID="27c3c199-cc34-438a-ac59-4555ee7c5a1d" containerID="098f5138f07f45ec1bf0ad95c6647f82c5303be4fdf81f24941c5f5d332834fa" exitCode=0 Mar 13 11:18:31 crc kubenswrapper[4632]: I0313 11:18:31.730008 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcqr2" event={"ID":"27c3c199-cc34-438a-ac59-4555ee7c5a1d","Type":"ContainerDied","Data":"098f5138f07f45ec1bf0ad95c6647f82c5303be4fdf81f24941c5f5d332834fa"} Mar 13 11:18:31 crc kubenswrapper[4632]: I0313 11:18:31.730050 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcqr2" event={"ID":"27c3c199-cc34-438a-ac59-4555ee7c5a1d","Type":"ContainerDied","Data":"507ec2a71889920853d24af4b68882a43ebcc431eef7fce060f7fd2fe2c9a9ee"} Mar 13 11:18:31 crc kubenswrapper[4632]: I0313 11:18:31.730063 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="507ec2a71889920853d24af4b68882a43ebcc431eef7fce060f7fd2fe2c9a9ee" Mar 13 11:18:31 crc kubenswrapper[4632]: I0313 11:18:31.788570 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:18:31 crc kubenswrapper[4632]: I0313 11:18:31.858789 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27c3c199-cc34-438a-ac59-4555ee7c5a1d-catalog-content\") pod \"27c3c199-cc34-438a-ac59-4555ee7c5a1d\" (UID: \"27c3c199-cc34-438a-ac59-4555ee7c5a1d\") " Mar 13 11:18:31 crc kubenswrapper[4632]: I0313 11:18:31.858840 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27c3c199-cc34-438a-ac59-4555ee7c5a1d-utilities\") pod \"27c3c199-cc34-438a-ac59-4555ee7c5a1d\" (UID: \"27c3c199-cc34-438a-ac59-4555ee7c5a1d\") " Mar 13 11:18:31 crc kubenswrapper[4632]: I0313 11:18:31.858930 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x27fk\" (UniqueName: \"kubernetes.io/projected/27c3c199-cc34-438a-ac59-4555ee7c5a1d-kube-api-access-x27fk\") pod \"27c3c199-cc34-438a-ac59-4555ee7c5a1d\" (UID: \"27c3c199-cc34-438a-ac59-4555ee7c5a1d\") " Mar 13 11:18:31 crc kubenswrapper[4632]: I0313 11:18:31.859632 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27c3c199-cc34-438a-ac59-4555ee7c5a1d-utilities" (OuterVolumeSpecName: "utilities") pod "27c3c199-cc34-438a-ac59-4555ee7c5a1d" (UID: "27c3c199-cc34-438a-ac59-4555ee7c5a1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:18:31 crc kubenswrapper[4632]: I0313 11:18:31.873339 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c3c199-cc34-438a-ac59-4555ee7c5a1d-kube-api-access-x27fk" (OuterVolumeSpecName: "kube-api-access-x27fk") pod "27c3c199-cc34-438a-ac59-4555ee7c5a1d" (UID: "27c3c199-cc34-438a-ac59-4555ee7c5a1d"). InnerVolumeSpecName "kube-api-access-x27fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:18:31 crc kubenswrapper[4632]: I0313 11:18:31.960599 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x27fk\" (UniqueName: \"kubernetes.io/projected/27c3c199-cc34-438a-ac59-4555ee7c5a1d-kube-api-access-x27fk\") on node \"crc\" DevicePath \"\"" Mar 13 11:18:31 crc kubenswrapper[4632]: I0313 11:18:31.960628 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27c3c199-cc34-438a-ac59-4555ee7c5a1d-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:18:32 crc kubenswrapper[4632]: I0313 11:18:32.006586 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27c3c199-cc34-438a-ac59-4555ee7c5a1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27c3c199-cc34-438a-ac59-4555ee7c5a1d" (UID: "27c3c199-cc34-438a-ac59-4555ee7c5a1d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:18:32 crc kubenswrapper[4632]: I0313 11:18:32.061985 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27c3c199-cc34-438a-ac59-4555ee7c5a1d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:18:32 crc kubenswrapper[4632]: I0313 11:18:32.738970 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gcqr2" Mar 13 11:18:32 crc kubenswrapper[4632]: I0313 11:18:32.769010 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gcqr2"] Mar 13 11:18:32 crc kubenswrapper[4632]: I0313 11:18:32.785009 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gcqr2"] Mar 13 11:18:34 crc kubenswrapper[4632]: I0313 11:18:34.073902 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27c3c199-cc34-438a-ac59-4555ee7c5a1d" path="/var/lib/kubelet/pods/27c3c199-cc34-438a-ac59-4555ee7c5a1d/volumes" Mar 13 11:18:35 crc kubenswrapper[4632]: I0313 11:18:35.044459 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:18:35 crc kubenswrapper[4632]: E0313 11:18:35.045175 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:18:48 crc kubenswrapper[4632]: I0313 11:18:48.060092 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:18:48 crc kubenswrapper[4632]: E0313 11:18:48.061043 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:19:03 crc kubenswrapper[4632]: I0313 11:19:03.044530 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:19:03 crc kubenswrapper[4632]: E0313 11:19:03.045850 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:19:16 crc kubenswrapper[4632]: I0313 11:19:16.046606 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:19:16 crc kubenswrapper[4632]: E0313 11:19:16.047868 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:19:29 crc kubenswrapper[4632]: I0313 11:19:29.045016 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:19:29 crc 
kubenswrapper[4632]: E0313 11:19:29.045774 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:19:44 crc kubenswrapper[4632]: I0313 11:19:44.045196 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:19:44 crc kubenswrapper[4632]: E0313 11:19:44.046089 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:19:55 crc kubenswrapper[4632]: I0313 11:19:55.044213 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:19:55 crc kubenswrapper[4632]: E0313 11:19:55.045298 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.140511 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556680-g7sw8"] Mar 13 11:20:00 crc kubenswrapper[4632]: E0313 11:20:00.141303 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c3c199-cc34-438a-ac59-4555ee7c5a1d" containerName="extract-content" Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.141315 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c3c199-cc34-438a-ac59-4555ee7c5a1d" containerName="extract-content" Mar 13 11:20:00 crc kubenswrapper[4632]: E0313 11:20:00.141328 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb87e78-c7fb-4997-869e-1e652f57ffe9" containerName="oc" Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.141333 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb87e78-c7fb-4997-869e-1e652f57ffe9" containerName="oc" Mar 13 11:20:00 crc kubenswrapper[4632]: E0313 11:20:00.141341 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c3c199-cc34-438a-ac59-4555ee7c5a1d" containerName="extract-utilities" Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.141347 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c3c199-cc34-438a-ac59-4555ee7c5a1d" containerName="extract-utilities" Mar 13 11:20:00 crc kubenswrapper[4632]: E0313 11:20:00.141370 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c3c199-cc34-438a-ac59-4555ee7c5a1d" containerName="registry-server" Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.141376 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c3c199-cc34-438a-ac59-4555ee7c5a1d" containerName="registry-server" 
Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.141561 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb87e78-c7fb-4997-869e-1e652f57ffe9" containerName="oc" Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.141573 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c3c199-cc34-438a-ac59-4555ee7c5a1d" containerName="registry-server" Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.142218 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556680-g7sw8" Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.153227 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556680-g7sw8"] Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.171810 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.172075 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.172221 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.299554 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d29qr\" (UniqueName: \"kubernetes.io/projected/34a7cf92-d429-468d-9eff-e76b0302dee4-kube-api-access-d29qr\") pod \"auto-csr-approver-29556680-g7sw8\" (UID: \"34a7cf92-d429-468d-9eff-e76b0302dee4\") " pod="openshift-infra/auto-csr-approver-29556680-g7sw8" Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.401793 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d29qr\" (UniqueName: \"kubernetes.io/projected/34a7cf92-d429-468d-9eff-e76b0302dee4-kube-api-access-d29qr\") pod \"auto-csr-approver-29556680-g7sw8\" (UID: \"34a7cf92-d429-468d-9eff-e76b0302dee4\") " pod="openshift-infra/auto-csr-approver-29556680-g7sw8" Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.431302 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d29qr\" (UniqueName: \"kubernetes.io/projected/34a7cf92-d429-468d-9eff-e76b0302dee4-kube-api-access-d29qr\") pod \"auto-csr-approver-29556680-g7sw8\" (UID: \"34a7cf92-d429-468d-9eff-e76b0302dee4\") " pod="openshift-infra/auto-csr-approver-29556680-g7sw8" Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.491430 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556680-g7sw8" Mar 13 11:20:00 crc kubenswrapper[4632]: I0313 11:20:00.992909 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556680-g7sw8"] Mar 13 11:20:01 crc kubenswrapper[4632]: I0313 11:20:01.010252 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 11:20:01 crc kubenswrapper[4632]: I0313 11:20:01.062396 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556680-g7sw8" event={"ID":"34a7cf92-d429-468d-9eff-e76b0302dee4","Type":"ContainerStarted","Data":"aad306172ca9a7df97cbb17f5fba2e3d8adc8878f978f0d8c1b464a869d49dd3"} Mar 13 11:20:03 crc kubenswrapper[4632]: I0313 11:20:03.092972 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556680-g7sw8" event={"ID":"34a7cf92-d429-468d-9eff-e76b0302dee4","Type":"ContainerStarted","Data":"a287e319f36103c4becc462637b974d036358eff92b92ae569b32780de4efe87"} Mar 13 11:20:04 crc kubenswrapper[4632]: I0313 11:20:04.104429 4632 generic.go:334] "Generic (PLEG): container finished" podID="34a7cf92-d429-468d-9eff-e76b0302dee4" containerID="a287e319f36103c4becc462637b974d036358eff92b92ae569b32780de4efe87" exitCode=0 Mar 13 11:20:04 crc kubenswrapper[4632]: I0313 11:20:04.104651 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556680-g7sw8" event={"ID":"34a7cf92-d429-468d-9eff-e76b0302dee4","Type":"ContainerDied","Data":"a287e319f36103c4becc462637b974d036358eff92b92ae569b32780de4efe87"} Mar 13 11:20:05 crc kubenswrapper[4632]: I0313 11:20:05.569672 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556680-g7sw8" Mar 13 11:20:05 crc kubenswrapper[4632]: I0313 11:20:05.738867 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d29qr\" (UniqueName: \"kubernetes.io/projected/34a7cf92-d429-468d-9eff-e76b0302dee4-kube-api-access-d29qr\") pod \"34a7cf92-d429-468d-9eff-e76b0302dee4\" (UID: \"34a7cf92-d429-468d-9eff-e76b0302dee4\") " Mar 13 11:20:05 crc kubenswrapper[4632]: I0313 11:20:05.750361 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a7cf92-d429-468d-9eff-e76b0302dee4-kube-api-access-d29qr" (OuterVolumeSpecName: "kube-api-access-d29qr") pod "34a7cf92-d429-468d-9eff-e76b0302dee4" (UID: "34a7cf92-d429-468d-9eff-e76b0302dee4"). InnerVolumeSpecName "kube-api-access-d29qr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:20:05 crc kubenswrapper[4632]: I0313 11:20:05.841186 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d29qr\" (UniqueName: \"kubernetes.io/projected/34a7cf92-d429-468d-9eff-e76b0302dee4-kube-api-access-d29qr\") on node \"crc\" DevicePath \"\"" Mar 13 11:20:06 crc kubenswrapper[4632]: I0313 11:20:06.044506 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:20:06 crc kubenswrapper[4632]: E0313 11:20:06.044838 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:20:06 crc kubenswrapper[4632]: I0313 11:20:06.124103 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556680-g7sw8" event={"ID":"34a7cf92-d429-468d-9eff-e76b0302dee4","Type":"ContainerDied","Data":"aad306172ca9a7df97cbb17f5fba2e3d8adc8878f978f0d8c1b464a869d49dd3"} Mar 13 11:20:06 crc kubenswrapper[4632]: I0313 11:20:06.124153 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aad306172ca9a7df97cbb17f5fba2e3d8adc8878f978f0d8c1b464a869d49dd3" Mar 13 11:20:06 crc kubenswrapper[4632]: I0313 11:20:06.124217 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556680-g7sw8" Mar 13 11:20:06 crc kubenswrapper[4632]: I0313 11:20:06.641977 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556674-j6lkl"] Mar 13 11:20:06 crc kubenswrapper[4632]: I0313 11:20:06.653881 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556674-j6lkl"] Mar 13 11:20:08 crc kubenswrapper[4632]: I0313 11:20:08.062047 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2607e7bb-5f81-48cf-945a-6dee68b60040" path="/var/lib/kubelet/pods/2607e7bb-5f81-48cf-945a-6dee68b60040/volumes" Mar 13 11:20:18 crc kubenswrapper[4632]: I0313 11:20:18.176722 4632 scope.go:117] "RemoveContainer" containerID="54b5c748d72d1466e81773f53e5a61ff5e546e63d80706f8982cd195e971c601" Mar 13 11:20:20 crc kubenswrapper[4632]: I0313 11:20:20.044167 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:20:20 crc kubenswrapper[4632]: E0313 11:20:20.044678 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:20:34 crc kubenswrapper[4632]: I0313 11:20:34.044752 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:20:34 crc kubenswrapper[4632]: E0313 11:20:34.045750 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:20:49 crc kubenswrapper[4632]: I0313 11:20:49.044109 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:20:49 crc kubenswrapper[4632]: E0313 11:20:49.045079 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:21:01 crc kubenswrapper[4632]: I0313 11:21:01.044361 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:21:01 crc kubenswrapper[4632]: E0313 11:21:01.045226 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:21:13 crc kubenswrapper[4632]: I0313 11:21:13.044995 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:21:13 crc kubenswrapper[4632]: E0313 11:21:13.046251 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:21:28 crc kubenswrapper[4632]: I0313 11:21:28.058447 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:21:28 crc kubenswrapper[4632]: E0313 11:21:28.059470 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:21:42 crc kubenswrapper[4632]: I0313 11:21:42.044693 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:21:42 crc kubenswrapper[4632]: E0313 11:21:42.046868 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:21:56 crc kubenswrapper[4632]: I0313 11:21:56.044408 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:21:56 crc kubenswrapper[4632]: E0313 11:21:56.045144 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:22:00 crc kubenswrapper[4632]: I0313 11:22:00.152139 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556682-pm7lm"] Mar 13 11:22:00 crc kubenswrapper[4632]: E0313 11:22:00.152890 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a7cf92-d429-468d-9eff-e76b0302dee4" containerName="oc" Mar 13 11:22:00 crc kubenswrapper[4632]: I0313 11:22:00.152902 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a7cf92-d429-468d-9eff-e76b0302dee4" containerName="oc" Mar 13 11:22:00 crc kubenswrapper[4632]: I0313 11:22:00.153175 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a7cf92-d429-468d-9eff-e76b0302dee4" containerName="oc" Mar 13 11:22:00 crc kubenswrapper[4632]: I0313 11:22:00.153884 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556682-pm7lm" Mar 13 11:22:00 crc kubenswrapper[4632]: I0313 11:22:00.160317 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556682-pm7lm"] Mar 13 11:22:00 crc kubenswrapper[4632]: I0313 11:22:00.162256 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:22:00 crc kubenswrapper[4632]: I0313 11:22:00.162539 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:22:00 crc kubenswrapper[4632]: I0313 11:22:00.162865 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:22:00 crc kubenswrapper[4632]: I0313 11:22:00.254434 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgkd8\" (UniqueName: \"kubernetes.io/projected/191fb79a-448d-4181-8346-f9dec8721d81-kube-api-access-cgkd8\") pod \"auto-csr-approver-29556682-pm7lm\" (UID: \"191fb79a-448d-4181-8346-f9dec8721d81\") " pod="openshift-infra/auto-csr-approver-29556682-pm7lm" Mar 13 11:22:00 crc kubenswrapper[4632]: I0313 11:22:00.356022 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgkd8\" (UniqueName: \"kubernetes.io/projected/191fb79a-448d-4181-8346-f9dec8721d81-kube-api-access-cgkd8\") pod \"auto-csr-approver-29556682-pm7lm\" (UID: \"191fb79a-448d-4181-8346-f9dec8721d81\") " pod="openshift-infra/auto-csr-approver-29556682-pm7lm" Mar 13 11:22:01 crc kubenswrapper[4632]: I0313 11:22:01.070704 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgkd8\" (UniqueName: 
\"kubernetes.io/projected/191fb79a-448d-4181-8346-f9dec8721d81-kube-api-access-cgkd8\") pod \"auto-csr-approver-29556682-pm7lm\" (UID: \"191fb79a-448d-4181-8346-f9dec8721d81\") " pod="openshift-infra/auto-csr-approver-29556682-pm7lm" Mar 13 11:22:01 crc kubenswrapper[4632]: I0313 11:22:01.078117 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556682-pm7lm" Mar 13 11:22:01 crc kubenswrapper[4632]: I0313 11:22:01.648217 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556682-pm7lm"] Mar 13 11:22:02 crc kubenswrapper[4632]: I0313 11:22:02.428731 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556682-pm7lm" event={"ID":"191fb79a-448d-4181-8346-f9dec8721d81","Type":"ContainerStarted","Data":"9cac74b2414d34c5abeccf3930f3da4938ea48597909f1e379cb0c2aacbc5dbd"} Mar 13 11:22:04 crc kubenswrapper[4632]: I0313 11:22:04.450831 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556682-pm7lm" event={"ID":"191fb79a-448d-4181-8346-f9dec8721d81","Type":"ContainerStarted","Data":"6b6b471905ed6fd6c16476a8c40a8d65b889486b4d10fadb0c4b7b6cf7a150be"} Mar 13 11:22:04 crc kubenswrapper[4632]: I0313 11:22:04.475678 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556682-pm7lm" podStartSLOduration=3.423013903 podStartE2EDuration="4.475658595s" podCreationTimestamp="2026-03-13 11:22:00 +0000 UTC" firstStartedPulling="2026-03-13 11:22:01.657163303 +0000 UTC m=+4695.679693436" lastFinishedPulling="2026-03-13 11:22:02.709808005 +0000 UTC m=+4696.732338128" observedRunningTime="2026-03-13 11:22:04.474456005 +0000 UTC m=+4698.496986138" watchObservedRunningTime="2026-03-13 11:22:04.475658595 +0000 UTC m=+4698.498188728" Mar 13 11:22:06 crc kubenswrapper[4632]: I0313 11:22:06.479784 4632 generic.go:334] "Generic (PLEG): container finished" podID="191fb79a-448d-4181-8346-f9dec8721d81" containerID="6b6b471905ed6fd6c16476a8c40a8d65b889486b4d10fadb0c4b7b6cf7a150be" exitCode=0 Mar 13 11:22:06 crc kubenswrapper[4632]: I0313 11:22:06.479882 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556682-pm7lm" event={"ID":"191fb79a-448d-4181-8346-f9dec8721d81","Type":"ContainerDied","Data":"6b6b471905ed6fd6c16476a8c40a8d65b889486b4d10fadb0c4b7b6cf7a150be"} Mar 13 11:22:07 crc kubenswrapper[4632]: I0313 11:22:07.985383 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556682-pm7lm" Mar 13 11:22:08 crc kubenswrapper[4632]: I0313 11:22:08.057280 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:22:08 crc kubenswrapper[4632]: E0313 11:22:08.057783 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:22:08 crc kubenswrapper[4632]: I0313 11:22:08.117816 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgkd8\" (UniqueName: \"kubernetes.io/projected/191fb79a-448d-4181-8346-f9dec8721d81-kube-api-access-cgkd8\") pod \"191fb79a-448d-4181-8346-f9dec8721d81\" (UID: \"191fb79a-448d-4181-8346-f9dec8721d81\") " Mar 13 11:22:08 crc kubenswrapper[4632]: I0313 11:22:08.125186 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/191fb79a-448d-4181-8346-f9dec8721d81-kube-api-access-cgkd8" (OuterVolumeSpecName: "kube-api-access-cgkd8") pod "191fb79a-448d-4181-8346-f9dec8721d81" (UID: "191fb79a-448d-4181-8346-f9dec8721d81"). InnerVolumeSpecName "kube-api-access-cgkd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:22:08 crc kubenswrapper[4632]: I0313 11:22:08.221192 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgkd8\" (UniqueName: \"kubernetes.io/projected/191fb79a-448d-4181-8346-f9dec8721d81-kube-api-access-cgkd8\") on node \"crc\" DevicePath \"\"" Mar 13 11:22:08 crc kubenswrapper[4632]: I0313 11:22:08.503909 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556682-pm7lm" event={"ID":"191fb79a-448d-4181-8346-f9dec8721d81","Type":"ContainerDied","Data":"9cac74b2414d34c5abeccf3930f3da4938ea48597909f1e379cb0c2aacbc5dbd"} Mar 13 11:22:08 crc kubenswrapper[4632]: I0313 11:22:08.503962 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cac74b2414d34c5abeccf3930f3da4938ea48597909f1e379cb0c2aacbc5dbd" Mar 13 11:22:08 crc kubenswrapper[4632]: I0313 11:22:08.504021 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556682-pm7lm" Mar 13 11:22:08 crc kubenswrapper[4632]: I0313 11:22:08.573050 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556676-8qz49"] Mar 13 11:22:08 crc kubenswrapper[4632]: I0313 11:22:08.580560 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556676-8qz49"] Mar 13 11:22:10 crc kubenswrapper[4632]: I0313 11:22:10.057873 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911" path="/var/lib/kubelet/pods/f97cdaa5-f90d-4cd0-9b62-7bb0c3b41911/volumes" Mar 13 11:22:18 crc kubenswrapper[4632]: I0313 11:22:18.303087 4632 scope.go:117] "RemoveContainer" containerID="1913657413fffb5d7b6f0c5a32e25db59682a49e52d39a0adc600808b4a0def3" Mar 13 11:22:22 crc kubenswrapper[4632]: I0313 11:22:22.044583 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:22:22 crc kubenswrapper[4632]: I0313 11:22:22.710264 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"49682358d72adf3dbebb4a70c4dbc847548d4046ae5ef96f55f2ae4dfd58b9f9"} Mar 13 11:24:00 crc kubenswrapper[4632]: I0313 11:24:00.165952 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556684-bd9st"] Mar 13 11:24:00 crc kubenswrapper[4632]: E0313 11:24:00.166819 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="191fb79a-448d-4181-8346-f9dec8721d81" containerName="oc" Mar 13 11:24:00 crc kubenswrapper[4632]: I0313 11:24:00.166830 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="191fb79a-448d-4181-8346-f9dec8721d81" containerName="oc" Mar 13 11:24:00 crc kubenswrapper[4632]: I0313 11:24:00.167191 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="191fb79a-448d-4181-8346-f9dec8721d81" containerName="oc" Mar 13 11:24:00 crc kubenswrapper[4632]: I0313 11:24:00.175372 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556684-bd9st" Mar 13 11:24:00 crc kubenswrapper[4632]: I0313 11:24:00.177378 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:24:00 crc kubenswrapper[4632]: I0313 11:24:00.182924 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556684-bd9st"] Mar 13 11:24:00 crc kubenswrapper[4632]: I0313 11:24:00.193389 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:24:00 crc kubenswrapper[4632]: I0313 11:24:00.193653 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:24:00 crc kubenswrapper[4632]: I0313 11:24:00.255424 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh6ln\" (UniqueName: \"kubernetes.io/projected/fb45f1ce-58e0-4f55-afd6-2e14db5f24ca-kube-api-access-rh6ln\") pod \"auto-csr-approver-29556684-bd9st\" (UID: \"fb45f1ce-58e0-4f55-afd6-2e14db5f24ca\") " pod="openshift-infra/auto-csr-approver-29556684-bd9st" Mar 13 11:24:00 crc kubenswrapper[4632]: I0313 11:24:00.356889 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh6ln\" (UniqueName: \"kubernetes.io/projected/fb45f1ce-58e0-4f55-afd6-2e14db5f24ca-kube-api-access-rh6ln\") pod \"auto-csr-approver-29556684-bd9st\" (UID: \"fb45f1ce-58e0-4f55-afd6-2e14db5f24ca\") " pod="openshift-infra/auto-csr-approver-29556684-bd9st" Mar 13 11:24:00 crc kubenswrapper[4632]: I0313 11:24:00.392883 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh6ln\" (UniqueName: \"kubernetes.io/projected/fb45f1ce-58e0-4f55-afd6-2e14db5f24ca-kube-api-access-rh6ln\") pod \"auto-csr-approver-29556684-bd9st\" (UID: \"fb45f1ce-58e0-4f55-afd6-2e14db5f24ca\") " pod="openshift-infra/auto-csr-approver-29556684-bd9st" Mar 13 11:24:00 crc kubenswrapper[4632]: I0313 11:24:00.517721 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556684-bd9st" Mar 13 11:24:01 crc kubenswrapper[4632]: I0313 11:24:01.143205 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556684-bd9st"] Mar 13 11:24:01 crc kubenswrapper[4632]: I0313 11:24:01.905220 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556684-bd9st" event={"ID":"fb45f1ce-58e0-4f55-afd6-2e14db5f24ca","Type":"ContainerStarted","Data":"9a6bcaf29573d59c8877c5b6b24a62abc2b6f3f927437f93288a174ead4c9ab5"} Mar 13 11:24:02 crc kubenswrapper[4632]: I0313 11:24:02.918425 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556684-bd9st" event={"ID":"fb45f1ce-58e0-4f55-afd6-2e14db5f24ca","Type":"ContainerStarted","Data":"2af97c9efffc6f3dc7413dcff6c97889a640ef442506af8ad264876a675427dc"} Mar 13 11:24:02 crc kubenswrapper[4632]: I0313 11:24:02.941244 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556684-bd9st" podStartSLOduration=2.024133913 podStartE2EDuration="2.941222131s" podCreationTimestamp="2026-03-13 11:24:00 +0000 UTC" firstStartedPulling="2026-03-13 11:24:01.152431654 +0000 UTC m=+4815.174961787" lastFinishedPulling="2026-03-13 11:24:02.069519872 +0000 UTC m=+4816.092050005" observedRunningTime="2026-03-13 11:24:02.936216567 +0000 UTC m=+4816.958746710" watchObservedRunningTime="2026-03-13 11:24:02.941222131 +0000 UTC m=+4816.963752264" Mar 13 11:24:03 crc kubenswrapper[4632]: I0313 11:24:03.928606 4632 generic.go:334] "Generic (PLEG): container finished" podID="fb45f1ce-58e0-4f55-afd6-2e14db5f24ca" containerID="2af97c9efffc6f3dc7413dcff6c97889a640ef442506af8ad264876a675427dc" exitCode=0 Mar 13 11:24:03 crc kubenswrapper[4632]: I0313 11:24:03.928662 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556684-bd9st" event={"ID":"fb45f1ce-58e0-4f55-afd6-2e14db5f24ca","Type":"ContainerDied","Data":"2af97c9efffc6f3dc7413dcff6c97889a640ef442506af8ad264876a675427dc"} Mar 13 11:24:05 crc kubenswrapper[4632]: I0313 11:24:05.340962 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556684-bd9st" Mar 13 11:24:05 crc kubenswrapper[4632]: I0313 11:24:05.456744 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh6ln\" (UniqueName: \"kubernetes.io/projected/fb45f1ce-58e0-4f55-afd6-2e14db5f24ca-kube-api-access-rh6ln\") pod \"fb45f1ce-58e0-4f55-afd6-2e14db5f24ca\" (UID: \"fb45f1ce-58e0-4f55-afd6-2e14db5f24ca\") " Mar 13 11:24:05 crc kubenswrapper[4632]: I0313 11:24:05.464123 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb45f1ce-58e0-4f55-afd6-2e14db5f24ca-kube-api-access-rh6ln" (OuterVolumeSpecName: "kube-api-access-rh6ln") pod "fb45f1ce-58e0-4f55-afd6-2e14db5f24ca" (UID: "fb45f1ce-58e0-4f55-afd6-2e14db5f24ca"). InnerVolumeSpecName "kube-api-access-rh6ln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:24:05 crc kubenswrapper[4632]: I0313 11:24:05.559734 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh6ln\" (UniqueName: \"kubernetes.io/projected/fb45f1ce-58e0-4f55-afd6-2e14db5f24ca-kube-api-access-rh6ln\") on node \"crc\" DevicePath \"\"" Mar 13 11:24:05 crc kubenswrapper[4632]: I0313 11:24:05.958986 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556684-bd9st" event={"ID":"fb45f1ce-58e0-4f55-afd6-2e14db5f24ca","Type":"ContainerDied","Data":"9a6bcaf29573d59c8877c5b6b24a62abc2b6f3f927437f93288a174ead4c9ab5"} Mar 13 11:24:05 crc kubenswrapper[4632]: I0313 11:24:05.959066 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a6bcaf29573d59c8877c5b6b24a62abc2b6f3f927437f93288a174ead4c9ab5" Mar 13 11:24:05 crc kubenswrapper[4632]: I0313 11:24:05.959092 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556684-bd9st" Mar 13 11:24:06 crc kubenswrapper[4632]: I0313 11:24:06.023003 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556678-kxcj9"] Mar 13 11:24:06 crc kubenswrapper[4632]: I0313 11:24:06.033868 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556678-kxcj9"] Mar 13 11:24:06 crc kubenswrapper[4632]: I0313 11:24:06.058340 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feb87e78-c7fb-4997-869e-1e652f57ffe9" path="/var/lib/kubelet/pods/feb87e78-c7fb-4997-869e-1e652f57ffe9/volumes" Mar 13 11:24:18 crc kubenswrapper[4632]: I0313 11:24:18.426756 4632 scope.go:117] "RemoveContainer" containerID="d5beb81ff52cba139334670e53dbdbf15336383f8949e10d3fb4d56429b9cd89" Mar 13 11:24:18 crc kubenswrapper[4632]: I0313 11:24:18.547115 4632 scope.go:117] "RemoveContainer" containerID="098f5138f07f45ec1bf0ad95c6647f82c5303be4fdf81f24941c5f5d332834fa" Mar 13 11:24:18 crc kubenswrapper[4632]: I0313 11:24:18.611740 4632 scope.go:117] "RemoveContainer" containerID="f9fc93bd191406b3e302e5390ddb104f245922f0c823cd3aa32e935c54c7057f" Mar 13 11:24:18 crc kubenswrapper[4632]: I0313 11:24:18.660655 4632 scope.go:117] "RemoveContainer" containerID="8fc46f8a96264e119d9565fb1071d38b9eca7de4da0002eb50dc0db108fea360" Mar 13 11:24:40 crc kubenswrapper[4632]: I0313 11:24:40.462471 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:24:40 crc kubenswrapper[4632]: I0313 11:24:40.464839 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:25:10 crc kubenswrapper[4632]: I0313 11:25:10.461609 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:25:10 crc kubenswrapper[4632]: I0313 
11:25:10.462294 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:25:40 crc kubenswrapper[4632]: I0313 11:25:40.461202 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:25:40 crc kubenswrapper[4632]: I0313 11:25:40.461910 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:25:40 crc kubenswrapper[4632]: I0313 11:25:40.461985 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 11:25:40 crc kubenswrapper[4632]: I0313 11:25:40.464767 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"49682358d72adf3dbebb4a70c4dbc847548d4046ae5ef96f55f2ae4dfd58b9f9"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 11:25:40 crc kubenswrapper[4632]: I0313 11:25:40.464854 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://49682358d72adf3dbebb4a70c4dbc847548d4046ae5ef96f55f2ae4dfd58b9f9" gracePeriod=600 Mar 13 11:25:41 crc kubenswrapper[4632]: I0313 11:25:41.014034 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="49682358d72adf3dbebb4a70c4dbc847548d4046ae5ef96f55f2ae4dfd58b9f9" exitCode=0 Mar 13 11:25:41 crc kubenswrapper[4632]: I0313 11:25:41.014097 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"49682358d72adf3dbebb4a70c4dbc847548d4046ae5ef96f55f2ae4dfd58b9f9"} Mar 13 11:25:41 crc kubenswrapper[4632]: I0313 11:25:41.014669 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3"} Mar 13 11:25:41 crc kubenswrapper[4632]: I0313 11:25:41.014697 4632 scope.go:117] "RemoveContainer" containerID="70b8629c42fc12676389ef5c404d5d088a40865f44432caaaf2da0b0a7e69c7a" Mar 13 11:26:00 crc kubenswrapper[4632]: I0313 11:26:00.191662 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556686-v9c2p"] Mar 13 11:26:00 crc kubenswrapper[4632]: E0313 11:26:00.193929 4632 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fb45f1ce-58e0-4f55-afd6-2e14db5f24ca" containerName="oc" Mar 13 11:26:00 crc kubenswrapper[4632]: I0313 11:26:00.194079 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb45f1ce-58e0-4f55-afd6-2e14db5f24ca" containerName="oc" Mar 13 11:26:00 crc kubenswrapper[4632]: I0313 11:26:00.194445 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb45f1ce-58e0-4f55-afd6-2e14db5f24ca" containerName="oc" Mar 13 11:26:00 crc kubenswrapper[4632]: I0313 11:26:00.199706 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556686-v9c2p" Mar 13 11:26:00 crc kubenswrapper[4632]: I0313 11:26:00.205870 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:26:00 crc kubenswrapper[4632]: I0313 11:26:00.206034 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:26:00 crc kubenswrapper[4632]: I0313 11:26:00.206154 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:26:00 crc kubenswrapper[4632]: I0313 11:26:00.208163 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556686-v9c2p"] Mar 13 11:26:00 crc kubenswrapper[4632]: I0313 11:26:00.300677 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vjfs\" (UniqueName: \"kubernetes.io/projected/78358d6e-af24-4da5-8c77-9453e6228cda-kube-api-access-5vjfs\") pod \"auto-csr-approver-29556686-v9c2p\" (UID: \"78358d6e-af24-4da5-8c77-9453e6228cda\") " pod="openshift-infra/auto-csr-approver-29556686-v9c2p" Mar 13 11:26:00 crc kubenswrapper[4632]: I0313 11:26:00.402958 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vjfs\" (UniqueName: \"kubernetes.io/projected/78358d6e-af24-4da5-8c77-9453e6228cda-kube-api-access-5vjfs\") pod \"auto-csr-approver-29556686-v9c2p\" (UID: \"78358d6e-af24-4da5-8c77-9453e6228cda\") " pod="openshift-infra/auto-csr-approver-29556686-v9c2p" Mar 13 11:26:00 crc kubenswrapper[4632]: I0313 11:26:00.429849 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vjfs\" (UniqueName: \"kubernetes.io/projected/78358d6e-af24-4da5-8c77-9453e6228cda-kube-api-access-5vjfs\") pod \"auto-csr-approver-29556686-v9c2p\" (UID: \"78358d6e-af24-4da5-8c77-9453e6228cda\") " pod="openshift-infra/auto-csr-approver-29556686-v9c2p" Mar 13 11:26:00 crc kubenswrapper[4632]: I0313 11:26:00.522537 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556686-v9c2p" Mar 13 11:26:01 crc kubenswrapper[4632]: I0313 11:26:01.304197 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556686-v9c2p"] Mar 13 11:26:01 crc kubenswrapper[4632]: W0313 11:26:01.313656 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78358d6e_af24_4da5_8c77_9453e6228cda.slice/crio-a9e0fec35a927773c559298dfd634557a6a7a9da41447ae6a5ede71fb7933809 WatchSource:0}: Error finding container a9e0fec35a927773c559298dfd634557a6a7a9da41447ae6a5ede71fb7933809: Status 404 returned error can't find the container with id a9e0fec35a927773c559298dfd634557a6a7a9da41447ae6a5ede71fb7933809 Mar 13 11:26:01 crc kubenswrapper[4632]: I0313 11:26:01.316711 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 11:26:02 crc kubenswrapper[4632]: I0313 11:26:02.274839 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556686-v9c2p" event={"ID":"78358d6e-af24-4da5-8c77-9453e6228cda","Type":"ContainerStarted","Data":"a9e0fec35a927773c559298dfd634557a6a7a9da41447ae6a5ede71fb7933809"} Mar 13 11:26:03 crc kubenswrapper[4632]: I0313 11:26:03.289230 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556686-v9c2p" event={"ID":"78358d6e-af24-4da5-8c77-9453e6228cda","Type":"ContainerStarted","Data":"0306bee576d03326b01d2c08c76cf2909394a8ec9e729a13ca12d86ebb721532"} Mar 13 11:26:03 crc kubenswrapper[4632]: I0313 11:26:03.313823 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556686-v9c2p" podStartSLOduration=2.167040802 podStartE2EDuration="3.313801869s" podCreationTimestamp="2026-03-13 11:26:00 +0000 UTC" firstStartedPulling="2026-03-13 11:26:01.316454439 +0000 UTC m=+4935.338984562" lastFinishedPulling="2026-03-13 11:26:02.463215496 +0000 UTC m=+4936.485745629" observedRunningTime="2026-03-13 11:26:03.304335836 +0000 UTC m=+4937.326865999" watchObservedRunningTime="2026-03-13 11:26:03.313801869 +0000 UTC m=+4937.336332022" Mar 13 11:26:04 crc kubenswrapper[4632]: I0313 11:26:04.298665 4632 generic.go:334] "Generic (PLEG): container finished" podID="78358d6e-af24-4da5-8c77-9453e6228cda" containerID="0306bee576d03326b01d2c08c76cf2909394a8ec9e729a13ca12d86ebb721532" exitCode=0 Mar 13 11:26:04 crc kubenswrapper[4632]: I0313 11:26:04.298795 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556686-v9c2p" event={"ID":"78358d6e-af24-4da5-8c77-9453e6228cda","Type":"ContainerDied","Data":"0306bee576d03326b01d2c08c76cf2909394a8ec9e729a13ca12d86ebb721532"} Mar 13 11:26:05 crc kubenswrapper[4632]: I0313 11:26:05.718452 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556686-v9c2p" Mar 13 11:26:05 crc kubenswrapper[4632]: I0313 11:26:05.809678 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vjfs\" (UniqueName: \"kubernetes.io/projected/78358d6e-af24-4da5-8c77-9453e6228cda-kube-api-access-5vjfs\") pod \"78358d6e-af24-4da5-8c77-9453e6228cda\" (UID: \"78358d6e-af24-4da5-8c77-9453e6228cda\") " Mar 13 11:26:05 crc kubenswrapper[4632]: I0313 11:26:05.833177 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78358d6e-af24-4da5-8c77-9453e6228cda-kube-api-access-5vjfs" (OuterVolumeSpecName: "kube-api-access-5vjfs") pod "78358d6e-af24-4da5-8c77-9453e6228cda" (UID: "78358d6e-af24-4da5-8c77-9453e6228cda"). InnerVolumeSpecName "kube-api-access-5vjfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:26:05 crc kubenswrapper[4632]: I0313 11:26:05.912276 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vjfs\" (UniqueName: \"kubernetes.io/projected/78358d6e-af24-4da5-8c77-9453e6228cda-kube-api-access-5vjfs\") on node \"crc\" DevicePath \"\"" Mar 13 11:26:06 crc kubenswrapper[4632]: I0313 11:26:06.320098 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556686-v9c2p" event={"ID":"78358d6e-af24-4da5-8c77-9453e6228cda","Type":"ContainerDied","Data":"a9e0fec35a927773c559298dfd634557a6a7a9da41447ae6a5ede71fb7933809"} Mar 13 11:26:06 crc kubenswrapper[4632]: I0313 11:26:06.320140 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9e0fec35a927773c559298dfd634557a6a7a9da41447ae6a5ede71fb7933809" Mar 13 11:26:06 crc kubenswrapper[4632]: I0313 11:26:06.320136 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556686-v9c2p" Mar 13 11:26:06 crc kubenswrapper[4632]: I0313 11:26:06.395164 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556680-g7sw8"] Mar 13 11:26:06 crc kubenswrapper[4632]: I0313 11:26:06.403393 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556680-g7sw8"] Mar 13 11:26:08 crc kubenswrapper[4632]: I0313 11:26:08.060696 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a7cf92-d429-468d-9eff-e76b0302dee4" path="/var/lib/kubelet/pods/34a7cf92-d429-468d-9eff-e76b0302dee4/volumes" Mar 13 11:26:16 crc kubenswrapper[4632]: I0313 11:26:16.705058 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tctdq"] Mar 13 11:26:16 crc kubenswrapper[4632]: E0313 11:26:16.706093 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78358d6e-af24-4da5-8c77-9453e6228cda" containerName="oc" Mar 13 11:26:16 crc kubenswrapper[4632]: I0313 11:26:16.706110 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="78358d6e-af24-4da5-8c77-9453e6228cda" containerName="oc" Mar 13 11:26:16 crc kubenswrapper[4632]: I0313 11:26:16.706362 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="78358d6e-af24-4da5-8c77-9453e6228cda" containerName="oc" Mar 13 11:26:16 crc kubenswrapper[4632]: I0313 11:26:16.709933 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:16 crc kubenswrapper[4632]: I0313 11:26:16.728330 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tctdq"] Mar 13 11:26:16 crc kubenswrapper[4632]: I0313 11:26:16.836826 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt6sp\" (UniqueName: \"kubernetes.io/projected/743aacd5-2974-48e5-b2c8-ed448225edc3-kube-api-access-rt6sp\") pod \"redhat-marketplace-tctdq\" (UID: \"743aacd5-2974-48e5-b2c8-ed448225edc3\") " pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:16 crc kubenswrapper[4632]: I0313 11:26:16.836929 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743aacd5-2974-48e5-b2c8-ed448225edc3-utilities\") pod \"redhat-marketplace-tctdq\" (UID: \"743aacd5-2974-48e5-b2c8-ed448225edc3\") " pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:16 crc kubenswrapper[4632]: I0313 11:26:16.837662 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743aacd5-2974-48e5-b2c8-ed448225edc3-catalog-content\") pod \"redhat-marketplace-tctdq\" (UID: \"743aacd5-2974-48e5-b2c8-ed448225edc3\") " pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:16 crc kubenswrapper[4632]: I0313 11:26:16.939580 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743aacd5-2974-48e5-b2c8-ed448225edc3-utilities\") pod \"redhat-marketplace-tctdq\" (UID: \"743aacd5-2974-48e5-b2c8-ed448225edc3\") " pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:16 crc kubenswrapper[4632]: I0313 11:26:16.939648 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743aacd5-2974-48e5-b2c8-ed448225edc3-catalog-content\") pod \"redhat-marketplace-tctdq\" (UID: \"743aacd5-2974-48e5-b2c8-ed448225edc3\") " pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:16 crc kubenswrapper[4632]: I0313 11:26:16.939745 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt6sp\" (UniqueName: \"kubernetes.io/projected/743aacd5-2974-48e5-b2c8-ed448225edc3-kube-api-access-rt6sp\") pod \"redhat-marketplace-tctdq\" (UID: \"743aacd5-2974-48e5-b2c8-ed448225edc3\") " pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:16 crc kubenswrapper[4632]: I0313 11:26:16.940245 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743aacd5-2974-48e5-b2c8-ed448225edc3-utilities\") pod \"redhat-marketplace-tctdq\" (UID: \"743aacd5-2974-48e5-b2c8-ed448225edc3\") " pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:16 crc kubenswrapper[4632]: I0313 11:26:16.940305 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743aacd5-2974-48e5-b2c8-ed448225edc3-catalog-content\") pod \"redhat-marketplace-tctdq\" (UID: \"743aacd5-2974-48e5-b2c8-ed448225edc3\") " pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:17 crc kubenswrapper[4632]: I0313 11:26:17.556711 4632 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rt6sp\" (UniqueName: \"kubernetes.io/projected/743aacd5-2974-48e5-b2c8-ed448225edc3-kube-api-access-rt6sp\") pod \"redhat-marketplace-tctdq\" (UID: \"743aacd5-2974-48e5-b2c8-ed448225edc3\") " pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:17 crc kubenswrapper[4632]: I0313 11:26:17.633275 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:18 crc kubenswrapper[4632]: I0313 11:26:18.209013 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tctdq"] Mar 13 11:26:18 crc kubenswrapper[4632]: I0313 11:26:18.453334 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tctdq" event={"ID":"743aacd5-2974-48e5-b2c8-ed448225edc3","Type":"ContainerStarted","Data":"d989bc3bcc743f71b5d3bb9088655740c776263f3efde0646f3c95fa1bdc6620"} Mar 13 11:26:18 crc kubenswrapper[4632]: I0313 11:26:18.453708 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tctdq" event={"ID":"743aacd5-2974-48e5-b2c8-ed448225edc3","Type":"ContainerStarted","Data":"5bdb6190ec028b9a390285458bd532667460f65f6a7b1d61e8a16b8f194afa5a"} Mar 13 11:26:18 crc kubenswrapper[4632]: I0313 11:26:18.953491 4632 scope.go:117] "RemoveContainer" containerID="a287e319f36103c4becc462637b974d036358eff92b92ae569b32780de4efe87" Mar 13 11:26:19 crc kubenswrapper[4632]: I0313 11:26:19.463708 4632 generic.go:334] "Generic (PLEG): container finished" podID="743aacd5-2974-48e5-b2c8-ed448225edc3" containerID="d989bc3bcc743f71b5d3bb9088655740c776263f3efde0646f3c95fa1bdc6620" exitCode=0 Mar 13 11:26:19 crc kubenswrapper[4632]: I0313 11:26:19.463891 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tctdq" event={"ID":"743aacd5-2974-48e5-b2c8-ed448225edc3","Type":"ContainerDied","Data":"d989bc3bcc743f71b5d3bb9088655740c776263f3efde0646f3c95fa1bdc6620"} Mar 13 11:26:20 crc kubenswrapper[4632]: I0313 11:26:20.478963 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tctdq" event={"ID":"743aacd5-2974-48e5-b2c8-ed448225edc3","Type":"ContainerStarted","Data":"cde224ff7ac851fa03bc99aec7f4fa42671c6966019c97205cc33ba61daa73a2"} Mar 13 11:26:22 crc kubenswrapper[4632]: I0313 11:26:22.504008 4632 generic.go:334] "Generic (PLEG): container finished" podID="743aacd5-2974-48e5-b2c8-ed448225edc3" containerID="cde224ff7ac851fa03bc99aec7f4fa42671c6966019c97205cc33ba61daa73a2" exitCode=0 Mar 13 11:26:22 crc kubenswrapper[4632]: I0313 11:26:22.504047 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tctdq" event={"ID":"743aacd5-2974-48e5-b2c8-ed448225edc3","Type":"ContainerDied","Data":"cde224ff7ac851fa03bc99aec7f4fa42671c6966019c97205cc33ba61daa73a2"} Mar 13 11:26:24 crc kubenswrapper[4632]: I0313 11:26:24.533560 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tctdq" event={"ID":"743aacd5-2974-48e5-b2c8-ed448225edc3","Type":"ContainerStarted","Data":"caacbc7cb0001a5e23da731e23554473632bc37a4fa3b14376fde42cab9413c2"} Mar 13 11:26:24 crc kubenswrapper[4632]: I0313 11:26:24.556372 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tctdq" podStartSLOduration=4.81532544 podStartE2EDuration="8.556350258s" 
podCreationTimestamp="2026-03-13 11:26:16 +0000 UTC" firstStartedPulling="2026-03-13 11:26:19.715655216 +0000 UTC m=+4953.738185349" lastFinishedPulling="2026-03-13 11:26:23.456680034 +0000 UTC m=+4957.479210167" observedRunningTime="2026-03-13 11:26:24.554884191 +0000 UTC m=+4958.577414334" watchObservedRunningTime="2026-03-13 11:26:24.556350258 +0000 UTC m=+4958.578880401" Mar 13 11:26:27 crc kubenswrapper[4632]: I0313 11:26:27.633914 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:27 crc kubenswrapper[4632]: I0313 11:26:27.634683 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:28 crc kubenswrapper[4632]: I0313 11:26:28.679759 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-tctdq" podUID="743aacd5-2974-48e5-b2c8-ed448225edc3" containerName="registry-server" probeResult="failure" output=< Mar 13 11:26:28 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:26:28 crc kubenswrapper[4632]: > Mar 13 11:26:37 crc kubenswrapper[4632]: I0313 11:26:37.689313 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:37 crc kubenswrapper[4632]: I0313 11:26:37.750619 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:37 crc kubenswrapper[4632]: I0313 11:26:37.933776 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tctdq"] Mar 13 11:26:39 crc kubenswrapper[4632]: I0313 11:26:39.683831 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tctdq" podUID="743aacd5-2974-48e5-b2c8-ed448225edc3" containerName="registry-server" containerID="cri-o://caacbc7cb0001a5e23da731e23554473632bc37a4fa3b14376fde42cab9413c2" gracePeriod=2 Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.311955 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.343569 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6nl2m"] Mar 13 11:26:40 crc kubenswrapper[4632]: E0313 11:26:40.344148 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743aacd5-2974-48e5-b2c8-ed448225edc3" containerName="registry-server" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.344172 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="743aacd5-2974-48e5-b2c8-ed448225edc3" containerName="registry-server" Mar 13 11:26:40 crc kubenswrapper[4632]: E0313 11:26:40.344196 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743aacd5-2974-48e5-b2c8-ed448225edc3" containerName="extract-content" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.344203 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="743aacd5-2974-48e5-b2c8-ed448225edc3" containerName="extract-content" Mar 13 11:26:40 crc kubenswrapper[4632]: E0313 11:26:40.344241 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743aacd5-2974-48e5-b2c8-ed448225edc3" containerName="extract-utilities" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.344249 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="743aacd5-2974-48e5-b2c8-ed448225edc3" containerName="extract-utilities" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.344499 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="743aacd5-2974-48e5-b2c8-ed448225edc3" containerName="registry-server" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.346212 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.370971 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6nl2m"] Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.450290 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743aacd5-2974-48e5-b2c8-ed448225edc3-utilities\") pod \"743aacd5-2974-48e5-b2c8-ed448225edc3\" (UID: \"743aacd5-2974-48e5-b2c8-ed448225edc3\") " Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.450587 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743aacd5-2974-48e5-b2c8-ed448225edc3-catalog-content\") pod \"743aacd5-2974-48e5-b2c8-ed448225edc3\" (UID: \"743aacd5-2974-48e5-b2c8-ed448225edc3\") " Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.450651 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt6sp\" (UniqueName: \"kubernetes.io/projected/743aacd5-2974-48e5-b2c8-ed448225edc3-kube-api-access-rt6sp\") pod \"743aacd5-2974-48e5-b2c8-ed448225edc3\" (UID: \"743aacd5-2974-48e5-b2c8-ed448225edc3\") " Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.451191 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f934fb-e679-4600-98e5-a67888251b13-catalog-content\") pod \"certified-operators-6nl2m\" (UID: \"45f934fb-e679-4600-98e5-a67888251b13\") " pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 
11:26:40.451271 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f934fb-e679-4600-98e5-a67888251b13-utilities\") pod \"certified-operators-6nl2m\" (UID: \"45f934fb-e679-4600-98e5-a67888251b13\") " pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.451307 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dn74\" (UniqueName: \"kubernetes.io/projected/45f934fb-e679-4600-98e5-a67888251b13-kube-api-access-2dn74\") pod \"certified-operators-6nl2m\" (UID: \"45f934fb-e679-4600-98e5-a67888251b13\") " pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.451332 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/743aacd5-2974-48e5-b2c8-ed448225edc3-utilities" (OuterVolumeSpecName: "utilities") pod "743aacd5-2974-48e5-b2c8-ed448225edc3" (UID: "743aacd5-2974-48e5-b2c8-ed448225edc3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.479641 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/743aacd5-2974-48e5-b2c8-ed448225edc3-kube-api-access-rt6sp" (OuterVolumeSpecName: "kube-api-access-rt6sp") pod "743aacd5-2974-48e5-b2c8-ed448225edc3" (UID: "743aacd5-2974-48e5-b2c8-ed448225edc3"). InnerVolumeSpecName "kube-api-access-rt6sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.481994 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/743aacd5-2974-48e5-b2c8-ed448225edc3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "743aacd5-2974-48e5-b2c8-ed448225edc3" (UID: "743aacd5-2974-48e5-b2c8-ed448225edc3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.552687 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f934fb-e679-4600-98e5-a67888251b13-catalog-content\") pod \"certified-operators-6nl2m\" (UID: \"45f934fb-e679-4600-98e5-a67888251b13\") " pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.552751 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f934fb-e679-4600-98e5-a67888251b13-utilities\") pod \"certified-operators-6nl2m\" (UID: \"45f934fb-e679-4600-98e5-a67888251b13\") " pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.552787 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dn74\" (UniqueName: \"kubernetes.io/projected/45f934fb-e679-4600-98e5-a67888251b13-kube-api-access-2dn74\") pod \"certified-operators-6nl2m\" (UID: \"45f934fb-e679-4600-98e5-a67888251b13\") " pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.552926 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743aacd5-2974-48e5-b2c8-ed448225edc3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.552956 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt6sp\" (UniqueName: \"kubernetes.io/projected/743aacd5-2974-48e5-b2c8-ed448225edc3-kube-api-access-rt6sp\") on node \"crc\" DevicePath \"\"" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.552968 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743aacd5-2974-48e5-b2c8-ed448225edc3-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.553194 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f934fb-e679-4600-98e5-a67888251b13-catalog-content\") pod \"certified-operators-6nl2m\" (UID: \"45f934fb-e679-4600-98e5-a67888251b13\") " pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.553347 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f934fb-e679-4600-98e5-a67888251b13-utilities\") pod \"certified-operators-6nl2m\" (UID: \"45f934fb-e679-4600-98e5-a67888251b13\") " pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.569566 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dn74\" (UniqueName: \"kubernetes.io/projected/45f934fb-e679-4600-98e5-a67888251b13-kube-api-access-2dn74\") pod \"certified-operators-6nl2m\" (UID: \"45f934fb-e679-4600-98e5-a67888251b13\") " pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.680063 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.696216 4632 generic.go:334] "Generic (PLEG): container finished" podID="743aacd5-2974-48e5-b2c8-ed448225edc3" containerID="caacbc7cb0001a5e23da731e23554473632bc37a4fa3b14376fde42cab9413c2" exitCode=0 Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.696277 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tctdq" event={"ID":"743aacd5-2974-48e5-b2c8-ed448225edc3","Type":"ContainerDied","Data":"caacbc7cb0001a5e23da731e23554473632bc37a4fa3b14376fde42cab9413c2"} Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.696378 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tctdq" event={"ID":"743aacd5-2974-48e5-b2c8-ed448225edc3","Type":"ContainerDied","Data":"5bdb6190ec028b9a390285458bd532667460f65f6a7b1d61e8a16b8f194afa5a"} Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.696299 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tctdq" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.696404 4632 scope.go:117] "RemoveContainer" containerID="caacbc7cb0001a5e23da731e23554473632bc37a4fa3b14376fde42cab9413c2" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.730437 4632 scope.go:117] "RemoveContainer" containerID="cde224ff7ac851fa03bc99aec7f4fa42671c6966019c97205cc33ba61daa73a2" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.753178 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tctdq"] Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.779896 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tctdq"] Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.783018 4632 scope.go:117] "RemoveContainer" containerID="d989bc3bcc743f71b5d3bb9088655740c776263f3efde0646f3c95fa1bdc6620" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.813193 4632 scope.go:117] "RemoveContainer" containerID="caacbc7cb0001a5e23da731e23554473632bc37a4fa3b14376fde42cab9413c2" Mar 13 11:26:40 crc kubenswrapper[4632]: E0313 11:26:40.822100 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caacbc7cb0001a5e23da731e23554473632bc37a4fa3b14376fde42cab9413c2\": container with ID starting with caacbc7cb0001a5e23da731e23554473632bc37a4fa3b14376fde42cab9413c2 not found: ID does not exist" containerID="caacbc7cb0001a5e23da731e23554473632bc37a4fa3b14376fde42cab9413c2" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.822157 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caacbc7cb0001a5e23da731e23554473632bc37a4fa3b14376fde42cab9413c2"} err="failed to get container status \"caacbc7cb0001a5e23da731e23554473632bc37a4fa3b14376fde42cab9413c2\": rpc error: code = NotFound desc = could not find container \"caacbc7cb0001a5e23da731e23554473632bc37a4fa3b14376fde42cab9413c2\": container with ID starting with caacbc7cb0001a5e23da731e23554473632bc37a4fa3b14376fde42cab9413c2 not found: ID does not exist" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.822185 4632 scope.go:117] "RemoveContainer" containerID="cde224ff7ac851fa03bc99aec7f4fa42671c6966019c97205cc33ba61daa73a2" Mar 13 11:26:40 crc kubenswrapper[4632]: E0313 11:26:40.826096 4632 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cde224ff7ac851fa03bc99aec7f4fa42671c6966019c97205cc33ba61daa73a2\": container with ID starting with cde224ff7ac851fa03bc99aec7f4fa42671c6966019c97205cc33ba61daa73a2 not found: ID does not exist" containerID="cde224ff7ac851fa03bc99aec7f4fa42671c6966019c97205cc33ba61daa73a2" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.826141 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cde224ff7ac851fa03bc99aec7f4fa42671c6966019c97205cc33ba61daa73a2"} err="failed to get container status \"cde224ff7ac851fa03bc99aec7f4fa42671c6966019c97205cc33ba61daa73a2\": rpc error: code = NotFound desc = could not find container \"cde224ff7ac851fa03bc99aec7f4fa42671c6966019c97205cc33ba61daa73a2\": container with ID starting with cde224ff7ac851fa03bc99aec7f4fa42671c6966019c97205cc33ba61daa73a2 not found: ID does not exist" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.826167 4632 scope.go:117] "RemoveContainer" containerID="d989bc3bcc743f71b5d3bb9088655740c776263f3efde0646f3c95fa1bdc6620" Mar 13 11:26:40 crc kubenswrapper[4632]: E0313 11:26:40.835166 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d989bc3bcc743f71b5d3bb9088655740c776263f3efde0646f3c95fa1bdc6620\": container with ID starting with d989bc3bcc743f71b5d3bb9088655740c776263f3efde0646f3c95fa1bdc6620 not found: ID does not exist" containerID="d989bc3bcc743f71b5d3bb9088655740c776263f3efde0646f3c95fa1bdc6620" Mar 13 11:26:40 crc kubenswrapper[4632]: I0313 11:26:40.835210 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d989bc3bcc743f71b5d3bb9088655740c776263f3efde0646f3c95fa1bdc6620"} err="failed to get container status \"d989bc3bcc743f71b5d3bb9088655740c776263f3efde0646f3c95fa1bdc6620\": rpc error: code = NotFound desc = could not find container \"d989bc3bcc743f71b5d3bb9088655740c776263f3efde0646f3c95fa1bdc6620\": container with ID starting with d989bc3bcc743f71b5d3bb9088655740c776263f3efde0646f3c95fa1bdc6620 not found: ID does not exist" Mar 13 11:26:42 crc kubenswrapper[4632]: I0313 11:26:41.461905 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6nl2m"] Mar 13 11:26:42 crc kubenswrapper[4632]: I0313 11:26:41.713356 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nl2m" event={"ID":"45f934fb-e679-4600-98e5-a67888251b13","Type":"ContainerStarted","Data":"4e3b7c19053eb4c84db186ea166af9584471f9afc53b60a314467a0d726485ee"} Mar 13 11:26:42 crc kubenswrapper[4632]: I0313 11:26:42.057690 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="743aacd5-2974-48e5-b2c8-ed448225edc3" path="/var/lib/kubelet/pods/743aacd5-2974-48e5-b2c8-ed448225edc3/volumes" Mar 13 11:26:42 crc kubenswrapper[4632]: I0313 11:26:42.723698 4632 generic.go:334] "Generic (PLEG): container finished" podID="45f934fb-e679-4600-98e5-a67888251b13" containerID="5c3a87ecc65399621b88755466dcd9fbdd959c39ae545371f0246e2e62cc5227" exitCode=0 Mar 13 11:26:42 crc kubenswrapper[4632]: I0313 11:26:42.724056 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nl2m" event={"ID":"45f934fb-e679-4600-98e5-a67888251b13","Type":"ContainerDied","Data":"5c3a87ecc65399621b88755466dcd9fbdd959c39ae545371f0246e2e62cc5227"} Mar 13 
11:26:44 crc kubenswrapper[4632]: I0313 11:26:44.742874 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nl2m" event={"ID":"45f934fb-e679-4600-98e5-a67888251b13","Type":"ContainerStarted","Data":"6c54f8c37225b428e25e0b2eb3a551336c13c5eab39c71c35116c1eb9a44fab6"} Mar 13 11:26:46 crc kubenswrapper[4632]: I0313 11:26:46.762138 4632 generic.go:334] "Generic (PLEG): container finished" podID="45f934fb-e679-4600-98e5-a67888251b13" containerID="6c54f8c37225b428e25e0b2eb3a551336c13c5eab39c71c35116c1eb9a44fab6" exitCode=0 Mar 13 11:26:46 crc kubenswrapper[4632]: I0313 11:26:46.762207 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nl2m" event={"ID":"45f934fb-e679-4600-98e5-a67888251b13","Type":"ContainerDied","Data":"6c54f8c37225b428e25e0b2eb3a551336c13c5eab39c71c35116c1eb9a44fab6"} Mar 13 11:26:47 crc kubenswrapper[4632]: I0313 11:26:47.773831 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nl2m" event={"ID":"45f934fb-e679-4600-98e5-a67888251b13","Type":"ContainerStarted","Data":"1f3d3300925758d71546e25b20caa6d6766393cc646bf78998dc1871e1262799"} Mar 13 11:26:47 crc kubenswrapper[4632]: I0313 11:26:47.799897 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6nl2m" podStartSLOduration=3.330901611 podStartE2EDuration="7.799877975s" podCreationTimestamp="2026-03-13 11:26:40 +0000 UTC" firstStartedPulling="2026-03-13 11:26:42.725752978 +0000 UTC m=+4976.748283111" lastFinishedPulling="2026-03-13 11:26:47.194729332 +0000 UTC m=+4981.217259475" observedRunningTime="2026-03-13 11:26:47.789661213 +0000 UTC m=+4981.812191366" watchObservedRunningTime="2026-03-13 11:26:47.799877975 +0000 UTC m=+4981.822408108" Mar 13 11:26:50 crc kubenswrapper[4632]: I0313 11:26:50.680694 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:26:50 crc kubenswrapper[4632]: I0313 11:26:50.682284 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:26:51 crc kubenswrapper[4632]: I0313 11:26:51.766113 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6nl2m" podUID="45f934fb-e679-4600-98e5-a67888251b13" containerName="registry-server" probeResult="failure" output=< Mar 13 11:26:51 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:26:51 crc kubenswrapper[4632]: > Mar 13 11:27:00 crc kubenswrapper[4632]: I0313 11:27:00.762069 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:27:00 crc kubenswrapper[4632]: I0313 11:27:00.832528 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:27:01 crc kubenswrapper[4632]: I0313 11:27:01.007389 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6nl2m"] Mar 13 11:27:01 crc kubenswrapper[4632]: I0313 11:27:01.912457 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6nl2m" podUID="45f934fb-e679-4600-98e5-a67888251b13" containerName="registry-server" 
containerID="cri-o://1f3d3300925758d71546e25b20caa6d6766393cc646bf78998dc1871e1262799" gracePeriod=2 Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.550722 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.699361 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dn74\" (UniqueName: \"kubernetes.io/projected/45f934fb-e679-4600-98e5-a67888251b13-kube-api-access-2dn74\") pod \"45f934fb-e679-4600-98e5-a67888251b13\" (UID: \"45f934fb-e679-4600-98e5-a67888251b13\") " Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.699671 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f934fb-e679-4600-98e5-a67888251b13-catalog-content\") pod \"45f934fb-e679-4600-98e5-a67888251b13\" (UID: \"45f934fb-e679-4600-98e5-a67888251b13\") " Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.700299 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f934fb-e679-4600-98e5-a67888251b13-utilities\") pod \"45f934fb-e679-4600-98e5-a67888251b13\" (UID: \"45f934fb-e679-4600-98e5-a67888251b13\") " Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.701385 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f934fb-e679-4600-98e5-a67888251b13-utilities" (OuterVolumeSpecName: "utilities") pod "45f934fb-e679-4600-98e5-a67888251b13" (UID: "45f934fb-e679-4600-98e5-a67888251b13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.702308 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f934fb-e679-4600-98e5-a67888251b13-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.706993 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45f934fb-e679-4600-98e5-a67888251b13-kube-api-access-2dn74" (OuterVolumeSpecName: "kube-api-access-2dn74") pod "45f934fb-e679-4600-98e5-a67888251b13" (UID: "45f934fb-e679-4600-98e5-a67888251b13"). InnerVolumeSpecName "kube-api-access-2dn74". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.764542 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f934fb-e679-4600-98e5-a67888251b13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45f934fb-e679-4600-98e5-a67888251b13" (UID: "45f934fb-e679-4600-98e5-a67888251b13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.803712 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dn74\" (UniqueName: \"kubernetes.io/projected/45f934fb-e679-4600-98e5-a67888251b13-kube-api-access-2dn74\") on node \"crc\" DevicePath \"\"" Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.803750 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f934fb-e679-4600-98e5-a67888251b13-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.936421 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6nl2m" Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.937074 4632 generic.go:334] "Generic (PLEG): container finished" podID="45f934fb-e679-4600-98e5-a67888251b13" containerID="1f3d3300925758d71546e25b20caa6d6766393cc646bf78998dc1871e1262799" exitCode=0 Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.937145 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nl2m" event={"ID":"45f934fb-e679-4600-98e5-a67888251b13","Type":"ContainerDied","Data":"1f3d3300925758d71546e25b20caa6d6766393cc646bf78998dc1871e1262799"} Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.937187 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nl2m" event={"ID":"45f934fb-e679-4600-98e5-a67888251b13","Type":"ContainerDied","Data":"4e3b7c19053eb4c84db186ea166af9584471f9afc53b60a314467a0d726485ee"} Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.937218 4632 scope.go:117] "RemoveContainer" containerID="1f3d3300925758d71546e25b20caa6d6766393cc646bf78998dc1871e1262799" Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.977857 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6nl2m"] Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.982463 4632 scope.go:117] "RemoveContainer" containerID="6c54f8c37225b428e25e0b2eb3a551336c13c5eab39c71c35116c1eb9a44fab6" Mar 13 11:27:02 crc kubenswrapper[4632]: I0313 11:27:02.987760 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6nl2m"] Mar 13 11:27:03 crc kubenswrapper[4632]: I0313 11:27:03.010960 4632 scope.go:117] "RemoveContainer" containerID="5c3a87ecc65399621b88755466dcd9fbdd959c39ae545371f0246e2e62cc5227" Mar 13 11:27:03 crc kubenswrapper[4632]: I0313 11:27:03.045130 4632 scope.go:117] "RemoveContainer" containerID="1f3d3300925758d71546e25b20caa6d6766393cc646bf78998dc1871e1262799" Mar 13 11:27:03 crc kubenswrapper[4632]: E0313 11:27:03.046242 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f3d3300925758d71546e25b20caa6d6766393cc646bf78998dc1871e1262799\": container with ID starting with 1f3d3300925758d71546e25b20caa6d6766393cc646bf78998dc1871e1262799 not found: ID does not exist" containerID="1f3d3300925758d71546e25b20caa6d6766393cc646bf78998dc1871e1262799" Mar 13 11:27:03 crc kubenswrapper[4632]: I0313 11:27:03.046370 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f3d3300925758d71546e25b20caa6d6766393cc646bf78998dc1871e1262799"} err="failed to get container status 
\"1f3d3300925758d71546e25b20caa6d6766393cc646bf78998dc1871e1262799\": rpc error: code = NotFound desc = could not find container \"1f3d3300925758d71546e25b20caa6d6766393cc646bf78998dc1871e1262799\": container with ID starting with 1f3d3300925758d71546e25b20caa6d6766393cc646bf78998dc1871e1262799 not found: ID does not exist" Mar 13 11:27:03 crc kubenswrapper[4632]: I0313 11:27:03.046452 4632 scope.go:117] "RemoveContainer" containerID="6c54f8c37225b428e25e0b2eb3a551336c13c5eab39c71c35116c1eb9a44fab6" Mar 13 11:27:03 crc kubenswrapper[4632]: E0313 11:27:03.046972 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c54f8c37225b428e25e0b2eb3a551336c13c5eab39c71c35116c1eb9a44fab6\": container with ID starting with 6c54f8c37225b428e25e0b2eb3a551336c13c5eab39c71c35116c1eb9a44fab6 not found: ID does not exist" containerID="6c54f8c37225b428e25e0b2eb3a551336c13c5eab39c71c35116c1eb9a44fab6" Mar 13 11:27:03 crc kubenswrapper[4632]: I0313 11:27:03.047071 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c54f8c37225b428e25e0b2eb3a551336c13c5eab39c71c35116c1eb9a44fab6"} err="failed to get container status \"6c54f8c37225b428e25e0b2eb3a551336c13c5eab39c71c35116c1eb9a44fab6\": rpc error: code = NotFound desc = could not find container \"6c54f8c37225b428e25e0b2eb3a551336c13c5eab39c71c35116c1eb9a44fab6\": container with ID starting with 6c54f8c37225b428e25e0b2eb3a551336c13c5eab39c71c35116c1eb9a44fab6 not found: ID does not exist" Mar 13 11:27:03 crc kubenswrapper[4632]: I0313 11:27:03.047140 4632 scope.go:117] "RemoveContainer" containerID="5c3a87ecc65399621b88755466dcd9fbdd959c39ae545371f0246e2e62cc5227" Mar 13 11:27:03 crc kubenswrapper[4632]: E0313 11:27:03.047402 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c3a87ecc65399621b88755466dcd9fbdd959c39ae545371f0246e2e62cc5227\": container with ID starting with 5c3a87ecc65399621b88755466dcd9fbdd959c39ae545371f0246e2e62cc5227 not found: ID does not exist" containerID="5c3a87ecc65399621b88755466dcd9fbdd959c39ae545371f0246e2e62cc5227" Mar 13 11:27:03 crc kubenswrapper[4632]: I0313 11:27:03.047484 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c3a87ecc65399621b88755466dcd9fbdd959c39ae545371f0246e2e62cc5227"} err="failed to get container status \"5c3a87ecc65399621b88755466dcd9fbdd959c39ae545371f0246e2e62cc5227\": rpc error: code = NotFound desc = could not find container \"5c3a87ecc65399621b88755466dcd9fbdd959c39ae545371f0246e2e62cc5227\": container with ID starting with 5c3a87ecc65399621b88755466dcd9fbdd959c39ae545371f0246e2e62cc5227 not found: ID does not exist" Mar 13 11:27:04 crc kubenswrapper[4632]: I0313 11:27:04.062347 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45f934fb-e679-4600-98e5-a67888251b13" path="/var/lib/kubelet/pods/45f934fb-e679-4600-98e5-a67888251b13/volumes" Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.509965 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zlblp"] Mar 13 11:27:28 crc kubenswrapper[4632]: E0313 11:27:28.510761 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f934fb-e679-4600-98e5-a67888251b13" containerName="extract-content" Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.510773 4632 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="45f934fb-e679-4600-98e5-a67888251b13" containerName="extract-content" Mar 13 11:27:28 crc kubenswrapper[4632]: E0313 11:27:28.510785 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f934fb-e679-4600-98e5-a67888251b13" containerName="extract-utilities" Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.510792 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f934fb-e679-4600-98e5-a67888251b13" containerName="extract-utilities" Mar 13 11:27:28 crc kubenswrapper[4632]: E0313 11:27:28.510804 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f934fb-e679-4600-98e5-a67888251b13" containerName="registry-server" Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.510810 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f934fb-e679-4600-98e5-a67888251b13" containerName="registry-server" Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.511013 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="45f934fb-e679-4600-98e5-a67888251b13" containerName="registry-server" Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.512277 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.541363 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zlblp"] Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.693655 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-catalog-content\") pod \"community-operators-zlblp\" (UID: \"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7\") " pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.693979 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-utilities\") pod \"community-operators-zlblp\" (UID: \"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7\") " pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.694089 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds2wt\" (UniqueName: \"kubernetes.io/projected/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-kube-api-access-ds2wt\") pod \"community-operators-zlblp\" (UID: \"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7\") " pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.795666 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-catalog-content\") pod \"community-operators-zlblp\" (UID: \"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7\") " pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.795837 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-utilities\") pod \"community-operators-zlblp\" (UID: \"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7\") " pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.795876 4632 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds2wt\" (UniqueName: \"kubernetes.io/projected/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-kube-api-access-ds2wt\") pod \"community-operators-zlblp\" (UID: \"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7\") " pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.796246 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-catalog-content\") pod \"community-operators-zlblp\" (UID: \"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7\") " pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.796663 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-utilities\") pod \"community-operators-zlblp\" (UID: \"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7\") " pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.831318 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds2wt\" (UniqueName: \"kubernetes.io/projected/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-kube-api-access-ds2wt\") pod \"community-operators-zlblp\" (UID: \"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7\") " pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:28 crc kubenswrapper[4632]: I0313 11:27:28.900528 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:29 crc kubenswrapper[4632]: I0313 11:27:29.389274 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zlblp"] Mar 13 11:27:30 crc kubenswrapper[4632]: I0313 11:27:30.255608 4632 generic.go:334] "Generic (PLEG): container finished" podID="7f615080-fa27-4ea2-ab8b-bd6409bdcaf7" containerID="588578b93ec0d7a4ab9503a78edfe460fc82f8b613c54d0795b8a8243a44b801" exitCode=0 Mar 13 11:27:30 crc kubenswrapper[4632]: I0313 11:27:30.255667 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlblp" event={"ID":"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7","Type":"ContainerDied","Data":"588578b93ec0d7a4ab9503a78edfe460fc82f8b613c54d0795b8a8243a44b801"} Mar 13 11:27:30 crc kubenswrapper[4632]: I0313 11:27:30.255731 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlblp" event={"ID":"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7","Type":"ContainerStarted","Data":"f9576dbf1fd0b8a99204ae893036fa6519be832d4a433a5ae4a842e9e21ad975"} Mar 13 11:27:31 crc kubenswrapper[4632]: I0313 11:27:31.270611 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlblp" event={"ID":"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7","Type":"ContainerStarted","Data":"ddd93ebe954f83eabb9f3a51aec00226efd4801cb41ff33b60684d2fb0414f63"} Mar 13 11:27:33 crc kubenswrapper[4632]: I0313 11:27:33.293746 4632 generic.go:334] "Generic (PLEG): container finished" podID="7f615080-fa27-4ea2-ab8b-bd6409bdcaf7" containerID="ddd93ebe954f83eabb9f3a51aec00226efd4801cb41ff33b60684d2fb0414f63" exitCode=0 Mar 13 11:27:33 crc kubenswrapper[4632]: I0313 11:27:33.293833 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlblp" 
event={"ID":"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7","Type":"ContainerDied","Data":"ddd93ebe954f83eabb9f3a51aec00226efd4801cb41ff33b60684d2fb0414f63"} Mar 13 11:27:35 crc kubenswrapper[4632]: I0313 11:27:35.317260 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlblp" event={"ID":"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7","Type":"ContainerStarted","Data":"231d214dea21217dfb7e7d55a896c931c65d43b0af2f081bf87a1d9116d05023"} Mar 13 11:27:35 crc kubenswrapper[4632]: I0313 11:27:35.354435 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zlblp" podStartSLOduration=3.741217878 podStartE2EDuration="7.354410168s" podCreationTimestamp="2026-03-13 11:27:28 +0000 UTC" firstStartedPulling="2026-03-13 11:27:30.259033727 +0000 UTC m=+5024.281563860" lastFinishedPulling="2026-03-13 11:27:33.872226017 +0000 UTC m=+5027.894756150" observedRunningTime="2026-03-13 11:27:35.343819046 +0000 UTC m=+5029.366349189" watchObservedRunningTime="2026-03-13 11:27:35.354410168 +0000 UTC m=+5029.376940311" Mar 13 11:27:38 crc kubenswrapper[4632]: I0313 11:27:38.900891 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:38 crc kubenswrapper[4632]: I0313 11:27:38.901325 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:39 crc kubenswrapper[4632]: I0313 11:27:39.953533 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zlblp" podUID="7f615080-fa27-4ea2-ab8b-bd6409bdcaf7" containerName="registry-server" probeResult="failure" output=< Mar 13 11:27:39 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:27:39 crc kubenswrapper[4632]: > Mar 13 11:27:40 crc kubenswrapper[4632]: I0313 11:27:40.461001 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:27:40 crc kubenswrapper[4632]: I0313 11:27:40.461134 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:27:48 crc kubenswrapper[4632]: I0313 11:27:48.955232 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:49 crc kubenswrapper[4632]: I0313 11:27:49.010105 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:49 crc kubenswrapper[4632]: I0313 11:27:49.260973 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zlblp"] Mar 13 11:27:50 crc kubenswrapper[4632]: I0313 11:27:50.480209 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zlblp" podUID="7f615080-fa27-4ea2-ab8b-bd6409bdcaf7" containerName="registry-server" 
containerID="cri-o://231d214dea21217dfb7e7d55a896c931c65d43b0af2f081bf87a1d9116d05023" gracePeriod=2 Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.010213 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.025097 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds2wt\" (UniqueName: \"kubernetes.io/projected/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-kube-api-access-ds2wt\") pod \"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7\" (UID: \"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7\") " Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.025406 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-utilities\") pod \"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7\" (UID: \"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7\") " Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.025847 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-catalog-content\") pod \"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7\" (UID: \"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7\") " Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.031201 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-kube-api-access-ds2wt" (OuterVolumeSpecName: "kube-api-access-ds2wt") pod "7f615080-fa27-4ea2-ab8b-bd6409bdcaf7" (UID: "7f615080-fa27-4ea2-ab8b-bd6409bdcaf7"). InnerVolumeSpecName "kube-api-access-ds2wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.031586 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-utilities" (OuterVolumeSpecName: "utilities") pod "7f615080-fa27-4ea2-ab8b-bd6409bdcaf7" (UID: "7f615080-fa27-4ea2-ab8b-bd6409bdcaf7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.106801 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f615080-fa27-4ea2-ab8b-bd6409bdcaf7" (UID: "7f615080-fa27-4ea2-ab8b-bd6409bdcaf7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.128665 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.128727 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds2wt\" (UniqueName: \"kubernetes.io/projected/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-kube-api-access-ds2wt\") on node \"crc\" DevicePath \"\"" Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.128739 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.492584 4632 generic.go:334] "Generic (PLEG): container finished" podID="7f615080-fa27-4ea2-ab8b-bd6409bdcaf7" containerID="231d214dea21217dfb7e7d55a896c931c65d43b0af2f081bf87a1d9116d05023" exitCode=0 Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.492669 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlblp" event={"ID":"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7","Type":"ContainerDied","Data":"231d214dea21217dfb7e7d55a896c931c65d43b0af2f081bf87a1d9116d05023"} Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.492761 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlblp" event={"ID":"7f615080-fa27-4ea2-ab8b-bd6409bdcaf7","Type":"ContainerDied","Data":"f9576dbf1fd0b8a99204ae893036fa6519be832d4a433a5ae4a842e9e21ad975"} Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.492807 4632 scope.go:117] "RemoveContainer" containerID="231d214dea21217dfb7e7d55a896c931c65d43b0af2f081bf87a1d9116d05023" Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.493180 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zlblp" Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.518994 4632 scope.go:117] "RemoveContainer" containerID="ddd93ebe954f83eabb9f3a51aec00226efd4801cb41ff33b60684d2fb0414f63" Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.539384 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zlblp"] Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.551823 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zlblp"] Mar 13 11:27:51 crc kubenswrapper[4632]: I0313 11:27:51.979862 4632 scope.go:117] "RemoveContainer" containerID="588578b93ec0d7a4ab9503a78edfe460fc82f8b613c54d0795b8a8243a44b801" Mar 13 11:27:52 crc kubenswrapper[4632]: I0313 11:27:52.064027 4632 scope.go:117] "RemoveContainer" containerID="231d214dea21217dfb7e7d55a896c931c65d43b0af2f081bf87a1d9116d05023" Mar 13 11:27:52 crc kubenswrapper[4632]: E0313 11:27:52.064554 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"231d214dea21217dfb7e7d55a896c931c65d43b0af2f081bf87a1d9116d05023\": container with ID starting with 231d214dea21217dfb7e7d55a896c931c65d43b0af2f081bf87a1d9116d05023 not found: ID does not exist" containerID="231d214dea21217dfb7e7d55a896c931c65d43b0af2f081bf87a1d9116d05023" Mar 13 11:27:52 crc kubenswrapper[4632]: I0313 11:27:52.064599 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"231d214dea21217dfb7e7d55a896c931c65d43b0af2f081bf87a1d9116d05023"} err="failed to get container status \"231d214dea21217dfb7e7d55a896c931c65d43b0af2f081bf87a1d9116d05023\": rpc error: code = NotFound desc = could not find container \"231d214dea21217dfb7e7d55a896c931c65d43b0af2f081bf87a1d9116d05023\": container with ID starting with 231d214dea21217dfb7e7d55a896c931c65d43b0af2f081bf87a1d9116d05023 not found: ID does not exist" Mar 13 11:27:52 crc kubenswrapper[4632]: I0313 11:27:52.064622 4632 scope.go:117] "RemoveContainer" containerID="ddd93ebe954f83eabb9f3a51aec00226efd4801cb41ff33b60684d2fb0414f63" Mar 13 11:27:52 crc kubenswrapper[4632]: I0313 11:27:52.065008 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f615080-fa27-4ea2-ab8b-bd6409bdcaf7" path="/var/lib/kubelet/pods/7f615080-fa27-4ea2-ab8b-bd6409bdcaf7/volumes" Mar 13 11:27:52 crc kubenswrapper[4632]: E0313 11:27:52.065075 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd93ebe954f83eabb9f3a51aec00226efd4801cb41ff33b60684d2fb0414f63\": container with ID starting with ddd93ebe954f83eabb9f3a51aec00226efd4801cb41ff33b60684d2fb0414f63 not found: ID does not exist" containerID="ddd93ebe954f83eabb9f3a51aec00226efd4801cb41ff33b60684d2fb0414f63" Mar 13 11:27:52 crc kubenswrapper[4632]: I0313 11:27:52.065111 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd93ebe954f83eabb9f3a51aec00226efd4801cb41ff33b60684d2fb0414f63"} err="failed to get container status \"ddd93ebe954f83eabb9f3a51aec00226efd4801cb41ff33b60684d2fb0414f63\": rpc error: code = NotFound desc = could not find container \"ddd93ebe954f83eabb9f3a51aec00226efd4801cb41ff33b60684d2fb0414f63\": container with ID starting with ddd93ebe954f83eabb9f3a51aec00226efd4801cb41ff33b60684d2fb0414f63 not found: ID does not exist" Mar 13 11:27:52 crc kubenswrapper[4632]: I0313 
11:27:52.065136 4632 scope.go:117] "RemoveContainer" containerID="588578b93ec0d7a4ab9503a78edfe460fc82f8b613c54d0795b8a8243a44b801" Mar 13 11:27:52 crc kubenswrapper[4632]: E0313 11:27:52.065613 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"588578b93ec0d7a4ab9503a78edfe460fc82f8b613c54d0795b8a8243a44b801\": container with ID starting with 588578b93ec0d7a4ab9503a78edfe460fc82f8b613c54d0795b8a8243a44b801 not found: ID does not exist" containerID="588578b93ec0d7a4ab9503a78edfe460fc82f8b613c54d0795b8a8243a44b801" Mar 13 11:27:52 crc kubenswrapper[4632]: I0313 11:27:52.065649 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"588578b93ec0d7a4ab9503a78edfe460fc82f8b613c54d0795b8a8243a44b801"} err="failed to get container status \"588578b93ec0d7a4ab9503a78edfe460fc82f8b613c54d0795b8a8243a44b801\": rpc error: code = NotFound desc = could not find container \"588578b93ec0d7a4ab9503a78edfe460fc82f8b613c54d0795b8a8243a44b801\": container with ID starting with 588578b93ec0d7a4ab9503a78edfe460fc82f8b613c54d0795b8a8243a44b801 not found: ID does not exist" Mar 13 11:28:00 crc kubenswrapper[4632]: I0313 11:28:00.208167 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556688-btp9b"] Mar 13 11:28:00 crc kubenswrapper[4632]: E0313 11:28:00.209216 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f615080-fa27-4ea2-ab8b-bd6409bdcaf7" containerName="registry-server" Mar 13 11:28:00 crc kubenswrapper[4632]: I0313 11:28:00.209238 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f615080-fa27-4ea2-ab8b-bd6409bdcaf7" containerName="registry-server" Mar 13 11:28:00 crc kubenswrapper[4632]: E0313 11:28:00.209280 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f615080-fa27-4ea2-ab8b-bd6409bdcaf7" containerName="extract-utilities" Mar 13 11:28:00 crc kubenswrapper[4632]: I0313 11:28:00.209290 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f615080-fa27-4ea2-ab8b-bd6409bdcaf7" containerName="extract-utilities" Mar 13 11:28:00 crc kubenswrapper[4632]: E0313 11:28:00.209307 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f615080-fa27-4ea2-ab8b-bd6409bdcaf7" containerName="extract-content" Mar 13 11:28:00 crc kubenswrapper[4632]: I0313 11:28:00.209317 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f615080-fa27-4ea2-ab8b-bd6409bdcaf7" containerName="extract-content" Mar 13 11:28:00 crc kubenswrapper[4632]: I0313 11:28:00.209596 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f615080-fa27-4ea2-ab8b-bd6409bdcaf7" containerName="registry-server" Mar 13 11:28:00 crc kubenswrapper[4632]: I0313 11:28:00.210394 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556688-btp9b" Mar 13 11:28:00 crc kubenswrapper[4632]: I0313 11:28:00.218427 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:28:00 crc kubenswrapper[4632]: I0313 11:28:00.218668 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:28:00 crc kubenswrapper[4632]: I0313 11:28:00.218818 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:28:00 crc kubenswrapper[4632]: I0313 11:28:00.236649 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556688-btp9b"] Mar 13 11:28:00 crc kubenswrapper[4632]: I0313 11:28:00.311969 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwnbq\" (UniqueName: \"kubernetes.io/projected/99bcc88d-9858-4d4f-97e5-68e185f06401-kube-api-access-xwnbq\") pod \"auto-csr-approver-29556688-btp9b\" (UID: \"99bcc88d-9858-4d4f-97e5-68e185f06401\") " pod="openshift-infra/auto-csr-approver-29556688-btp9b" Mar 13 11:28:00 crc kubenswrapper[4632]: I0313 11:28:00.414385 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwnbq\" (UniqueName: \"kubernetes.io/projected/99bcc88d-9858-4d4f-97e5-68e185f06401-kube-api-access-xwnbq\") pod \"auto-csr-approver-29556688-btp9b\" (UID: \"99bcc88d-9858-4d4f-97e5-68e185f06401\") " pod="openshift-infra/auto-csr-approver-29556688-btp9b" Mar 13 11:28:00 crc kubenswrapper[4632]: I0313 11:28:00.450806 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwnbq\" (UniqueName: \"kubernetes.io/projected/99bcc88d-9858-4d4f-97e5-68e185f06401-kube-api-access-xwnbq\") pod \"auto-csr-approver-29556688-btp9b\" (UID: \"99bcc88d-9858-4d4f-97e5-68e185f06401\") " pod="openshift-infra/auto-csr-approver-29556688-btp9b" Mar 13 11:28:00 crc kubenswrapper[4632]: I0313 11:28:00.538788 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556688-btp9b" Mar 13 11:28:01 crc kubenswrapper[4632]: I0313 11:28:01.030304 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556688-btp9b"] Mar 13 11:28:01 crc kubenswrapper[4632]: I0313 11:28:01.607184 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556688-btp9b" event={"ID":"99bcc88d-9858-4d4f-97e5-68e185f06401","Type":"ContainerStarted","Data":"8530e4258bcde86671610ca0987044da96d15fd754611f558323ff659137e25a"} Mar 13 11:28:03 crc kubenswrapper[4632]: I0313 11:28:03.637410 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556688-btp9b" event={"ID":"99bcc88d-9858-4d4f-97e5-68e185f06401","Type":"ContainerStarted","Data":"419020d6577499249f09db214d08f7500440b2163407ca9b721f361af1ab72f7"} Mar 13 11:28:03 crc kubenswrapper[4632]: I0313 11:28:03.660222 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556688-btp9b" podStartSLOduration=2.405580217 podStartE2EDuration="3.660198468s" podCreationTimestamp="2026-03-13 11:28:00 +0000 UTC" firstStartedPulling="2026-03-13 11:28:01.036776138 +0000 UTC m=+5055.059306271" lastFinishedPulling="2026-03-13 11:28:02.291394369 +0000 UTC m=+5056.313924522" observedRunningTime="2026-03-13 11:28:03.657872702 +0000 UTC m=+5057.680402845" watchObservedRunningTime="2026-03-13 11:28:03.660198468 +0000 UTC m=+5057.682728611" Mar 13 11:28:04 crc kubenswrapper[4632]: I0313 11:28:04.650090 4632 generic.go:334] "Generic (PLEG): container finished" podID="99bcc88d-9858-4d4f-97e5-68e185f06401" containerID="419020d6577499249f09db214d08f7500440b2163407ca9b721f361af1ab72f7" exitCode=0 Mar 13 11:28:04 crc kubenswrapper[4632]: I0313 11:28:04.650215 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556688-btp9b" event={"ID":"99bcc88d-9858-4d4f-97e5-68e185f06401","Type":"ContainerDied","Data":"419020d6577499249f09db214d08f7500440b2163407ca9b721f361af1ab72f7"} Mar 13 11:28:06 crc kubenswrapper[4632]: I0313 11:28:06.032103 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556688-btp9b" Mar 13 11:28:06 crc kubenswrapper[4632]: I0313 11:28:06.132301 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwnbq\" (UniqueName: \"kubernetes.io/projected/99bcc88d-9858-4d4f-97e5-68e185f06401-kube-api-access-xwnbq\") pod \"99bcc88d-9858-4d4f-97e5-68e185f06401\" (UID: \"99bcc88d-9858-4d4f-97e5-68e185f06401\") " Mar 13 11:28:06 crc kubenswrapper[4632]: I0313 11:28:06.155458 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99bcc88d-9858-4d4f-97e5-68e185f06401-kube-api-access-xwnbq" (OuterVolumeSpecName: "kube-api-access-xwnbq") pod "99bcc88d-9858-4d4f-97e5-68e185f06401" (UID: "99bcc88d-9858-4d4f-97e5-68e185f06401"). InnerVolumeSpecName "kube-api-access-xwnbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:28:06 crc kubenswrapper[4632]: I0313 11:28:06.234038 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwnbq\" (UniqueName: \"kubernetes.io/projected/99bcc88d-9858-4d4f-97e5-68e185f06401-kube-api-access-xwnbq\") on node \"crc\" DevicePath \"\"" Mar 13 11:28:06 crc kubenswrapper[4632]: I0313 11:28:06.671670 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556688-btp9b" event={"ID":"99bcc88d-9858-4d4f-97e5-68e185f06401","Type":"ContainerDied","Data":"8530e4258bcde86671610ca0987044da96d15fd754611f558323ff659137e25a"} Mar 13 11:28:06 crc kubenswrapper[4632]: I0313 11:28:06.671722 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8530e4258bcde86671610ca0987044da96d15fd754611f558323ff659137e25a" Mar 13 11:28:06 crc kubenswrapper[4632]: I0313 11:28:06.671782 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556688-btp9b" Mar 13 11:28:06 crc kubenswrapper[4632]: E0313 11:28:06.796819 4632 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99bcc88d_9858_4d4f_97e5_68e185f06401.slice/crio-8530e4258bcde86671610ca0987044da96d15fd754611f558323ff659137e25a\": RecentStats: unable to find data in memory cache]" Mar 13 11:28:07 crc kubenswrapper[4632]: I0313 11:28:07.130084 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556682-pm7lm"] Mar 13 11:28:07 crc kubenswrapper[4632]: I0313 11:28:07.138401 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556682-pm7lm"] Mar 13 11:28:08 crc kubenswrapper[4632]: I0313 11:28:08.065997 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="191fb79a-448d-4181-8346-f9dec8721d81" path="/var/lib/kubelet/pods/191fb79a-448d-4181-8346-f9dec8721d81/volumes" Mar 13 11:28:10 crc kubenswrapper[4632]: I0313 11:28:10.460722 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:28:10 crc kubenswrapper[4632]: I0313 11:28:10.461059 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:28:19 crc kubenswrapper[4632]: I0313 11:28:19.843254 4632 scope.go:117] "RemoveContainer" containerID="6b6b471905ed6fd6c16476a8c40a8d65b889486b4d10fadb0c4b7b6cf7a150be" Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.458922 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l9hln"] Mar 13 11:28:40 crc kubenswrapper[4632]: E0313 11:28:40.459688 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99bcc88d-9858-4d4f-97e5-68e185f06401" containerName="oc" Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.459700 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="99bcc88d-9858-4d4f-97e5-68e185f06401" containerName="oc" Mar 
13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.459904 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="99bcc88d-9858-4d4f-97e5-68e185f06401" containerName="oc" Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.461983 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.462037 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.463793 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.464547 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.464611 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" gracePeriod=600 Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.464721 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.482960 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l9hln"] Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.508702 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddpsp\" (UniqueName: \"kubernetes.io/projected/e855a265-8de4-49c0-b910-ff29ae34b9c9-kube-api-access-ddpsp\") pod \"redhat-operators-l9hln\" (UID: \"e855a265-8de4-49c0-b910-ff29ae34b9c9\") " pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.508837 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e855a265-8de4-49c0-b910-ff29ae34b9c9-utilities\") pod \"redhat-operators-l9hln\" (UID: \"e855a265-8de4-49c0-b910-ff29ae34b9c9\") " pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.508883 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e855a265-8de4-49c0-b910-ff29ae34b9c9-catalog-content\") pod \"redhat-operators-l9hln\" (UID: \"e855a265-8de4-49c0-b910-ff29ae34b9c9\") " pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:28:40 crc kubenswrapper[4632]: E0313 11:28:40.590905 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.611216 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e855a265-8de4-49c0-b910-ff29ae34b9c9-utilities\") pod \"redhat-operators-l9hln\" (UID: \"e855a265-8de4-49c0-b910-ff29ae34b9c9\") " pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.611282 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e855a265-8de4-49c0-b910-ff29ae34b9c9-catalog-content\") pod \"redhat-operators-l9hln\" (UID: \"e855a265-8de4-49c0-b910-ff29ae34b9c9\") " pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.611419 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddpsp\" (UniqueName: \"kubernetes.io/projected/e855a265-8de4-49c0-b910-ff29ae34b9c9-kube-api-access-ddpsp\") pod \"redhat-operators-l9hln\" (UID: \"e855a265-8de4-49c0-b910-ff29ae34b9c9\") " pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.611827 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e855a265-8de4-49c0-b910-ff29ae34b9c9-utilities\") pod \"redhat-operators-l9hln\" (UID: \"e855a265-8de4-49c0-b910-ff29ae34b9c9\") " pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:28:40 crc kubenswrapper[4632]: 
I0313 11:28:40.611912 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e855a265-8de4-49c0-b910-ff29ae34b9c9-catalog-content\") pod \"redhat-operators-l9hln\" (UID: \"e855a265-8de4-49c0-b910-ff29ae34b9c9\") " pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.635576 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddpsp\" (UniqueName: \"kubernetes.io/projected/e855a265-8de4-49c0-b910-ff29ae34b9c9-kube-api-access-ddpsp\") pod \"redhat-operators-l9hln\" (UID: \"e855a265-8de4-49c0-b910-ff29ae34b9c9\") " pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:28:40 crc kubenswrapper[4632]: I0313 11:28:40.788929 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:28:41 crc kubenswrapper[4632]: I0313 11:28:41.019875 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" exitCode=0 Mar 13 11:28:41 crc kubenswrapper[4632]: I0313 11:28:41.020269 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3"} Mar 13 11:28:41 crc kubenswrapper[4632]: I0313 11:28:41.020304 4632 scope.go:117] "RemoveContainer" containerID="49682358d72adf3dbebb4a70c4dbc847548d4046ae5ef96f55f2ae4dfd58b9f9" Mar 13 11:28:41 crc kubenswrapper[4632]: I0313 11:28:41.020732 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:28:41 crc kubenswrapper[4632]: E0313 11:28:41.020975 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:28:41 crc kubenswrapper[4632]: I0313 11:28:41.353448 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l9hln"] Mar 13 11:28:41 crc kubenswrapper[4632]: W0313 11:28:41.765656 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode855a265_8de4_49c0_b910_ff29ae34b9c9.slice/crio-b8a9e04b6a98018edb581e5ef7c81695b29ef6091ce835092c2938e31a0a2e11 WatchSource:0}: Error finding container b8a9e04b6a98018edb581e5ef7c81695b29ef6091ce835092c2938e31a0a2e11: Status 404 returned error can't find the container with id b8a9e04b6a98018edb581e5ef7c81695b29ef6091ce835092c2938e31a0a2e11 Mar 13 11:28:42 crc kubenswrapper[4632]: I0313 11:28:42.032553 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9hln" event={"ID":"e855a265-8de4-49c0-b910-ff29ae34b9c9","Type":"ContainerStarted","Data":"867f95f5d2082d718d8174c12306f380c336af715d4c9e98f5b40dbfb57626a7"} Mar 13 11:28:42 crc kubenswrapper[4632]: I0313 11:28:42.032605 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-l9hln" event={"ID":"e855a265-8de4-49c0-b910-ff29ae34b9c9","Type":"ContainerStarted","Data":"b8a9e04b6a98018edb581e5ef7c81695b29ef6091ce835092c2938e31a0a2e11"} Mar 13 11:28:43 crc kubenswrapper[4632]: I0313 11:28:43.048748 4632 generic.go:334] "Generic (PLEG): container finished" podID="e855a265-8de4-49c0-b910-ff29ae34b9c9" containerID="867f95f5d2082d718d8174c12306f380c336af715d4c9e98f5b40dbfb57626a7" exitCode=0 Mar 13 11:28:43 crc kubenswrapper[4632]: I0313 11:28:43.049023 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9hln" event={"ID":"e855a265-8de4-49c0-b910-ff29ae34b9c9","Type":"ContainerDied","Data":"867f95f5d2082d718d8174c12306f380c336af715d4c9e98f5b40dbfb57626a7"} Mar 13 11:28:44 crc kubenswrapper[4632]: I0313 11:28:44.068512 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9hln" event={"ID":"e855a265-8de4-49c0-b910-ff29ae34b9c9","Type":"ContainerStarted","Data":"0d7e36ef30c33fd9204b409ec56e01a15539fd48854a29ce545aac9958b99985"} Mar 13 11:28:51 crc kubenswrapper[4632]: I0313 11:28:51.148415 4632 generic.go:334] "Generic (PLEG): container finished" podID="e855a265-8de4-49c0-b910-ff29ae34b9c9" containerID="0d7e36ef30c33fd9204b409ec56e01a15539fd48854a29ce545aac9958b99985" exitCode=0 Mar 13 11:28:51 crc kubenswrapper[4632]: I0313 11:28:51.148489 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9hln" event={"ID":"e855a265-8de4-49c0-b910-ff29ae34b9c9","Type":"ContainerDied","Data":"0d7e36ef30c33fd9204b409ec56e01a15539fd48854a29ce545aac9958b99985"} Mar 13 11:28:53 crc kubenswrapper[4632]: I0313 11:28:53.045245 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:28:53 crc kubenswrapper[4632]: E0313 11:28:53.045776 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:28:53 crc kubenswrapper[4632]: I0313 11:28:53.174139 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9hln" event={"ID":"e855a265-8de4-49c0-b910-ff29ae34b9c9","Type":"ContainerStarted","Data":"1507c5b681588f963d5f2f4b7375480788c56bfd5b05e62a793bca31530d4e2f"} Mar 13 11:28:53 crc kubenswrapper[4632]: I0313 11:28:53.200578 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l9hln" podStartSLOduration=4.627011124 podStartE2EDuration="13.200547102s" podCreationTimestamp="2026-03-13 11:28:40 +0000 UTC" firstStartedPulling="2026-03-13 11:28:43.05072313 +0000 UTC m=+5097.073253263" lastFinishedPulling="2026-03-13 11:28:51.624259098 +0000 UTC m=+5105.646789241" observedRunningTime="2026-03-13 11:28:53.196991834 +0000 UTC m=+5107.219521967" watchObservedRunningTime="2026-03-13 11:28:53.200547102 +0000 UTC m=+5107.223077235" Mar 13 11:29:00 crc kubenswrapper[4632]: I0313 11:29:00.789651 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:29:00 crc kubenswrapper[4632]: I0313 11:29:00.790219 4632 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:29:01 crc kubenswrapper[4632]: I0313 11:29:01.835450 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l9hln" podUID="e855a265-8de4-49c0-b910-ff29ae34b9c9" containerName="registry-server" probeResult="failure" output=< Mar 13 11:29:01 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:29:01 crc kubenswrapper[4632]: > Mar 13 11:29:06 crc kubenswrapper[4632]: I0313 11:29:06.044274 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:29:06 crc kubenswrapper[4632]: E0313 11:29:06.045160 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:29:11 crc kubenswrapper[4632]: I0313 11:29:11.851029 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l9hln" podUID="e855a265-8de4-49c0-b910-ff29ae34b9c9" containerName="registry-server" probeResult="failure" output=< Mar 13 11:29:11 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:29:11 crc kubenswrapper[4632]: > Mar 13 11:29:21 crc kubenswrapper[4632]: I0313 11:29:21.044537 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:29:21 crc kubenswrapper[4632]: E0313 11:29:21.045341 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:29:21 crc kubenswrapper[4632]: I0313 11:29:21.852051 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l9hln" podUID="e855a265-8de4-49c0-b910-ff29ae34b9c9" containerName="registry-server" probeResult="failure" output=< Mar 13 11:29:21 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:29:21 crc kubenswrapper[4632]: > Mar 13 11:29:31 crc kubenswrapper[4632]: I0313 11:29:31.870436 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l9hln" podUID="e855a265-8de4-49c0-b910-ff29ae34b9c9" containerName="registry-server" probeResult="failure" output=< Mar 13 11:29:31 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:29:31 crc kubenswrapper[4632]: > Mar 13 11:29:33 crc kubenswrapper[4632]: I0313 11:29:33.044266 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:29:33 crc kubenswrapper[4632]: E0313 11:29:33.044602 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:29:40 crc kubenswrapper[4632]: I0313 11:29:40.837039 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:29:40 crc kubenswrapper[4632]: I0313 11:29:40.906755 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:29:41 crc kubenswrapper[4632]: I0313 11:29:41.682826 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l9hln"] Mar 13 11:29:42 crc kubenswrapper[4632]: I0313 11:29:42.657234 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l9hln" podUID="e855a265-8de4-49c0-b910-ff29ae34b9c9" containerName="registry-server" containerID="cri-o://1507c5b681588f963d5f2f4b7375480788c56bfd5b05e62a793bca31530d4e2f" gracePeriod=2 Mar 13 11:29:43 crc kubenswrapper[4632]: I0313 11:29:43.674029 4632 generic.go:334] "Generic (PLEG): container finished" podID="e855a265-8de4-49c0-b910-ff29ae34b9c9" containerID="1507c5b681588f963d5f2f4b7375480788c56bfd5b05e62a793bca31530d4e2f" exitCode=0 Mar 13 11:29:43 crc kubenswrapper[4632]: I0313 11:29:43.674253 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9hln" event={"ID":"e855a265-8de4-49c0-b910-ff29ae34b9c9","Type":"ContainerDied","Data":"1507c5b681588f963d5f2f4b7375480788c56bfd5b05e62a793bca31530d4e2f"} Mar 13 11:29:43 crc kubenswrapper[4632]: I0313 11:29:43.985475 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:29:44 crc kubenswrapper[4632]: I0313 11:29:44.130026 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e855a265-8de4-49c0-b910-ff29ae34b9c9-utilities\") pod \"e855a265-8de4-49c0-b910-ff29ae34b9c9\" (UID: \"e855a265-8de4-49c0-b910-ff29ae34b9c9\") " Mar 13 11:29:44 crc kubenswrapper[4632]: I0313 11:29:44.130136 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddpsp\" (UniqueName: \"kubernetes.io/projected/e855a265-8de4-49c0-b910-ff29ae34b9c9-kube-api-access-ddpsp\") pod \"e855a265-8de4-49c0-b910-ff29ae34b9c9\" (UID: \"e855a265-8de4-49c0-b910-ff29ae34b9c9\") " Mar 13 11:29:44 crc kubenswrapper[4632]: I0313 11:29:44.130340 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e855a265-8de4-49c0-b910-ff29ae34b9c9-catalog-content\") pod \"e855a265-8de4-49c0-b910-ff29ae34b9c9\" (UID: \"e855a265-8de4-49c0-b910-ff29ae34b9c9\") " Mar 13 11:29:44 crc kubenswrapper[4632]: I0313 11:29:44.133044 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e855a265-8de4-49c0-b910-ff29ae34b9c9-utilities" (OuterVolumeSpecName: "utilities") pod "e855a265-8de4-49c0-b910-ff29ae34b9c9" (UID: "e855a265-8de4-49c0-b910-ff29ae34b9c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:29:44 crc kubenswrapper[4632]: I0313 11:29:44.221523 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e855a265-8de4-49c0-b910-ff29ae34b9c9-kube-api-access-ddpsp" (OuterVolumeSpecName: "kube-api-access-ddpsp") pod "e855a265-8de4-49c0-b910-ff29ae34b9c9" (UID: "e855a265-8de4-49c0-b910-ff29ae34b9c9"). InnerVolumeSpecName "kube-api-access-ddpsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:29:44 crc kubenswrapper[4632]: I0313 11:29:44.237872 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e855a265-8de4-49c0-b910-ff29ae34b9c9-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:29:44 crc kubenswrapper[4632]: I0313 11:29:44.238112 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddpsp\" (UniqueName: \"kubernetes.io/projected/e855a265-8de4-49c0-b910-ff29ae34b9c9-kube-api-access-ddpsp\") on node \"crc\" DevicePath \"\"" Mar 13 11:29:44 crc kubenswrapper[4632]: I0313 11:29:44.318080 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e855a265-8de4-49c0-b910-ff29ae34b9c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e855a265-8de4-49c0-b910-ff29ae34b9c9" (UID: "e855a265-8de4-49c0-b910-ff29ae34b9c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:29:44 crc kubenswrapper[4632]: I0313 11:29:44.340117 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e855a265-8de4-49c0-b910-ff29ae34b9c9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:29:44 crc kubenswrapper[4632]: I0313 11:29:44.687407 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9hln" event={"ID":"e855a265-8de4-49c0-b910-ff29ae34b9c9","Type":"ContainerDied","Data":"b8a9e04b6a98018edb581e5ef7c81695b29ef6091ce835092c2938e31a0a2e11"} Mar 13 11:29:44 crc kubenswrapper[4632]: I0313 11:29:44.687451 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l9hln" Mar 13 11:29:44 crc kubenswrapper[4632]: I0313 11:29:44.687474 4632 scope.go:117] "RemoveContainer" containerID="1507c5b681588f963d5f2f4b7375480788c56bfd5b05e62a793bca31530d4e2f" Mar 13 11:29:44 crc kubenswrapper[4632]: I0313 11:29:44.720093 4632 scope.go:117] "RemoveContainer" containerID="0d7e36ef30c33fd9204b409ec56e01a15539fd48854a29ce545aac9958b99985" Mar 13 11:29:44 crc kubenswrapper[4632]: I0313 11:29:44.733168 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l9hln"] Mar 13 11:29:44 crc kubenswrapper[4632]: I0313 11:29:44.754054 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l9hln"] Mar 13 11:29:44 crc kubenswrapper[4632]: I0313 11:29:44.760070 4632 scope.go:117] "RemoveContainer" containerID="867f95f5d2082d718d8174c12306f380c336af715d4c9e98f5b40dbfb57626a7" Mar 13 11:29:46 crc kubenswrapper[4632]: I0313 11:29:46.113493 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e855a265-8de4-49c0-b910-ff29ae34b9c9" path="/var/lib/kubelet/pods/e855a265-8de4-49c0-b910-ff29ae34b9c9/volumes" Mar 13 11:29:47 crc kubenswrapper[4632]: I0313 11:29:47.044274 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:29:47 crc kubenswrapper[4632]: E0313 11:29:47.045055 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:29:58 crc kubenswrapper[4632]: I0313 11:29:58.051514 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:29:58 crc kubenswrapper[4632]: E0313 11:29:58.052851 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.191295 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556690-d6xbb"] Mar 13 11:30:00 crc kubenswrapper[4632]: E0313 11:30:00.193782 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e855a265-8de4-49c0-b910-ff29ae34b9c9" containerName="extract-utilities" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.193814 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="e855a265-8de4-49c0-b910-ff29ae34b9c9" containerName="extract-utilities" Mar 13 11:30:00 crc kubenswrapper[4632]: E0313 11:30:00.193837 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e855a265-8de4-49c0-b910-ff29ae34b9c9" containerName="registry-server" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.193844 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="e855a265-8de4-49c0-b910-ff29ae34b9c9" containerName="registry-server" Mar 13 11:30:00 crc 
kubenswrapper[4632]: E0313 11:30:00.193873 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e855a265-8de4-49c0-b910-ff29ae34b9c9" containerName="extract-content" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.193880 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="e855a265-8de4-49c0-b910-ff29ae34b9c9" containerName="extract-content" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.194294 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="e855a265-8de4-49c0-b910-ff29ae34b9c9" containerName="registry-server" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.202881 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh"] Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.203240 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556690-d6xbb" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.205190 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.211510 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.211675 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.211799 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.216976 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556690-d6xbb"] Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.225339 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.245735 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh"] Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.249060 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.304040 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dc72a85-cdb5-4b11-9e0a-158d269edf96-secret-volume\") pod \"collect-profiles-29556690-p82jh\" (UID: \"9dc72a85-cdb5-4b11-9e0a-158d269edf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.304131 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jclp2\" (UniqueName: \"kubernetes.io/projected/9dc72a85-cdb5-4b11-9e0a-158d269edf96-kube-api-access-jclp2\") pod \"collect-profiles-29556690-p82jh\" (UID: \"9dc72a85-cdb5-4b11-9e0a-158d269edf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.304165 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cqksf\" (UniqueName: \"kubernetes.io/projected/d7194752-8651-4e1b-8973-1f821bf23755-kube-api-access-cqksf\") pod \"auto-csr-approver-29556690-d6xbb\" (UID: \"d7194752-8651-4e1b-8973-1f821bf23755\") " pod="openshift-infra/auto-csr-approver-29556690-d6xbb" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.304207 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dc72a85-cdb5-4b11-9e0a-158d269edf96-config-volume\") pod \"collect-profiles-29556690-p82jh\" (UID: \"9dc72a85-cdb5-4b11-9e0a-158d269edf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.406150 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqksf\" (UniqueName: \"kubernetes.io/projected/d7194752-8651-4e1b-8973-1f821bf23755-kube-api-access-cqksf\") pod \"auto-csr-approver-29556690-d6xbb\" (UID: \"d7194752-8651-4e1b-8973-1f821bf23755\") " pod="openshift-infra/auto-csr-approver-29556690-d6xbb" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.406286 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dc72a85-cdb5-4b11-9e0a-158d269edf96-config-volume\") pod \"collect-profiles-29556690-p82jh\" (UID: \"9dc72a85-cdb5-4b11-9e0a-158d269edf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.406410 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dc72a85-cdb5-4b11-9e0a-158d269edf96-secret-volume\") pod \"collect-profiles-29556690-p82jh\" (UID: \"9dc72a85-cdb5-4b11-9e0a-158d269edf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.406497 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jclp2\" (UniqueName: \"kubernetes.io/projected/9dc72a85-cdb5-4b11-9e0a-158d269edf96-kube-api-access-jclp2\") pod \"collect-profiles-29556690-p82jh\" (UID: \"9dc72a85-cdb5-4b11-9e0a-158d269edf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.408005 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dc72a85-cdb5-4b11-9e0a-158d269edf96-config-volume\") pod \"collect-profiles-29556690-p82jh\" (UID: \"9dc72a85-cdb5-4b11-9e0a-158d269edf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.430211 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqksf\" (UniqueName: \"kubernetes.io/projected/d7194752-8651-4e1b-8973-1f821bf23755-kube-api-access-cqksf\") pod \"auto-csr-approver-29556690-d6xbb\" (UID: \"d7194752-8651-4e1b-8973-1f821bf23755\") " pod="openshift-infra/auto-csr-approver-29556690-d6xbb" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.430736 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dc72a85-cdb5-4b11-9e0a-158d269edf96-secret-volume\") pod \"collect-profiles-29556690-p82jh\" (UID: \"9dc72a85-cdb5-4b11-9e0a-158d269edf96\") 
" pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.442133 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jclp2\" (UniqueName: \"kubernetes.io/projected/9dc72a85-cdb5-4b11-9e0a-158d269edf96-kube-api-access-jclp2\") pod \"collect-profiles-29556690-p82jh\" (UID: \"9dc72a85-cdb5-4b11-9e0a-158d269edf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.546152 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556690-d6xbb" Mar 13 11:30:00 crc kubenswrapper[4632]: I0313 11:30:00.566102 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" Mar 13 11:30:01 crc kubenswrapper[4632]: I0313 11:30:01.100712 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556690-d6xbb"] Mar 13 11:30:01 crc kubenswrapper[4632]: W0313 11:30:01.107236 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7194752_8651_4e1b_8973_1f821bf23755.slice/crio-a3de5a9af6c5ad84f4ba96edda904272f20626a70fd4e02596d6dc30b9c18f6b WatchSource:0}: Error finding container a3de5a9af6c5ad84f4ba96edda904272f20626a70fd4e02596d6dc30b9c18f6b: Status 404 returned error can't find the container with id a3de5a9af6c5ad84f4ba96edda904272f20626a70fd4e02596d6dc30b9c18f6b Mar 13 11:30:01 crc kubenswrapper[4632]: I0313 11:30:01.214755 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh"] Mar 13 11:30:01 crc kubenswrapper[4632]: W0313 11:30:01.222631 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dc72a85_cdb5_4b11_9e0a_158d269edf96.slice/crio-e633c3a60e4e59ec14ddf042913c9d08e67dd9113f7b4b60d8100e72e69b8565 WatchSource:0}: Error finding container e633c3a60e4e59ec14ddf042913c9d08e67dd9113f7b4b60d8100e72e69b8565: Status 404 returned error can't find the container with id e633c3a60e4e59ec14ddf042913c9d08e67dd9113f7b4b60d8100e72e69b8565 Mar 13 11:30:01 crc kubenswrapper[4632]: I0313 11:30:01.873141 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556690-d6xbb" event={"ID":"d7194752-8651-4e1b-8973-1f821bf23755","Type":"ContainerStarted","Data":"a3de5a9af6c5ad84f4ba96edda904272f20626a70fd4e02596d6dc30b9c18f6b"} Mar 13 11:30:01 crc kubenswrapper[4632]: I0313 11:30:01.874845 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" event={"ID":"9dc72a85-cdb5-4b11-9e0a-158d269edf96","Type":"ContainerStarted","Data":"526c0b7d143109242f29250c0cffd4a40f383eaf78da9d0786f09bf0aa0eccb3"} Mar 13 11:30:01 crc kubenswrapper[4632]: I0313 11:30:01.874933 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" event={"ID":"9dc72a85-cdb5-4b11-9e0a-158d269edf96","Type":"ContainerStarted","Data":"e633c3a60e4e59ec14ddf042913c9d08e67dd9113f7b4b60d8100e72e69b8565"} Mar 13 11:30:01 crc kubenswrapper[4632]: I0313 11:30:01.900354 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" podStartSLOduration=1.90033506 podStartE2EDuration="1.90033506s" podCreationTimestamp="2026-03-13 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:30:01.892367503 +0000 UTC m=+5175.914897646" watchObservedRunningTime="2026-03-13 11:30:01.90033506 +0000 UTC m=+5175.922865193" Mar 13 11:30:02 crc kubenswrapper[4632]: I0313 11:30:02.886461 4632 generic.go:334] "Generic (PLEG): container finished" podID="9dc72a85-cdb5-4b11-9e0a-158d269edf96" containerID="526c0b7d143109242f29250c0cffd4a40f383eaf78da9d0786f09bf0aa0eccb3" exitCode=0 Mar 13 11:30:02 crc kubenswrapper[4632]: I0313 11:30:02.886550 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" event={"ID":"9dc72a85-cdb5-4b11-9e0a-158d269edf96","Type":"ContainerDied","Data":"526c0b7d143109242f29250c0cffd4a40f383eaf78da9d0786f09bf0aa0eccb3"} Mar 13 11:30:04 crc kubenswrapper[4632]: I0313 11:30:04.291965 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" Mar 13 11:30:04 crc kubenswrapper[4632]: I0313 11:30:04.391055 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dc72a85-cdb5-4b11-9e0a-158d269edf96-secret-volume\") pod \"9dc72a85-cdb5-4b11-9e0a-158d269edf96\" (UID: \"9dc72a85-cdb5-4b11-9e0a-158d269edf96\") " Mar 13 11:30:04 crc kubenswrapper[4632]: I0313 11:30:04.391160 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dc72a85-cdb5-4b11-9e0a-158d269edf96-config-volume\") pod \"9dc72a85-cdb5-4b11-9e0a-158d269edf96\" (UID: \"9dc72a85-cdb5-4b11-9e0a-158d269edf96\") " Mar 13 11:30:04 crc kubenswrapper[4632]: I0313 11:30:04.391448 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jclp2\" (UniqueName: \"kubernetes.io/projected/9dc72a85-cdb5-4b11-9e0a-158d269edf96-kube-api-access-jclp2\") pod \"9dc72a85-cdb5-4b11-9e0a-158d269edf96\" (UID: \"9dc72a85-cdb5-4b11-9e0a-158d269edf96\") " Mar 13 11:30:04 crc kubenswrapper[4632]: I0313 11:30:04.391788 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dc72a85-cdb5-4b11-9e0a-158d269edf96-config-volume" (OuterVolumeSpecName: "config-volume") pod "9dc72a85-cdb5-4b11-9e0a-158d269edf96" (UID: "9dc72a85-cdb5-4b11-9e0a-158d269edf96"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:30:04 crc kubenswrapper[4632]: I0313 11:30:04.392518 4632 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dc72a85-cdb5-4b11-9e0a-158d269edf96-config-volume\") on node \"crc\" DevicePath \"\"" Mar 13 11:30:04 crc kubenswrapper[4632]: I0313 11:30:04.397610 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc72a85-cdb5-4b11-9e0a-158d269edf96-kube-api-access-jclp2" (OuterVolumeSpecName: "kube-api-access-jclp2") pod "9dc72a85-cdb5-4b11-9e0a-158d269edf96" (UID: "9dc72a85-cdb5-4b11-9e0a-158d269edf96"). InnerVolumeSpecName "kube-api-access-jclp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:30:04 crc kubenswrapper[4632]: I0313 11:30:04.398137 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc72a85-cdb5-4b11-9e0a-158d269edf96-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9dc72a85-cdb5-4b11-9e0a-158d269edf96" (UID: "9dc72a85-cdb5-4b11-9e0a-158d269edf96"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:30:04 crc kubenswrapper[4632]: I0313 11:30:04.494404 4632 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dc72a85-cdb5-4b11-9e0a-158d269edf96-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 13 11:30:04 crc kubenswrapper[4632]: I0313 11:30:04.494435 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jclp2\" (UniqueName: \"kubernetes.io/projected/9dc72a85-cdb5-4b11-9e0a-158d269edf96-kube-api-access-jclp2\") on node \"crc\" DevicePath \"\"" Mar 13 11:30:04 crc kubenswrapper[4632]: I0313 11:30:04.910047 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556690-d6xbb" event={"ID":"d7194752-8651-4e1b-8973-1f821bf23755","Type":"ContainerStarted","Data":"a14245a1819d4e34ca7541b00cd28e096f96a6ad8f1b997d5a64006b8dd2f7c5"} Mar 13 11:30:04 crc kubenswrapper[4632]: I0313 11:30:04.917145 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" event={"ID":"9dc72a85-cdb5-4b11-9e0a-158d269edf96","Type":"ContainerDied","Data":"e633c3a60e4e59ec14ddf042913c9d08e67dd9113f7b4b60d8100e72e69b8565"} Mar 13 11:30:04 crc kubenswrapper[4632]: I0313 11:30:04.917182 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e633c3a60e4e59ec14ddf042913c9d08e67dd9113f7b4b60d8100e72e69b8565" Mar 13 11:30:04 crc kubenswrapper[4632]: I0313 11:30:04.917234 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh" Mar 13 11:30:04 crc kubenswrapper[4632]: I0313 11:30:04.937602 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556690-d6xbb" podStartSLOduration=2.906815343 podStartE2EDuration="4.937578749s" podCreationTimestamp="2026-03-13 11:30:00 +0000 UTC" firstStartedPulling="2026-03-13 11:30:01.111020509 +0000 UTC m=+5175.133550652" lastFinishedPulling="2026-03-13 11:30:03.141783925 +0000 UTC m=+5177.164314058" observedRunningTime="2026-03-13 11:30:04.92872052 +0000 UTC m=+5178.951250653" watchObservedRunningTime="2026-03-13 11:30:04.937578749 +0000 UTC m=+5178.960108882" Mar 13 11:30:05 crc kubenswrapper[4632]: I0313 11:30:05.370352 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8"] Mar 13 11:30:05 crc kubenswrapper[4632]: I0313 11:30:05.386610 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556645-4btb8"] Mar 13 11:30:05 crc kubenswrapper[4632]: I0313 11:30:05.929786 4632 generic.go:334] "Generic (PLEG): container finished" podID="d7194752-8651-4e1b-8973-1f821bf23755" containerID="a14245a1819d4e34ca7541b00cd28e096f96a6ad8f1b997d5a64006b8dd2f7c5" exitCode=0 Mar 13 11:30:05 crc kubenswrapper[4632]: I0313 11:30:05.929893 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556690-d6xbb" event={"ID":"d7194752-8651-4e1b-8973-1f821bf23755","Type":"ContainerDied","Data":"a14245a1819d4e34ca7541b00cd28e096f96a6ad8f1b997d5a64006b8dd2f7c5"} Mar 13 11:30:06 crc kubenswrapper[4632]: I0313 11:30:06.079897 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cf52265-21b3-40f0-a2f5-d379c03cc045" path="/var/lib/kubelet/pods/8cf52265-21b3-40f0-a2f5-d379c03cc045/volumes" Mar 13 11:30:07 crc kubenswrapper[4632]: I0313 11:30:07.372607 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556690-d6xbb" Mar 13 11:30:07 crc kubenswrapper[4632]: I0313 11:30:07.454485 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqksf\" (UniqueName: \"kubernetes.io/projected/d7194752-8651-4e1b-8973-1f821bf23755-kube-api-access-cqksf\") pod \"d7194752-8651-4e1b-8973-1f821bf23755\" (UID: \"d7194752-8651-4e1b-8973-1f821bf23755\") " Mar 13 11:30:07 crc kubenswrapper[4632]: I0313 11:30:07.467209 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7194752-8651-4e1b-8973-1f821bf23755-kube-api-access-cqksf" (OuterVolumeSpecName: "kube-api-access-cqksf") pod "d7194752-8651-4e1b-8973-1f821bf23755" (UID: "d7194752-8651-4e1b-8973-1f821bf23755"). InnerVolumeSpecName "kube-api-access-cqksf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:30:07 crc kubenswrapper[4632]: I0313 11:30:07.557260 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqksf\" (UniqueName: \"kubernetes.io/projected/d7194752-8651-4e1b-8973-1f821bf23755-kube-api-access-cqksf\") on node \"crc\" DevicePath \"\"" Mar 13 11:30:07 crc kubenswrapper[4632]: I0313 11:30:07.950093 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556690-d6xbb" event={"ID":"d7194752-8651-4e1b-8973-1f821bf23755","Type":"ContainerDied","Data":"a3de5a9af6c5ad84f4ba96edda904272f20626a70fd4e02596d6dc30b9c18f6b"} Mar 13 11:30:07 crc kubenswrapper[4632]: I0313 11:30:07.950191 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556690-d6xbb" Mar 13 11:30:07 crc kubenswrapper[4632]: I0313 11:30:07.950928 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3de5a9af6c5ad84f4ba96edda904272f20626a70fd4e02596d6dc30b9c18f6b" Mar 13 11:30:08 crc kubenswrapper[4632]: I0313 11:30:08.005326 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556684-bd9st"] Mar 13 11:30:08 crc kubenswrapper[4632]: I0313 11:30:08.022139 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556684-bd9st"] Mar 13 11:30:08 crc kubenswrapper[4632]: I0313 11:30:08.058068 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb45f1ce-58e0-4f55-afd6-2e14db5f24ca" path="/var/lib/kubelet/pods/fb45f1ce-58e0-4f55-afd6-2e14db5f24ca/volumes" Mar 13 11:30:13 crc kubenswrapper[4632]: I0313 11:30:13.045746 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:30:13 crc kubenswrapper[4632]: E0313 11:30:13.047215 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:30:20 crc kubenswrapper[4632]: I0313 11:30:20.431444 4632 scope.go:117] "RemoveContainer" containerID="8e20db958c001216e89a657171c617c2e4d78b297bcd654a9af9c2d8d32242ac" Mar 13 11:30:20 crc kubenswrapper[4632]: I0313 11:30:20.552116 4632 scope.go:117] "RemoveContainer" containerID="2af97c9efffc6f3dc7413dcff6c97889a640ef442506af8ad264876a675427dc" Mar 13 11:30:27 crc kubenswrapper[4632]: I0313 11:30:27.044908 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:30:27 crc kubenswrapper[4632]: E0313 11:30:27.047064 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:30:39 crc kubenswrapper[4632]: I0313 11:30:39.043877 4632 scope.go:117] "RemoveContainer" 
containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:30:39 crc kubenswrapper[4632]: E0313 11:30:39.044642 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:30:53 crc kubenswrapper[4632]: I0313 11:30:53.044145 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:30:53 crc kubenswrapper[4632]: E0313 11:30:53.045245 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:31:05 crc kubenswrapper[4632]: I0313 11:31:05.044679 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:31:05 crc kubenswrapper[4632]: E0313 11:31:05.045531 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:31:17 crc kubenswrapper[4632]: I0313 11:31:17.045212 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:31:17 crc kubenswrapper[4632]: E0313 11:31:17.046488 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:31:31 crc kubenswrapper[4632]: I0313 11:31:31.044684 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:31:31 crc kubenswrapper[4632]: E0313 11:31:31.045436 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:31:43 crc kubenswrapper[4632]: I0313 11:31:43.044391 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:31:43 crc kubenswrapper[4632]: E0313 11:31:43.046709 4632 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:31:57 crc kubenswrapper[4632]: I0313 11:31:57.044085 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:31:57 crc kubenswrapper[4632]: E0313 11:31:57.045794 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:32:00 crc kubenswrapper[4632]: I0313 11:32:00.155404 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556692-4kd7v"] Mar 13 11:32:00 crc kubenswrapper[4632]: E0313 11:32:00.156147 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc72a85-cdb5-4b11-9e0a-158d269edf96" containerName="collect-profiles" Mar 13 11:32:00 crc kubenswrapper[4632]: I0313 11:32:00.156165 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc72a85-cdb5-4b11-9e0a-158d269edf96" containerName="collect-profiles" Mar 13 11:32:00 crc kubenswrapper[4632]: E0313 11:32:00.156206 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7194752-8651-4e1b-8973-1f821bf23755" containerName="oc" Mar 13 11:32:00 crc kubenswrapper[4632]: I0313 11:32:00.156214 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7194752-8651-4e1b-8973-1f821bf23755" containerName="oc" Mar 13 11:32:00 crc kubenswrapper[4632]: I0313 11:32:00.156427 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc72a85-cdb5-4b11-9e0a-158d269edf96" containerName="collect-profiles" Mar 13 11:32:00 crc kubenswrapper[4632]: I0313 11:32:00.156457 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7194752-8651-4e1b-8973-1f821bf23755" containerName="oc" Mar 13 11:32:00 crc kubenswrapper[4632]: I0313 11:32:00.157212 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556692-4kd7v" Mar 13 11:32:00 crc kubenswrapper[4632]: I0313 11:32:00.162519 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:32:00 crc kubenswrapper[4632]: I0313 11:32:00.163081 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:32:00 crc kubenswrapper[4632]: I0313 11:32:00.163409 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:32:00 crc kubenswrapper[4632]: I0313 11:32:00.165748 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556692-4kd7v"] Mar 13 11:32:00 crc kubenswrapper[4632]: I0313 11:32:00.319982 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2lwc\" (UniqueName: \"kubernetes.io/projected/77ae44b0-e101-4d21-87e5-9e213e024e9e-kube-api-access-v2lwc\") pod \"auto-csr-approver-29556692-4kd7v\" (UID: \"77ae44b0-e101-4d21-87e5-9e213e024e9e\") " pod="openshift-infra/auto-csr-approver-29556692-4kd7v" Mar 13 11:32:00 crc kubenswrapper[4632]: I0313 11:32:00.421868 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2lwc\" (UniqueName: \"kubernetes.io/projected/77ae44b0-e101-4d21-87e5-9e213e024e9e-kube-api-access-v2lwc\") pod \"auto-csr-approver-29556692-4kd7v\" (UID: \"77ae44b0-e101-4d21-87e5-9e213e024e9e\") " pod="openshift-infra/auto-csr-approver-29556692-4kd7v" Mar 13 11:32:00 crc kubenswrapper[4632]: I0313 11:32:00.452867 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2lwc\" (UniqueName: \"kubernetes.io/projected/77ae44b0-e101-4d21-87e5-9e213e024e9e-kube-api-access-v2lwc\") pod \"auto-csr-approver-29556692-4kd7v\" (UID: \"77ae44b0-e101-4d21-87e5-9e213e024e9e\") " pod="openshift-infra/auto-csr-approver-29556692-4kd7v" Mar 13 11:32:00 crc kubenswrapper[4632]: I0313 11:32:00.481856 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556692-4kd7v" Mar 13 11:32:00 crc kubenswrapper[4632]: I0313 11:32:00.958795 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556692-4kd7v"] Mar 13 11:32:00 crc kubenswrapper[4632]: I0313 11:32:00.967355 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 11:32:01 crc kubenswrapper[4632]: I0313 11:32:01.102605 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556692-4kd7v" event={"ID":"77ae44b0-e101-4d21-87e5-9e213e024e9e","Type":"ContainerStarted","Data":"daa9f46f0da478ec17ba30a85b783ac0ac768a525ff861a88de96fe1e24a934f"} Mar 13 11:32:03 crc kubenswrapper[4632]: I0313 11:32:03.121039 4632 generic.go:334] "Generic (PLEG): container finished" podID="77ae44b0-e101-4d21-87e5-9e213e024e9e" containerID="b8f22a62b885c530c5401c31b24e17ea8bcf63d6debf02e44b4f01a4ab4c1102" exitCode=0 Mar 13 11:32:03 crc kubenswrapper[4632]: I0313 11:32:03.121479 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556692-4kd7v" event={"ID":"77ae44b0-e101-4d21-87e5-9e213e024e9e","Type":"ContainerDied","Data":"b8f22a62b885c530c5401c31b24e17ea8bcf63d6debf02e44b4f01a4ab4c1102"} Mar 13 11:32:04 crc kubenswrapper[4632]: I0313 11:32:04.575209 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556692-4kd7v" Mar 13 11:32:04 crc kubenswrapper[4632]: I0313 11:32:04.721429 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2lwc\" (UniqueName: \"kubernetes.io/projected/77ae44b0-e101-4d21-87e5-9e213e024e9e-kube-api-access-v2lwc\") pod \"77ae44b0-e101-4d21-87e5-9e213e024e9e\" (UID: \"77ae44b0-e101-4d21-87e5-9e213e024e9e\") " Mar 13 11:32:04 crc kubenswrapper[4632]: I0313 11:32:04.730190 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77ae44b0-e101-4d21-87e5-9e213e024e9e-kube-api-access-v2lwc" (OuterVolumeSpecName: "kube-api-access-v2lwc") pod "77ae44b0-e101-4d21-87e5-9e213e024e9e" (UID: "77ae44b0-e101-4d21-87e5-9e213e024e9e"). InnerVolumeSpecName "kube-api-access-v2lwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:32:04 crc kubenswrapper[4632]: I0313 11:32:04.824390 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2lwc\" (UniqueName: \"kubernetes.io/projected/77ae44b0-e101-4d21-87e5-9e213e024e9e-kube-api-access-v2lwc\") on node \"crc\" DevicePath \"\"" Mar 13 11:32:05 crc kubenswrapper[4632]: I0313 11:32:05.143833 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556692-4kd7v" event={"ID":"77ae44b0-e101-4d21-87e5-9e213e024e9e","Type":"ContainerDied","Data":"daa9f46f0da478ec17ba30a85b783ac0ac768a525ff861a88de96fe1e24a934f"} Mar 13 11:32:05 crc kubenswrapper[4632]: I0313 11:32:05.144195 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daa9f46f0da478ec17ba30a85b783ac0ac768a525ff861a88de96fe1e24a934f" Mar 13 11:32:05 crc kubenswrapper[4632]: I0313 11:32:05.143904 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556692-4kd7v" Mar 13 11:32:05 crc kubenswrapper[4632]: I0313 11:32:05.691056 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556686-v9c2p"] Mar 13 11:32:05 crc kubenswrapper[4632]: I0313 11:32:05.704263 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556686-v9c2p"] Mar 13 11:32:06 crc kubenswrapper[4632]: I0313 11:32:06.064188 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78358d6e-af24-4da5-8c77-9453e6228cda" path="/var/lib/kubelet/pods/78358d6e-af24-4da5-8c77-9453e6228cda/volumes" Mar 13 11:32:08 crc kubenswrapper[4632]: I0313 11:32:08.046786 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:32:08 crc kubenswrapper[4632]: E0313 11:32:08.047405 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:32:20 crc kubenswrapper[4632]: I0313 11:32:20.731294 4632 scope.go:117] "RemoveContainer" containerID="0306bee576d03326b01d2c08c76cf2909394a8ec9e729a13ca12d86ebb721532" Mar 13 11:32:23 crc kubenswrapper[4632]: I0313 11:32:23.044834 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:32:23 crc kubenswrapper[4632]: E0313 11:32:23.045885 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:32:38 crc kubenswrapper[4632]: I0313 11:32:38.050781 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:32:38 crc kubenswrapper[4632]: E0313 11:32:38.051530 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:32:52 crc kubenswrapper[4632]: I0313 11:32:52.044368 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:32:52 crc kubenswrapper[4632]: E0313 11:32:52.045206 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 
11:33:04 crc kubenswrapper[4632]: I0313 11:33:04.045589 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:33:04 crc kubenswrapper[4632]: E0313 11:33:04.046698 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:33:15 crc kubenswrapper[4632]: I0313 11:33:15.044027 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:33:15 crc kubenswrapper[4632]: E0313 11:33:15.044660 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:33:30 crc kubenswrapper[4632]: I0313 11:33:30.044108 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:33:30 crc kubenswrapper[4632]: E0313 11:33:30.045921 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:33:42 crc kubenswrapper[4632]: I0313 11:33:42.045683 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3" Mar 13 11:33:43 crc kubenswrapper[4632]: I0313 11:33:43.102562 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"b28a3031014e23a161560bdf4de3a19a21d26729102cf99acd465c2bd90c33f9"} Mar 13 11:34:00 crc kubenswrapper[4632]: I0313 11:34:00.147281 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556694-jmf5t"] Mar 13 11:34:00 crc kubenswrapper[4632]: E0313 11:34:00.149490 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ae44b0-e101-4d21-87e5-9e213e024e9e" containerName="oc" Mar 13 11:34:00 crc kubenswrapper[4632]: I0313 11:34:00.149664 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ae44b0-e101-4d21-87e5-9e213e024e9e" containerName="oc" Mar 13 11:34:00 crc kubenswrapper[4632]: I0313 11:34:00.150251 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="77ae44b0-e101-4d21-87e5-9e213e024e9e" containerName="oc" Mar 13 11:34:00 crc kubenswrapper[4632]: I0313 11:34:00.151880 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556694-jmf5t" Mar 13 11:34:00 crc kubenswrapper[4632]: I0313 11:34:00.158555 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:34:00 crc kubenswrapper[4632]: I0313 11:34:00.159262 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:34:00 crc kubenswrapper[4632]: I0313 11:34:00.159277 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:34:00 crc kubenswrapper[4632]: I0313 11:34:00.168335 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556694-jmf5t"] Mar 13 11:34:00 crc kubenswrapper[4632]: I0313 11:34:00.217856 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5tl6\" (UniqueName: \"kubernetes.io/projected/266f7f6e-de91-4256-8605-0a71adef85fc-kube-api-access-l5tl6\") pod \"auto-csr-approver-29556694-jmf5t\" (UID: \"266f7f6e-de91-4256-8605-0a71adef85fc\") " pod="openshift-infra/auto-csr-approver-29556694-jmf5t" Mar 13 11:34:00 crc kubenswrapper[4632]: I0313 11:34:00.318855 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5tl6\" (UniqueName: \"kubernetes.io/projected/266f7f6e-de91-4256-8605-0a71adef85fc-kube-api-access-l5tl6\") pod \"auto-csr-approver-29556694-jmf5t\" (UID: \"266f7f6e-de91-4256-8605-0a71adef85fc\") " pod="openshift-infra/auto-csr-approver-29556694-jmf5t" Mar 13 11:34:00 crc kubenswrapper[4632]: I0313 11:34:00.340718 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5tl6\" (UniqueName: \"kubernetes.io/projected/266f7f6e-de91-4256-8605-0a71adef85fc-kube-api-access-l5tl6\") pod \"auto-csr-approver-29556694-jmf5t\" (UID: \"266f7f6e-de91-4256-8605-0a71adef85fc\") " pod="openshift-infra/auto-csr-approver-29556694-jmf5t" Mar 13 11:34:00 crc kubenswrapper[4632]: I0313 11:34:00.477138 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556694-jmf5t" Mar 13 11:34:01 crc kubenswrapper[4632]: I0313 11:34:01.616488 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556694-jmf5t"] Mar 13 11:34:02 crc kubenswrapper[4632]: I0313 11:34:02.282109 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556694-jmf5t" event={"ID":"266f7f6e-de91-4256-8605-0a71adef85fc","Type":"ContainerStarted","Data":"6159272a14d87ccbf5e486985e550d0df0ab2f55be3e4a304e8ba69362c2f3cf"} Mar 13 11:34:03 crc kubenswrapper[4632]: I0313 11:34:03.292113 4632 generic.go:334] "Generic (PLEG): container finished" podID="266f7f6e-de91-4256-8605-0a71adef85fc" containerID="dd29187096f712bf2f18fa46086683fcb900aea6c3d89212b78286e73075a17b" exitCode=0 Mar 13 11:34:03 crc kubenswrapper[4632]: I0313 11:34:03.292226 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556694-jmf5t" event={"ID":"266f7f6e-de91-4256-8605-0a71adef85fc","Type":"ContainerDied","Data":"dd29187096f712bf2f18fa46086683fcb900aea6c3d89212b78286e73075a17b"} Mar 13 11:34:04 crc kubenswrapper[4632]: I0313 11:34:04.717281 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556694-jmf5t" Mar 13 11:34:04 crc kubenswrapper[4632]: I0313 11:34:04.824244 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5tl6\" (UniqueName: \"kubernetes.io/projected/266f7f6e-de91-4256-8605-0a71adef85fc-kube-api-access-l5tl6\") pod \"266f7f6e-de91-4256-8605-0a71adef85fc\" (UID: \"266f7f6e-de91-4256-8605-0a71adef85fc\") " Mar 13 11:34:04 crc kubenswrapper[4632]: I0313 11:34:04.833288 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/266f7f6e-de91-4256-8605-0a71adef85fc-kube-api-access-l5tl6" (OuterVolumeSpecName: "kube-api-access-l5tl6") pod "266f7f6e-de91-4256-8605-0a71adef85fc" (UID: "266f7f6e-de91-4256-8605-0a71adef85fc"). InnerVolumeSpecName "kube-api-access-l5tl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:34:04 crc kubenswrapper[4632]: I0313 11:34:04.926621 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5tl6\" (UniqueName: \"kubernetes.io/projected/266f7f6e-de91-4256-8605-0a71adef85fc-kube-api-access-l5tl6\") on node \"crc\" DevicePath \"\"" Mar 13 11:34:05 crc kubenswrapper[4632]: I0313 11:34:05.311021 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556694-jmf5t" event={"ID":"266f7f6e-de91-4256-8605-0a71adef85fc","Type":"ContainerDied","Data":"6159272a14d87ccbf5e486985e550d0df0ab2f55be3e4a304e8ba69362c2f3cf"} Mar 13 11:34:05 crc kubenswrapper[4632]: I0313 11:34:05.311086 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6159272a14d87ccbf5e486985e550d0df0ab2f55be3e4a304e8ba69362c2f3cf" Mar 13 11:34:05 crc kubenswrapper[4632]: I0313 11:34:05.311159 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556694-jmf5t" Mar 13 11:34:05 crc kubenswrapper[4632]: I0313 11:34:05.796812 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556688-btp9b"] Mar 13 11:34:05 crc kubenswrapper[4632]: I0313 11:34:05.807478 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556688-btp9b"] Mar 13 11:34:06 crc kubenswrapper[4632]: I0313 11:34:06.057358 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99bcc88d-9858-4d4f-97e5-68e185f06401" path="/var/lib/kubelet/pods/99bcc88d-9858-4d4f-97e5-68e185f06401/volumes" Mar 13 11:34:20 crc kubenswrapper[4632]: I0313 11:34:20.875772 4632 scope.go:117] "RemoveContainer" containerID="419020d6577499249f09db214d08f7500440b2163407ca9b721f361af1ab72f7" Mar 13 11:36:00 crc kubenswrapper[4632]: I0313 11:36:00.151652 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556696-6dgq6"] Mar 13 11:36:00 crc kubenswrapper[4632]: E0313 11:36:00.152713 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266f7f6e-de91-4256-8605-0a71adef85fc" containerName="oc" Mar 13 11:36:00 crc kubenswrapper[4632]: I0313 11:36:00.152757 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="266f7f6e-de91-4256-8605-0a71adef85fc" containerName="oc" Mar 13 11:36:00 crc kubenswrapper[4632]: I0313 11:36:00.153042 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="266f7f6e-de91-4256-8605-0a71adef85fc" containerName="oc" Mar 13 11:36:00 crc kubenswrapper[4632]: I0313 11:36:00.153775 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556696-6dgq6" Mar 13 11:36:00 crc kubenswrapper[4632]: I0313 11:36:00.155764 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:36:00 crc kubenswrapper[4632]: I0313 11:36:00.156085 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:36:00 crc kubenswrapper[4632]: I0313 11:36:00.156738 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:36:00 crc kubenswrapper[4632]: I0313 11:36:00.173606 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556696-6dgq6"] Mar 13 11:36:00 crc kubenswrapper[4632]: I0313 11:36:00.294176 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rtg5\" (UniqueName: \"kubernetes.io/projected/f8600f7f-45fb-4aa6-b13b-9d6be5354009-kube-api-access-2rtg5\") pod \"auto-csr-approver-29556696-6dgq6\" (UID: \"f8600f7f-45fb-4aa6-b13b-9d6be5354009\") " pod="openshift-infra/auto-csr-approver-29556696-6dgq6" Mar 13 11:36:00 crc kubenswrapper[4632]: I0313 11:36:00.397042 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rtg5\" (UniqueName: \"kubernetes.io/projected/f8600f7f-45fb-4aa6-b13b-9d6be5354009-kube-api-access-2rtg5\") pod \"auto-csr-approver-29556696-6dgq6\" (UID: \"f8600f7f-45fb-4aa6-b13b-9d6be5354009\") " pod="openshift-infra/auto-csr-approver-29556696-6dgq6" Mar 13 11:36:00 crc kubenswrapper[4632]: I0313 11:36:00.425438 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rtg5\" (UniqueName: 
\"kubernetes.io/projected/f8600f7f-45fb-4aa6-b13b-9d6be5354009-kube-api-access-2rtg5\") pod \"auto-csr-approver-29556696-6dgq6\" (UID: \"f8600f7f-45fb-4aa6-b13b-9d6be5354009\") " pod="openshift-infra/auto-csr-approver-29556696-6dgq6" Mar 13 11:36:00 crc kubenswrapper[4632]: I0313 11:36:00.479648 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556696-6dgq6" Mar 13 11:36:01 crc kubenswrapper[4632]: I0313 11:36:01.156420 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556696-6dgq6"] Mar 13 11:36:01 crc kubenswrapper[4632]: W0313 11:36:01.359496 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8600f7f_45fb_4aa6_b13b_9d6be5354009.slice/crio-80667f994774ca7286e4b847a63dc26b70323a254a39964981595c2cdd8a61d7 WatchSource:0}: Error finding container 80667f994774ca7286e4b847a63dc26b70323a254a39964981595c2cdd8a61d7: Status 404 returned error can't find the container with id 80667f994774ca7286e4b847a63dc26b70323a254a39964981595c2cdd8a61d7 Mar 13 11:36:01 crc kubenswrapper[4632]: I0313 11:36:01.448245 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556696-6dgq6" event={"ID":"f8600f7f-45fb-4aa6-b13b-9d6be5354009","Type":"ContainerStarted","Data":"80667f994774ca7286e4b847a63dc26b70323a254a39964981595c2cdd8a61d7"} Mar 13 11:36:02 crc kubenswrapper[4632]: I0313 11:36:02.459719 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556696-6dgq6" event={"ID":"f8600f7f-45fb-4aa6-b13b-9d6be5354009","Type":"ContainerStarted","Data":"ffbf598df91f4bb7277b432bb2bc1355e735cdb640ec4482a312abc6e198f0af"} Mar 13 11:36:05 crc kubenswrapper[4632]: I0313 11:36:05.489006 4632 generic.go:334] "Generic (PLEG): container finished" podID="f8600f7f-45fb-4aa6-b13b-9d6be5354009" containerID="ffbf598df91f4bb7277b432bb2bc1355e735cdb640ec4482a312abc6e198f0af" exitCode=0 Mar 13 11:36:05 crc kubenswrapper[4632]: I0313 11:36:05.489086 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556696-6dgq6" event={"ID":"f8600f7f-45fb-4aa6-b13b-9d6be5354009","Type":"ContainerDied","Data":"ffbf598df91f4bb7277b432bb2bc1355e735cdb640ec4482a312abc6e198f0af"} Mar 13 11:36:06 crc kubenswrapper[4632]: I0313 11:36:06.963577 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556696-6dgq6" Mar 13 11:36:07 crc kubenswrapper[4632]: I0313 11:36:07.037547 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rtg5\" (UniqueName: \"kubernetes.io/projected/f8600f7f-45fb-4aa6-b13b-9d6be5354009-kube-api-access-2rtg5\") pod \"f8600f7f-45fb-4aa6-b13b-9d6be5354009\" (UID: \"f8600f7f-45fb-4aa6-b13b-9d6be5354009\") " Mar 13 11:36:07 crc kubenswrapper[4632]: I0313 11:36:07.045356 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8600f7f-45fb-4aa6-b13b-9d6be5354009-kube-api-access-2rtg5" (OuterVolumeSpecName: "kube-api-access-2rtg5") pod "f8600f7f-45fb-4aa6-b13b-9d6be5354009" (UID: "f8600f7f-45fb-4aa6-b13b-9d6be5354009"). InnerVolumeSpecName "kube-api-access-2rtg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:36:07 crc kubenswrapper[4632]: I0313 11:36:07.142395 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rtg5\" (UniqueName: \"kubernetes.io/projected/f8600f7f-45fb-4aa6-b13b-9d6be5354009-kube-api-access-2rtg5\") on node \"crc\" DevicePath \"\"" Mar 13 11:36:07 crc kubenswrapper[4632]: I0313 11:36:07.511117 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556696-6dgq6" event={"ID":"f8600f7f-45fb-4aa6-b13b-9d6be5354009","Type":"ContainerDied","Data":"80667f994774ca7286e4b847a63dc26b70323a254a39964981595c2cdd8a61d7"} Mar 13 11:36:07 crc kubenswrapper[4632]: I0313 11:36:07.511436 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80667f994774ca7286e4b847a63dc26b70323a254a39964981595c2cdd8a61d7" Mar 13 11:36:07 crc kubenswrapper[4632]: I0313 11:36:07.511180 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556696-6dgq6" Mar 13 11:36:07 crc kubenswrapper[4632]: I0313 11:36:07.578907 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556690-d6xbb"] Mar 13 11:36:07 crc kubenswrapper[4632]: I0313 11:36:07.587309 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556690-d6xbb"] Mar 13 11:36:08 crc kubenswrapper[4632]: I0313 11:36:08.064655 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7194752-8651-4e1b-8973-1f821bf23755" path="/var/lib/kubelet/pods/d7194752-8651-4e1b-8973-1f821bf23755/volumes" Mar 13 11:36:10 crc kubenswrapper[4632]: I0313 11:36:10.461449 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:36:10 crc kubenswrapper[4632]: I0313 11:36:10.476680 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:36:18 crc kubenswrapper[4632]: I0313 11:36:18.978455 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bv9w5"] Mar 13 11:36:18 crc kubenswrapper[4632]: E0313 11:36:18.980518 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8600f7f-45fb-4aa6-b13b-9d6be5354009" containerName="oc" Mar 13 11:36:18 crc kubenswrapper[4632]: I0313 11:36:18.980628 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8600f7f-45fb-4aa6-b13b-9d6be5354009" containerName="oc" Mar 13 11:36:18 crc kubenswrapper[4632]: I0313 11:36:18.980886 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8600f7f-45fb-4aa6-b13b-9d6be5354009" containerName="oc" Mar 13 11:36:18 crc kubenswrapper[4632]: I0313 11:36:18.982464 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bv9w5" Mar 13 11:36:18 crc kubenswrapper[4632]: I0313 11:36:18.988574 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb192b24-7638-4c11-9936-ad94a6842ce9-catalog-content\") pod \"redhat-marketplace-bv9w5\" (UID: \"bb192b24-7638-4c11-9936-ad94a6842ce9\") " pod="openshift-marketplace/redhat-marketplace-bv9w5" Mar 13 11:36:18 crc kubenswrapper[4632]: I0313 11:36:18.988820 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb192b24-7638-4c11-9936-ad94a6842ce9-utilities\") pod \"redhat-marketplace-bv9w5\" (UID: \"bb192b24-7638-4c11-9936-ad94a6842ce9\") " pod="openshift-marketplace/redhat-marketplace-bv9w5" Mar 13 11:36:18 crc kubenswrapper[4632]: I0313 11:36:18.988931 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97zsq\" (UniqueName: \"kubernetes.io/projected/bb192b24-7638-4c11-9936-ad94a6842ce9-kube-api-access-97zsq\") pod \"redhat-marketplace-bv9w5\" (UID: \"bb192b24-7638-4c11-9936-ad94a6842ce9\") " pod="openshift-marketplace/redhat-marketplace-bv9w5" Mar 13 11:36:19 crc kubenswrapper[4632]: I0313 11:36:19.002281 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv9w5"] Mar 13 11:36:19 crc kubenswrapper[4632]: I0313 11:36:19.091730 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb192b24-7638-4c11-9936-ad94a6842ce9-catalog-content\") pod \"redhat-marketplace-bv9w5\" (UID: \"bb192b24-7638-4c11-9936-ad94a6842ce9\") " pod="openshift-marketplace/redhat-marketplace-bv9w5" Mar 13 11:36:19 crc kubenswrapper[4632]: I0313 11:36:19.091784 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb192b24-7638-4c11-9936-ad94a6842ce9-utilities\") pod \"redhat-marketplace-bv9w5\" (UID: \"bb192b24-7638-4c11-9936-ad94a6842ce9\") " pod="openshift-marketplace/redhat-marketplace-bv9w5" Mar 13 11:36:19 crc kubenswrapper[4632]: I0313 11:36:19.091851 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97zsq\" (UniqueName: \"kubernetes.io/projected/bb192b24-7638-4c11-9936-ad94a6842ce9-kube-api-access-97zsq\") pod \"redhat-marketplace-bv9w5\" (UID: \"bb192b24-7638-4c11-9936-ad94a6842ce9\") " pod="openshift-marketplace/redhat-marketplace-bv9w5" Mar 13 11:36:19 crc kubenswrapper[4632]: I0313 11:36:19.092595 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb192b24-7638-4c11-9936-ad94a6842ce9-utilities\") pod \"redhat-marketplace-bv9w5\" (UID: \"bb192b24-7638-4c11-9936-ad94a6842ce9\") " pod="openshift-marketplace/redhat-marketplace-bv9w5" Mar 13 11:36:19 crc kubenswrapper[4632]: I0313 11:36:19.093041 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb192b24-7638-4c11-9936-ad94a6842ce9-catalog-content\") pod \"redhat-marketplace-bv9w5\" (UID: \"bb192b24-7638-4c11-9936-ad94a6842ce9\") " pod="openshift-marketplace/redhat-marketplace-bv9w5" Mar 13 11:36:19 crc kubenswrapper[4632]: I0313 11:36:19.112787 4632 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-97zsq\" (UniqueName: \"kubernetes.io/projected/bb192b24-7638-4c11-9936-ad94a6842ce9-kube-api-access-97zsq\") pod \"redhat-marketplace-bv9w5\" (UID: \"bb192b24-7638-4c11-9936-ad94a6842ce9\") " pod="openshift-marketplace/redhat-marketplace-bv9w5" Mar 13 11:36:19 crc kubenswrapper[4632]: I0313 11:36:19.306273 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bv9w5" Mar 13 11:36:20 crc kubenswrapper[4632]: I0313 11:36:20.169391 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv9w5"] Mar 13 11:36:20 crc kubenswrapper[4632]: I0313 11:36:20.677965 4632 generic.go:334] "Generic (PLEG): container finished" podID="bb192b24-7638-4c11-9936-ad94a6842ce9" containerID="6d026b8061dc2c3d9960ed047459f56a28590e7270cb04b536cdac0b85745294" exitCode=0 Mar 13 11:36:20 crc kubenswrapper[4632]: I0313 11:36:20.678068 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv9w5" event={"ID":"bb192b24-7638-4c11-9936-ad94a6842ce9","Type":"ContainerDied","Data":"6d026b8061dc2c3d9960ed047459f56a28590e7270cb04b536cdac0b85745294"} Mar 13 11:36:20 crc kubenswrapper[4632]: I0313 11:36:20.678290 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv9w5" event={"ID":"bb192b24-7638-4c11-9936-ad94a6842ce9","Type":"ContainerStarted","Data":"ea657aa4cd516e96e0356217039e8d93bddea9701a5de2aa302d85b9b36bebd7"} Mar 13 11:36:21 crc kubenswrapper[4632]: I0313 11:36:21.082096 4632 scope.go:117] "RemoveContainer" containerID="a14245a1819d4e34ca7541b00cd28e096f96a6ad8f1b997d5a64006b8dd2f7c5" Mar 13 11:36:21 crc kubenswrapper[4632]: I0313 11:36:21.695584 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv9w5" event={"ID":"bb192b24-7638-4c11-9936-ad94a6842ce9","Type":"ContainerStarted","Data":"f1ce3fc4d0f620c27005352e1949578082140c75a9cff811a6bb65a60520f0d9"} Mar 13 11:36:23 crc kubenswrapper[4632]: I0313 11:36:23.717207 4632 generic.go:334] "Generic (PLEG): container finished" podID="bb192b24-7638-4c11-9936-ad94a6842ce9" containerID="f1ce3fc4d0f620c27005352e1949578082140c75a9cff811a6bb65a60520f0d9" exitCode=0 Mar 13 11:36:23 crc kubenswrapper[4632]: I0313 11:36:23.717310 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv9w5" event={"ID":"bb192b24-7638-4c11-9936-ad94a6842ce9","Type":"ContainerDied","Data":"f1ce3fc4d0f620c27005352e1949578082140c75a9cff811a6bb65a60520f0d9"} Mar 13 11:36:24 crc kubenswrapper[4632]: I0313 11:36:24.731226 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv9w5" event={"ID":"bb192b24-7638-4c11-9936-ad94a6842ce9","Type":"ContainerStarted","Data":"471b3e9129676bd35b3dfeb779906ed80e7614a37e876dbde9d3751b6bc56c61"} Mar 13 11:36:24 crc kubenswrapper[4632]: I0313 11:36:24.759509 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bv9w5" podStartSLOduration=3.223662111 podStartE2EDuration="6.759484632s" podCreationTimestamp="2026-03-13 11:36:18 +0000 UTC" firstStartedPulling="2026-03-13 11:36:20.680004387 +0000 UTC m=+5554.702534520" lastFinishedPulling="2026-03-13 11:36:24.215826908 +0000 UTC m=+5558.238357041" observedRunningTime="2026-03-13 11:36:24.757484113 +0000 UTC m=+5558.780014246" watchObservedRunningTime="2026-03-13 11:36:24.759484632 
+0000 UTC m=+5558.782014775" Mar 13 11:36:29 crc kubenswrapper[4632]: I0313 11:36:29.307443 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bv9w5" Mar 13 11:36:29 crc kubenswrapper[4632]: I0313 11:36:29.310702 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bv9w5" Mar 13 11:36:30 crc kubenswrapper[4632]: I0313 11:36:30.356528 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-bv9w5" podUID="bb192b24-7638-4c11-9936-ad94a6842ce9" containerName="registry-server" probeResult="failure" output=< Mar 13 11:36:30 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:36:30 crc kubenswrapper[4632]: > Mar 13 11:36:39 crc kubenswrapper[4632]: I0313 11:36:39.367266 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bv9w5" Mar 13 11:36:39 crc kubenswrapper[4632]: I0313 11:36:39.432219 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bv9w5" Mar 13 11:36:39 crc kubenswrapper[4632]: I0313 11:36:39.625718 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv9w5"] Mar 13 11:36:40 crc kubenswrapper[4632]: I0313 11:36:40.460681 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:36:40 crc kubenswrapper[4632]: I0313 11:36:40.460759 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:36:40 crc kubenswrapper[4632]: I0313 11:36:40.893859 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bv9w5" podUID="bb192b24-7638-4c11-9936-ad94a6842ce9" containerName="registry-server" containerID="cri-o://471b3e9129676bd35b3dfeb779906ed80e7614a37e876dbde9d3751b6bc56c61" gracePeriod=2 Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.501164 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bv9w5" Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.677836 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97zsq\" (UniqueName: \"kubernetes.io/projected/bb192b24-7638-4c11-9936-ad94a6842ce9-kube-api-access-97zsq\") pod \"bb192b24-7638-4c11-9936-ad94a6842ce9\" (UID: \"bb192b24-7638-4c11-9936-ad94a6842ce9\") " Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.678002 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb192b24-7638-4c11-9936-ad94a6842ce9-utilities\") pod \"bb192b24-7638-4c11-9936-ad94a6842ce9\" (UID: \"bb192b24-7638-4c11-9936-ad94a6842ce9\") " Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.678202 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb192b24-7638-4c11-9936-ad94a6842ce9-catalog-content\") pod \"bb192b24-7638-4c11-9936-ad94a6842ce9\" (UID: \"bb192b24-7638-4c11-9936-ad94a6842ce9\") " Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.678726 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb192b24-7638-4c11-9936-ad94a6842ce9-utilities" (OuterVolumeSpecName: "utilities") pod "bb192b24-7638-4c11-9936-ad94a6842ce9" (UID: "bb192b24-7638-4c11-9936-ad94a6842ce9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.684289 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb192b24-7638-4c11-9936-ad94a6842ce9-kube-api-access-97zsq" (OuterVolumeSpecName: "kube-api-access-97zsq") pod "bb192b24-7638-4c11-9936-ad94a6842ce9" (UID: "bb192b24-7638-4c11-9936-ad94a6842ce9"). InnerVolumeSpecName "kube-api-access-97zsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.710655 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb192b24-7638-4c11-9936-ad94a6842ce9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb192b24-7638-4c11-9936-ad94a6842ce9" (UID: "bb192b24-7638-4c11-9936-ad94a6842ce9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.780671 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb192b24-7638-4c11-9936-ad94a6842ce9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.780726 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97zsq\" (UniqueName: \"kubernetes.io/projected/bb192b24-7638-4c11-9936-ad94a6842ce9-kube-api-access-97zsq\") on node \"crc\" DevicePath \"\"" Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.780742 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb192b24-7638-4c11-9936-ad94a6842ce9-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.904768 4632 generic.go:334] "Generic (PLEG): container finished" podID="bb192b24-7638-4c11-9936-ad94a6842ce9" containerID="471b3e9129676bd35b3dfeb779906ed80e7614a37e876dbde9d3751b6bc56c61" exitCode=0 Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.904815 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv9w5" event={"ID":"bb192b24-7638-4c11-9936-ad94a6842ce9","Type":"ContainerDied","Data":"471b3e9129676bd35b3dfeb779906ed80e7614a37e876dbde9d3751b6bc56c61"} Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.904852 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv9w5" event={"ID":"bb192b24-7638-4c11-9936-ad94a6842ce9","Type":"ContainerDied","Data":"ea657aa4cd516e96e0356217039e8d93bddea9701a5de2aa302d85b9b36bebd7"} Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.904846 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bv9w5"
Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.904868 4632 scope.go:117] "RemoveContainer" containerID="471b3e9129676bd35b3dfeb779906ed80e7614a37e876dbde9d3751b6bc56c61"
Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.936605 4632 scope.go:117] "RemoveContainer" containerID="f1ce3fc4d0f620c27005352e1949578082140c75a9cff811a6bb65a60520f0d9"
Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.950386 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv9w5"]
Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.958418 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv9w5"]
Mar 13 11:36:41 crc kubenswrapper[4632]: I0313 11:36:41.959765 4632 scope.go:117] "RemoveContainer" containerID="6d026b8061dc2c3d9960ed047459f56a28590e7270cb04b536cdac0b85745294"
Mar 13 11:36:42 crc kubenswrapper[4632]: I0313 11:36:42.003660 4632 scope.go:117] "RemoveContainer" containerID="471b3e9129676bd35b3dfeb779906ed80e7614a37e876dbde9d3751b6bc56c61"
Mar 13 11:36:42 crc kubenswrapper[4632]: E0313 11:36:42.008480 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"471b3e9129676bd35b3dfeb779906ed80e7614a37e876dbde9d3751b6bc56c61\": container with ID starting with 471b3e9129676bd35b3dfeb779906ed80e7614a37e876dbde9d3751b6bc56c61 not found: ID does not exist" containerID="471b3e9129676bd35b3dfeb779906ed80e7614a37e876dbde9d3751b6bc56c61"
Mar 13 11:36:42 crc kubenswrapper[4632]: I0313 11:36:42.008537 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"471b3e9129676bd35b3dfeb779906ed80e7614a37e876dbde9d3751b6bc56c61"} err="failed to get container status \"471b3e9129676bd35b3dfeb779906ed80e7614a37e876dbde9d3751b6bc56c61\": rpc error: code = NotFound desc = could not find container \"471b3e9129676bd35b3dfeb779906ed80e7614a37e876dbde9d3751b6bc56c61\": container with ID starting with 471b3e9129676bd35b3dfeb779906ed80e7614a37e876dbde9d3751b6bc56c61 not found: ID does not exist"
Mar 13 11:36:42 crc kubenswrapper[4632]: I0313 11:36:42.008565 4632 scope.go:117] "RemoveContainer" containerID="f1ce3fc4d0f620c27005352e1949578082140c75a9cff811a6bb65a60520f0d9"
Mar 13 11:36:42 crc kubenswrapper[4632]: E0313 11:36:42.009086 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1ce3fc4d0f620c27005352e1949578082140c75a9cff811a6bb65a60520f0d9\": container with ID starting with f1ce3fc4d0f620c27005352e1949578082140c75a9cff811a6bb65a60520f0d9 not found: ID does not exist" containerID="f1ce3fc4d0f620c27005352e1949578082140c75a9cff811a6bb65a60520f0d9"
Mar 13 11:36:42 crc kubenswrapper[4632]: I0313 11:36:42.009110 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1ce3fc4d0f620c27005352e1949578082140c75a9cff811a6bb65a60520f0d9"} err="failed to get container status \"f1ce3fc4d0f620c27005352e1949578082140c75a9cff811a6bb65a60520f0d9\": rpc error: code = NotFound desc = could not find container \"f1ce3fc4d0f620c27005352e1949578082140c75a9cff811a6bb65a60520f0d9\": container with ID starting with f1ce3fc4d0f620c27005352e1949578082140c75a9cff811a6bb65a60520f0d9 not found: ID does not exist"
Mar 13 11:36:42 crc kubenswrapper[4632]: I0313 11:36:42.009124 4632 scope.go:117] "RemoveContainer" containerID="6d026b8061dc2c3d9960ed047459f56a28590e7270cb04b536cdac0b85745294"
Mar 13 11:36:42 crc kubenswrapper[4632]: E0313 11:36:42.009385 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d026b8061dc2c3d9960ed047459f56a28590e7270cb04b536cdac0b85745294\": container with ID starting with 6d026b8061dc2c3d9960ed047459f56a28590e7270cb04b536cdac0b85745294 not found: ID does not exist" containerID="6d026b8061dc2c3d9960ed047459f56a28590e7270cb04b536cdac0b85745294"
Mar 13 11:36:42 crc kubenswrapper[4632]: I0313 11:36:42.009421 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d026b8061dc2c3d9960ed047459f56a28590e7270cb04b536cdac0b85745294"} err="failed to get container status \"6d026b8061dc2c3d9960ed047459f56a28590e7270cb04b536cdac0b85745294\": rpc error: code = NotFound desc = could not find container \"6d026b8061dc2c3d9960ed047459f56a28590e7270cb04b536cdac0b85745294\": container with ID starting with 6d026b8061dc2c3d9960ed047459f56a28590e7270cb04b536cdac0b85745294 not found: ID does not exist"
Mar 13 11:36:42 crc kubenswrapper[4632]: I0313 11:36:42.057312 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb192b24-7638-4c11-9936-ad94a6842ce9" path="/var/lib/kubelet/pods/bb192b24-7638-4c11-9936-ad94a6842ce9/volumes"
Mar 13 11:37:10 crc kubenswrapper[4632]: I0313 11:37:10.462073 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 11:37:10 crc kubenswrapper[4632]: I0313 11:37:10.462706 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 11:37:10 crc kubenswrapper[4632]: I0313 11:37:10.462761 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb"
Mar 13 11:37:10 crc kubenswrapper[4632]: I0313 11:37:10.482612 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b28a3031014e23a161560bdf4de3a19a21d26729102cf99acd465c2bd90c33f9"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 13 11:37:10 crc kubenswrapper[4632]: I0313 11:37:10.482769 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://b28a3031014e23a161560bdf4de3a19a21d26729102cf99acd465c2bd90c33f9" gracePeriod=600
Mar 13 11:37:11 crc kubenswrapper[4632]: I0313 11:37:11.246349 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="b28a3031014e23a161560bdf4de3a19a21d26729102cf99acd465c2bd90c33f9" exitCode=0
Mar 13 11:37:11 crc kubenswrapper[4632]: I0313 11:37:11.246392 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"b28a3031014e23a161560bdf4de3a19a21d26729102cf99acd465c2bd90c33f9"}
Mar 13 11:37:11 crc kubenswrapper[4632]: I0313 11:37:11.246663 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6"}
Mar 13 11:37:11 crc kubenswrapper[4632]: I0313 11:37:11.246683 4632 scope.go:117] "RemoveContainer" containerID="d23237a6fa10676a84af0ea53cff2f624fc9045a8ba857339106215f64dbb8e3"
Mar 13 11:38:00 crc kubenswrapper[4632]: I0313 11:38:00.152556 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556698-ncrvx"]
Mar 13 11:38:00 crc kubenswrapper[4632]: E0313 11:38:00.153567 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb192b24-7638-4c11-9936-ad94a6842ce9" containerName="extract-utilities"
Mar 13 11:38:00 crc kubenswrapper[4632]: I0313 11:38:00.153586 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb192b24-7638-4c11-9936-ad94a6842ce9" containerName="extract-utilities"
Mar 13 11:38:00 crc kubenswrapper[4632]: E0313 11:38:00.153614 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb192b24-7638-4c11-9936-ad94a6842ce9" containerName="registry-server"
Mar 13 11:38:00 crc kubenswrapper[4632]: I0313 11:38:00.153623 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb192b24-7638-4c11-9936-ad94a6842ce9" containerName="registry-server"
Mar 13 11:38:00 crc kubenswrapper[4632]: E0313 11:38:00.153642 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb192b24-7638-4c11-9936-ad94a6842ce9" containerName="extract-content"
Mar 13 11:38:00 crc kubenswrapper[4632]: I0313 11:38:00.153649 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb192b24-7638-4c11-9936-ad94a6842ce9" containerName="extract-content"
Mar 13 11:38:00 crc kubenswrapper[4632]: I0313 11:38:00.153852 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb192b24-7638-4c11-9936-ad94a6842ce9" containerName="registry-server"
Mar 13 11:38:00 crc kubenswrapper[4632]: I0313 11:38:00.155018 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556698-ncrvx"
Mar 13 11:38:00 crc kubenswrapper[4632]: I0313 11:38:00.161816 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 11:38:00 crc kubenswrapper[4632]: I0313 11:38:00.162664 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 11:38:00 crc kubenswrapper[4632]: I0313 11:38:00.167480 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556698-ncrvx"]
Mar 13 11:38:00 crc kubenswrapper[4632]: I0313 11:38:00.172814 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 11:38:00 crc kubenswrapper[4632]: I0313 11:38:00.233803 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lt2x\" (UniqueName: \"kubernetes.io/projected/08ee143e-f1cf-4c38-a811-d31496082a75-kube-api-access-4lt2x\") pod \"auto-csr-approver-29556698-ncrvx\" (UID: \"08ee143e-f1cf-4c38-a811-d31496082a75\") " pod="openshift-infra/auto-csr-approver-29556698-ncrvx"
Mar 13 11:38:00 crc kubenswrapper[4632]: I0313 11:38:00.335453 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lt2x\" (UniqueName: \"kubernetes.io/projected/08ee143e-f1cf-4c38-a811-d31496082a75-kube-api-access-4lt2x\") pod \"auto-csr-approver-29556698-ncrvx\" (UID: \"08ee143e-f1cf-4c38-a811-d31496082a75\") " pod="openshift-infra/auto-csr-approver-29556698-ncrvx"
Mar 13 11:38:00 crc kubenswrapper[4632]: I0313 11:38:00.357934 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lt2x\" (UniqueName: \"kubernetes.io/projected/08ee143e-f1cf-4c38-a811-d31496082a75-kube-api-access-4lt2x\") pod \"auto-csr-approver-29556698-ncrvx\" (UID: \"08ee143e-f1cf-4c38-a811-d31496082a75\") " pod="openshift-infra/auto-csr-approver-29556698-ncrvx"
Mar 13 11:38:00 crc kubenswrapper[4632]: I0313 11:38:00.481139 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556698-ncrvx"
Mar 13 11:38:00 crc kubenswrapper[4632]: I0313 11:38:00.975176 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556698-ncrvx"]
Mar 13 11:38:00 crc kubenswrapper[4632]: I0313 11:38:00.994086 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 11:38:01 crc kubenswrapper[4632]: I0313 11:38:01.006679 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556698-ncrvx" event={"ID":"08ee143e-f1cf-4c38-a811-d31496082a75","Type":"ContainerStarted","Data":"204b62853491c1d97261545b38204bb2892aa81ad3ee8d8220f0b5d0fdcd889b"}
Mar 13 11:38:03 crc kubenswrapper[4632]: I0313 11:38:03.027144 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556698-ncrvx" event={"ID":"08ee143e-f1cf-4c38-a811-d31496082a75","Type":"ContainerStarted","Data":"19b28d2a56d1971c59c024b2b42655c24314722844900a0860bc74bbd0e6dfd4"}
Mar 13 11:38:03 crc kubenswrapper[4632]: I0313 11:38:03.054568 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556698-ncrvx" podStartSLOduration=1.916666324 podStartE2EDuration="3.054549273s" podCreationTimestamp="2026-03-13 11:38:00 +0000 UTC" firstStartedPulling="2026-03-13 11:38:00.993825497 +0000 UTC m=+5655.016355620" lastFinishedPulling="2026-03-13 11:38:02.131708436 +0000 UTC m=+5656.154238569" observedRunningTime="2026-03-13 11:38:03.054138614 +0000 UTC m=+5657.076668757" watchObservedRunningTime="2026-03-13 11:38:03.054549273 +0000 UTC m=+5657.077079406"
Mar 13 11:38:05 crc kubenswrapper[4632]: I0313 11:38:05.057893 4632 generic.go:334] "Generic (PLEG): container finished" podID="08ee143e-f1cf-4c38-a811-d31496082a75" containerID="19b28d2a56d1971c59c024b2b42655c24314722844900a0860bc74bbd0e6dfd4" exitCode=0
Mar 13 11:38:05 crc kubenswrapper[4632]: I0313 11:38:05.058006 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556698-ncrvx" event={"ID":"08ee143e-f1cf-4c38-a811-d31496082a75","Type":"ContainerDied","Data":"19b28d2a56d1971c59c024b2b42655c24314722844900a0860bc74bbd0e6dfd4"}
Mar 13 11:38:06 crc kubenswrapper[4632]: I0313 11:38:06.487054 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556698-ncrvx"
Mar 13 11:38:06 crc kubenswrapper[4632]: I0313 11:38:06.562824 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lt2x\" (UniqueName: \"kubernetes.io/projected/08ee143e-f1cf-4c38-a811-d31496082a75-kube-api-access-4lt2x\") pod \"08ee143e-f1cf-4c38-a811-d31496082a75\" (UID: \"08ee143e-f1cf-4c38-a811-d31496082a75\") "
Mar 13 11:38:06 crc kubenswrapper[4632]: I0313 11:38:06.583271 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08ee143e-f1cf-4c38-a811-d31496082a75-kube-api-access-4lt2x" (OuterVolumeSpecName: "kube-api-access-4lt2x") pod "08ee143e-f1cf-4c38-a811-d31496082a75" (UID: "08ee143e-f1cf-4c38-a811-d31496082a75"). InnerVolumeSpecName "kube-api-access-4lt2x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:38:06 crc kubenswrapper[4632]: I0313 11:38:06.666172 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lt2x\" (UniqueName: \"kubernetes.io/projected/08ee143e-f1cf-4c38-a811-d31496082a75-kube-api-access-4lt2x\") on node \"crc\" DevicePath \"\""
Mar 13 11:38:07 crc kubenswrapper[4632]: I0313 11:38:07.089637 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556698-ncrvx" event={"ID":"08ee143e-f1cf-4c38-a811-d31496082a75","Type":"ContainerDied","Data":"204b62853491c1d97261545b38204bb2892aa81ad3ee8d8220f0b5d0fdcd889b"}
Mar 13 11:38:07 crc kubenswrapper[4632]: I0313 11:38:07.089902 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556698-ncrvx"
Mar 13 11:38:07 crc kubenswrapper[4632]: I0313 11:38:07.089909 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="204b62853491c1d97261545b38204bb2892aa81ad3ee8d8220f0b5d0fdcd889b"
Mar 13 11:38:07 crc kubenswrapper[4632]: I0313 11:38:07.166426 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556692-4kd7v"]
Mar 13 11:38:07 crc kubenswrapper[4632]: I0313 11:38:07.177705 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556692-4kd7v"]
Mar 13 11:38:08 crc kubenswrapper[4632]: I0313 11:38:08.058194 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77ae44b0-e101-4d21-87e5-9e213e024e9e" path="/var/lib/kubelet/pods/77ae44b0-e101-4d21-87e5-9e213e024e9e/volumes"
Mar 13 11:38:17 crc kubenswrapper[4632]: I0313 11:38:17.031426 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c87c2"]
Mar 13 11:38:17 crc kubenswrapper[4632]: E0313 11:38:17.040669 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ee143e-f1cf-4c38-a811-d31496082a75" containerName="oc"
Mar 13 11:38:17 crc kubenswrapper[4632]: I0313 11:38:17.040711 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ee143e-f1cf-4c38-a811-d31496082a75" containerName="oc"
Mar 13 11:38:17 crc kubenswrapper[4632]: I0313 11:38:17.040885 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ee143e-f1cf-4c38-a811-d31496082a75" containerName="oc"
Mar 13 11:38:17 crc kubenswrapper[4632]: I0313 11:38:17.042841 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c87c2"
Mar 13 11:38:17 crc kubenswrapper[4632]: I0313 11:38:17.045557 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c87c2"]
Mar 13 11:38:17 crc kubenswrapper[4632]: I0313 11:38:17.188466 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbc76a6-d812-41c7-a63b-09f6fdb37405-catalog-content\") pod \"community-operators-c87c2\" (UID: \"7bbc76a6-d812-41c7-a63b-09f6fdb37405\") " pod="openshift-marketplace/community-operators-c87c2"
Mar 13 11:38:17 crc kubenswrapper[4632]: I0313 11:38:17.190483 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntrdr\" (UniqueName: \"kubernetes.io/projected/7bbc76a6-d812-41c7-a63b-09f6fdb37405-kube-api-access-ntrdr\") pod \"community-operators-c87c2\" (UID: \"7bbc76a6-d812-41c7-a63b-09f6fdb37405\") " pod="openshift-marketplace/community-operators-c87c2"
Mar 13 11:38:17 crc kubenswrapper[4632]: I0313 11:38:17.190792 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbc76a6-d812-41c7-a63b-09f6fdb37405-utilities\") pod \"community-operators-c87c2\" (UID: \"7bbc76a6-d812-41c7-a63b-09f6fdb37405\") " pod="openshift-marketplace/community-operators-c87c2"
Mar 13 11:38:17 crc kubenswrapper[4632]: I0313 11:38:17.292873 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntrdr\" (UniqueName: \"kubernetes.io/projected/7bbc76a6-d812-41c7-a63b-09f6fdb37405-kube-api-access-ntrdr\") pod \"community-operators-c87c2\" (UID: \"7bbc76a6-d812-41c7-a63b-09f6fdb37405\") " pod="openshift-marketplace/community-operators-c87c2"
Mar 13 11:38:17 crc kubenswrapper[4632]: I0313 11:38:17.293382 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbc76a6-d812-41c7-a63b-09f6fdb37405-utilities\") pod \"community-operators-c87c2\" (UID: \"7bbc76a6-d812-41c7-a63b-09f6fdb37405\") " pod="openshift-marketplace/community-operators-c87c2"
Mar 13 11:38:17 crc kubenswrapper[4632]: I0313 11:38:17.293733 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbc76a6-d812-41c7-a63b-09f6fdb37405-catalog-content\") pod \"community-operators-c87c2\" (UID: \"7bbc76a6-d812-41c7-a63b-09f6fdb37405\") " pod="openshift-marketplace/community-operators-c87c2"
Mar 13 11:38:17 crc kubenswrapper[4632]: I0313 11:38:17.294520 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbc76a6-d812-41c7-a63b-09f6fdb37405-utilities\") pod \"community-operators-c87c2\" (UID: \"7bbc76a6-d812-41c7-a63b-09f6fdb37405\") " pod="openshift-marketplace/community-operators-c87c2"
Mar 13 11:38:17 crc kubenswrapper[4632]: I0313 11:38:17.294683 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbc76a6-d812-41c7-a63b-09f6fdb37405-catalog-content\") pod \"community-operators-c87c2\" (UID: \"7bbc76a6-d812-41c7-a63b-09f6fdb37405\") " pod="openshift-marketplace/community-operators-c87c2"
Mar 13 11:38:17 crc kubenswrapper[4632]: I0313 11:38:17.312142 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntrdr\" (UniqueName: \"kubernetes.io/projected/7bbc76a6-d812-41c7-a63b-09f6fdb37405-kube-api-access-ntrdr\") pod \"community-operators-c87c2\" (UID: \"7bbc76a6-d812-41c7-a63b-09f6fdb37405\") " pod="openshift-marketplace/community-operators-c87c2"
Mar 13 11:38:17 crc kubenswrapper[4632]: I0313 11:38:17.393460 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c87c2"
Mar 13 11:38:18 crc kubenswrapper[4632]: I0313 11:38:18.134505 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c87c2"]
Mar 13 11:38:18 crc kubenswrapper[4632]: I0313 11:38:18.200573 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c87c2" event={"ID":"7bbc76a6-d812-41c7-a63b-09f6fdb37405","Type":"ContainerStarted","Data":"8b195e74d7a07f4a43f98dd358110c2996bc32ef4665c1a538b5616819d59f0b"}
Mar 13 11:38:19 crc kubenswrapper[4632]: I0313 11:38:19.224758 4632 generic.go:334] "Generic (PLEG): container finished" podID="7bbc76a6-d812-41c7-a63b-09f6fdb37405" containerID="29b9253bb98b4a2906b8867d032633c0a6084d2ee5611106305860f394c9da23" exitCode=0
Mar 13 11:38:19 crc kubenswrapper[4632]: I0313 11:38:19.224801 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c87c2" event={"ID":"7bbc76a6-d812-41c7-a63b-09f6fdb37405","Type":"ContainerDied","Data":"29b9253bb98b4a2906b8867d032633c0a6084d2ee5611106305860f394c9da23"}
Mar 13 11:38:21 crc kubenswrapper[4632]: I0313 11:38:21.235912 4632 scope.go:117] "RemoveContainer" containerID="b8f22a62b885c530c5401c31b24e17ea8bcf63d6debf02e44b4f01a4ab4c1102"
Mar 13 11:38:27 crc kubenswrapper[4632]: I0313 11:38:27.299649 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c87c2" event={"ID":"7bbc76a6-d812-41c7-a63b-09f6fdb37405","Type":"ContainerStarted","Data":"90efdae9ee51f26b819dfe72dc997f0e8e40c177d9f2a50823b81b5fdbf64e1e"}
Mar 13 11:38:28 crc kubenswrapper[4632]: I0313 11:38:28.311488 4632 generic.go:334] "Generic (PLEG): container finished" podID="7bbc76a6-d812-41c7-a63b-09f6fdb37405" containerID="90efdae9ee51f26b819dfe72dc997f0e8e40c177d9f2a50823b81b5fdbf64e1e" exitCode=0
Mar 13 11:38:28 crc kubenswrapper[4632]: I0313 11:38:28.311544 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c87c2" event={"ID":"7bbc76a6-d812-41c7-a63b-09f6fdb37405","Type":"ContainerDied","Data":"90efdae9ee51f26b819dfe72dc997f0e8e40c177d9f2a50823b81b5fdbf64e1e"}
Mar 13 11:38:29 crc kubenswrapper[4632]: I0313 11:38:29.322395 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c87c2" event={"ID":"7bbc76a6-d812-41c7-a63b-09f6fdb37405","Type":"ContainerStarted","Data":"3e4f69b8253bf86d8b9955ae26ef60a4e60f02816ad73b2bf8a731285c5e7153"}
Mar 13 11:38:29 crc kubenswrapper[4632]: I0313 11:38:29.342932 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c87c2" podStartSLOduration=2.777515627 podStartE2EDuration="12.342913268s" podCreationTimestamp="2026-03-13 11:38:17 +0000 UTC" firstStartedPulling="2026-03-13 11:38:19.226827598 +0000 UTC m=+5673.249357741" lastFinishedPulling="2026-03-13 11:38:28.792225249 +0000 UTC m=+5682.814755382" observedRunningTime="2026-03-13 11:38:29.342278172 +0000 UTC m=+5683.364808315" watchObservedRunningTime="2026-03-13 11:38:29.342913268 +0000 UTC m=+5683.365443401"
Mar 13 11:38:37 crc kubenswrapper[4632]: I0313 11:38:37.393928 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c87c2"
Mar 13 11:38:37 crc kubenswrapper[4632]: I0313 11:38:37.395577 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c87c2"
Mar 13 11:38:37 crc kubenswrapper[4632]: I0313 11:38:37.448023 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c87c2"
Mar 13 11:38:38 crc kubenswrapper[4632]: I0313 11:38:38.455166 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c87c2"
Mar 13 11:38:38 crc kubenswrapper[4632]: I0313 11:38:38.655837 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c87c2"]
Mar 13 11:38:38 crc kubenswrapper[4632]: I0313 11:38:38.711800 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-774lb"]
Mar 13 11:38:38 crc kubenswrapper[4632]: I0313 11:38:38.716774 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-774lb" podUID="560629a7-9dec-4eb7-8c73-a8f097293daa" containerName="registry-server" containerID="cri-o://1b995d3ea46318dbc1da1ae83e15d1a1943f08993ba4772ae9cb4b946ae10e86" gracePeriod=2
Mar 13 11:38:39 crc kubenswrapper[4632]: I0313 11:38:39.412904 4632 generic.go:334] "Generic (PLEG): container finished" podID="560629a7-9dec-4eb7-8c73-a8f097293daa" containerID="1b995d3ea46318dbc1da1ae83e15d1a1943f08993ba4772ae9cb4b946ae10e86" exitCode=0
Mar 13 11:38:39 crc kubenswrapper[4632]: I0313 11:38:39.412978 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-774lb" event={"ID":"560629a7-9dec-4eb7-8c73-a8f097293daa","Type":"ContainerDied","Data":"1b995d3ea46318dbc1da1ae83e15d1a1943f08993ba4772ae9cb4b946ae10e86"}
Mar 13 11:38:39 crc kubenswrapper[4632]: I0313 11:38:39.922503 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-774lb"
Mar 13 11:38:40 crc kubenswrapper[4632]: I0313 11:38:40.091111 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc4f5\" (UniqueName: \"kubernetes.io/projected/560629a7-9dec-4eb7-8c73-a8f097293daa-kube-api-access-rc4f5\") pod \"560629a7-9dec-4eb7-8c73-a8f097293daa\" (UID: \"560629a7-9dec-4eb7-8c73-a8f097293daa\") "
Mar 13 11:38:40 crc kubenswrapper[4632]: I0313 11:38:40.091473 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/560629a7-9dec-4eb7-8c73-a8f097293daa-catalog-content\") pod \"560629a7-9dec-4eb7-8c73-a8f097293daa\" (UID: \"560629a7-9dec-4eb7-8c73-a8f097293daa\") "
Mar 13 11:38:40 crc kubenswrapper[4632]: I0313 11:38:40.091845 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/560629a7-9dec-4eb7-8c73-a8f097293daa-utilities\") pod \"560629a7-9dec-4eb7-8c73-a8f097293daa\" (UID: \"560629a7-9dec-4eb7-8c73-a8f097293daa\") "
Mar 13 11:38:40 crc kubenswrapper[4632]: I0313 11:38:40.093655 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/560629a7-9dec-4eb7-8c73-a8f097293daa-utilities" (OuterVolumeSpecName: "utilities") pod "560629a7-9dec-4eb7-8c73-a8f097293daa" (UID: "560629a7-9dec-4eb7-8c73-a8f097293daa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:38:40 crc kubenswrapper[4632]: I0313 11:38:40.119718 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/560629a7-9dec-4eb7-8c73-a8f097293daa-kube-api-access-rc4f5" (OuterVolumeSpecName: "kube-api-access-rc4f5") pod "560629a7-9dec-4eb7-8c73-a8f097293daa" (UID: "560629a7-9dec-4eb7-8c73-a8f097293daa"). InnerVolumeSpecName "kube-api-access-rc4f5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:38:40 crc kubenswrapper[4632]: I0313 11:38:40.193797 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/560629a7-9dec-4eb7-8c73-a8f097293daa-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 11:38:40 crc kubenswrapper[4632]: I0313 11:38:40.193832 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc4f5\" (UniqueName: \"kubernetes.io/projected/560629a7-9dec-4eb7-8c73-a8f097293daa-kube-api-access-rc4f5\") on node \"crc\" DevicePath \"\""
Mar 13 11:38:40 crc kubenswrapper[4632]: I0313 11:38:40.304275 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/560629a7-9dec-4eb7-8c73-a8f097293daa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "560629a7-9dec-4eb7-8c73-a8f097293daa" (UID: "560629a7-9dec-4eb7-8c73-a8f097293daa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:38:40 crc kubenswrapper[4632]: I0313 11:38:40.397500 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/560629a7-9dec-4eb7-8c73-a8f097293daa-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 13 11:38:40 crc kubenswrapper[4632]: I0313 11:38:40.425341 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-774lb" event={"ID":"560629a7-9dec-4eb7-8c73-a8f097293daa","Type":"ContainerDied","Data":"4ae9494de264dfa5dcfb2c9e6166d64886aa8f640f54445b6eadb498ad356c8c"}
Mar 13 11:38:40 crc kubenswrapper[4632]: I0313 11:38:40.425409 4632 scope.go:117] "RemoveContainer" containerID="1b995d3ea46318dbc1da1ae83e15d1a1943f08993ba4772ae9cb4b946ae10e86"
Mar 13 11:38:40 crc kubenswrapper[4632]: I0313 11:38:40.426146 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-774lb"
Mar 13 11:38:40 crc kubenswrapper[4632]: I0313 11:38:40.464101 4632 scope.go:117] "RemoveContainer" containerID="d5e77ef64ff23f92ed48258b81d7d0310ada291a691626009608a75068a59888"
Mar 13 11:38:40 crc kubenswrapper[4632]: I0313 11:38:40.508206 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-774lb"]
Mar 13 11:38:40 crc kubenswrapper[4632]: I0313 11:38:40.514114 4632 scope.go:117] "RemoveContainer" containerID="f9fbff406d14d8da11f86810a1b1b035215dd5c6179ac20e5ddd29194bd3f5d6"
Mar 13 11:38:40 crc kubenswrapper[4632]: I0313 11:38:40.530849 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-774lb"]
Mar 13 11:38:42 crc kubenswrapper[4632]: I0313 11:38:42.062588 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="560629a7-9dec-4eb7-8c73-a8f097293daa" path="/var/lib/kubelet/pods/560629a7-9dec-4eb7-8c73-a8f097293daa/volumes"
Mar 13 11:39:10 crc kubenswrapper[4632]: I0313 11:39:10.461164 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 11:39:10 crc kubenswrapper[4632]: I0313 11:39:10.461686 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 11:39:40 crc kubenswrapper[4632]: I0313 11:39:40.460700 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 11:39:40 crc kubenswrapper[4632]: I0313 11:39:40.461120 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 11:40:00 crc kubenswrapper[4632]: I0313 11:40:00.166741 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556700-rjhgg"]
Mar 13 11:40:00 crc kubenswrapper[4632]: E0313 11:40:00.168041 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="560629a7-9dec-4eb7-8c73-a8f097293daa" containerName="extract-content"
Mar 13 11:40:00 crc kubenswrapper[4632]: I0313 11:40:00.168066 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="560629a7-9dec-4eb7-8c73-a8f097293daa" containerName="extract-content"
Mar 13 11:40:00 crc kubenswrapper[4632]: E0313 11:40:00.168100 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="560629a7-9dec-4eb7-8c73-a8f097293daa" containerName="extract-utilities"
Mar 13 11:40:00 crc kubenswrapper[4632]: I0313 11:40:00.168112 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="560629a7-9dec-4eb7-8c73-a8f097293daa" containerName="extract-utilities"
Mar 13 11:40:00 crc kubenswrapper[4632]: E0313 11:40:00.168165 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="560629a7-9dec-4eb7-8c73-a8f097293daa" containerName="registry-server"
Mar 13 11:40:00 crc kubenswrapper[4632]: I0313 11:40:00.168178 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="560629a7-9dec-4eb7-8c73-a8f097293daa" containerName="registry-server"
Mar 13 11:40:00 crc kubenswrapper[4632]: I0313 11:40:00.168582 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="560629a7-9dec-4eb7-8c73-a8f097293daa" containerName="registry-server"
Mar 13 11:40:00 crc kubenswrapper[4632]: I0313 11:40:00.169570 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556700-rjhgg"
Mar 13 11:40:00 crc kubenswrapper[4632]: I0313 11:40:00.172144 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 11:40:00 crc kubenswrapper[4632]: I0313 11:40:00.177813 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 11:40:00 crc kubenswrapper[4632]: I0313 11:40:00.179706 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 11:40:00 crc kubenswrapper[4632]: I0313 11:40:00.209921 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556700-rjhgg"]
Mar 13 11:40:00 crc kubenswrapper[4632]: I0313 11:40:00.232572 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrnzz\" (UniqueName: \"kubernetes.io/projected/d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5-kube-api-access-xrnzz\") pod \"auto-csr-approver-29556700-rjhgg\" (UID: \"d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5\") " pod="openshift-infra/auto-csr-approver-29556700-rjhgg"
Mar 13 11:40:00 crc kubenswrapper[4632]: I0313 11:40:00.335539 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrnzz\" (UniqueName: \"kubernetes.io/projected/d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5-kube-api-access-xrnzz\") pod \"auto-csr-approver-29556700-rjhgg\" (UID: \"d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5\") " pod="openshift-infra/auto-csr-approver-29556700-rjhgg"
Mar 13 11:40:00 crc kubenswrapper[4632]: I0313 11:40:00.356400 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrnzz\" (UniqueName: \"kubernetes.io/projected/d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5-kube-api-access-xrnzz\") pod \"auto-csr-approver-29556700-rjhgg\" (UID: \"d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5\") " pod="openshift-infra/auto-csr-approver-29556700-rjhgg"
Mar 13 11:40:00 crc kubenswrapper[4632]: I0313 11:40:00.488931 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556700-rjhgg"
Mar 13 11:40:00 crc kubenswrapper[4632]: I0313 11:40:00.978923 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556700-rjhgg"]
Mar 13 11:40:01 crc kubenswrapper[4632]: I0313 11:40:01.213352 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556700-rjhgg" event={"ID":"d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5","Type":"ContainerStarted","Data":"ffa8ab9e687203b969b410a56351f55d13c66e081d5375016372e44985afa4a7"}
Mar 13 11:40:03 crc kubenswrapper[4632]: I0313 11:40:03.231885 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556700-rjhgg" event={"ID":"d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5","Type":"ContainerStarted","Data":"e2dc32a91f84dbc41f05188f09b5ec2790c5d429e256411e1defae69e3e43deb"}
Mar 13 11:40:03 crc kubenswrapper[4632]: I0313 11:40:03.252989 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556700-rjhgg" podStartSLOduration=1.854841785 podStartE2EDuration="3.252931148s" podCreationTimestamp="2026-03-13 11:40:00 +0000 UTC" firstStartedPulling="2026-03-13 11:40:00.985329014 +0000 UTC m=+5775.007859147" lastFinishedPulling="2026-03-13 11:40:02.383418377 +0000 UTC m=+5776.405948510" observedRunningTime="2026-03-13 11:40:03.244716225 +0000 UTC m=+5777.267246358" watchObservedRunningTime="2026-03-13 11:40:03.252931148 +0000 UTC m=+5777.275461281"
Mar 13 11:40:05 crc kubenswrapper[4632]: I0313 11:40:05.251445 4632 generic.go:334] "Generic (PLEG): container finished" podID="d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5" containerID="e2dc32a91f84dbc41f05188f09b5ec2790c5d429e256411e1defae69e3e43deb" exitCode=0
Mar 13 11:40:05 crc kubenswrapper[4632]: I0313 11:40:05.251668 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556700-rjhgg" event={"ID":"d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5","Type":"ContainerDied","Data":"e2dc32a91f84dbc41f05188f09b5ec2790c5d429e256411e1defae69e3e43deb"}
Mar 13 11:40:06 crc kubenswrapper[4632]: I0313 11:40:06.281401 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-flb98"]
Mar 13 11:40:06 crc kubenswrapper[4632]: I0313 11:40:06.286382 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flb98"
Mar 13 11:40:06 crc kubenswrapper[4632]: I0313 11:40:06.316469 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-flb98"]
Mar 13 11:40:06 crc kubenswrapper[4632]: I0313 11:40:06.360772 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz4bb\" (UniqueName: \"kubernetes.io/projected/dda62ea0-a8e2-46bb-a080-1b771e45feec-kube-api-access-wz4bb\") pod \"redhat-operators-flb98\" (UID: \"dda62ea0-a8e2-46bb-a080-1b771e45feec\") " pod="openshift-marketplace/redhat-operators-flb98"
Mar 13 11:40:06 crc kubenswrapper[4632]: I0313 11:40:06.360860 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda62ea0-a8e2-46bb-a080-1b771e45feec-utilities\") pod \"redhat-operators-flb98\" (UID: \"dda62ea0-a8e2-46bb-a080-1b771e45feec\") " pod="openshift-marketplace/redhat-operators-flb98"
Mar 13 11:40:06 crc kubenswrapper[4632]: I0313 11:40:06.360951 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda62ea0-a8e2-46bb-a080-1b771e45feec-catalog-content\") pod \"redhat-operators-flb98\" (UID: \"dda62ea0-a8e2-46bb-a080-1b771e45feec\") " pod="openshift-marketplace/redhat-operators-flb98"
Mar 13 11:40:06 crc kubenswrapper[4632]: I0313 11:40:06.463762 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz4bb\" (UniqueName: \"kubernetes.io/projected/dda62ea0-a8e2-46bb-a080-1b771e45feec-kube-api-access-wz4bb\") pod \"redhat-operators-flb98\" (UID: \"dda62ea0-a8e2-46bb-a080-1b771e45feec\") " pod="openshift-marketplace/redhat-operators-flb98"
Mar 13 11:40:06 crc kubenswrapper[4632]: I0313 11:40:06.463971 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda62ea0-a8e2-46bb-a080-1b771e45feec-utilities\") pod \"redhat-operators-flb98\" (UID: \"dda62ea0-a8e2-46bb-a080-1b771e45feec\") " pod="openshift-marketplace/redhat-operators-flb98"
Mar 13 11:40:06 crc kubenswrapper[4632]: I0313 11:40:06.464787 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda62ea0-a8e2-46bb-a080-1b771e45feec-utilities\") pod \"redhat-operators-flb98\" (UID: \"dda62ea0-a8e2-46bb-a080-1b771e45feec\") " pod="openshift-marketplace/redhat-operators-flb98"
Mar 13 11:40:06 crc kubenswrapper[4632]: I0313 11:40:06.464836 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda62ea0-a8e2-46bb-a080-1b771e45feec-catalog-content\") pod \"redhat-operators-flb98\" (UID: \"dda62ea0-a8e2-46bb-a080-1b771e45feec\") " pod="openshift-marketplace/redhat-operators-flb98"
Mar 13 11:40:06 crc kubenswrapper[4632]: I0313 11:40:06.464851 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda62ea0-a8e2-46bb-a080-1b771e45feec-catalog-content\") pod \"redhat-operators-flb98\" (UID: \"dda62ea0-a8e2-46bb-a080-1b771e45feec\") " pod="openshift-marketplace/redhat-operators-flb98"
Mar 13 11:40:06 crc kubenswrapper[4632]: I0313 11:40:06.482754 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz4bb\" (UniqueName: \"kubernetes.io/projected/dda62ea0-a8e2-46bb-a080-1b771e45feec-kube-api-access-wz4bb\") pod \"redhat-operators-flb98\" (UID: \"dda62ea0-a8e2-46bb-a080-1b771e45feec\") " pod="openshift-marketplace/redhat-operators-flb98"
Mar 13 11:40:06 crc kubenswrapper[4632]: I0313 11:40:06.618363 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flb98"
Mar 13 11:40:06 crc kubenswrapper[4632]: I0313 11:40:06.857362 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556700-rjhgg"
Mar 13 11:40:06 crc kubenswrapper[4632]: I0313 11:40:06.980105 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrnzz\" (UniqueName: \"kubernetes.io/projected/d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5-kube-api-access-xrnzz\") pod \"d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5\" (UID: \"d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5\") "
Mar 13 11:40:06 crc kubenswrapper[4632]: I0313 11:40:06.986610 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5-kube-api-access-xrnzz" (OuterVolumeSpecName: "kube-api-access-xrnzz") pod "d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5" (UID: "d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5"). InnerVolumeSpecName "kube-api-access-xrnzz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:40:07 crc kubenswrapper[4632]: I0313 11:40:07.083398 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrnzz\" (UniqueName: \"kubernetes.io/projected/d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5-kube-api-access-xrnzz\") on node \"crc\" DevicePath \"\""
Mar 13 11:40:07 crc kubenswrapper[4632]: I0313 11:40:07.270726 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556700-rjhgg" event={"ID":"d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5","Type":"ContainerDied","Data":"ffa8ab9e687203b969b410a56351f55d13c66e081d5375016372e44985afa4a7"}
Mar 13 11:40:07 crc kubenswrapper[4632]: I0313 11:40:07.270784 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffa8ab9e687203b969b410a56351f55d13c66e081d5375016372e44985afa4a7"
Mar 13 11:40:07 crc kubenswrapper[4632]: I0313 11:40:07.270782 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556700-rjhgg"
Mar 13 11:40:07 crc kubenswrapper[4632]: I0313 11:40:07.361324 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-flb98"]
Mar 13 11:40:07 crc kubenswrapper[4632]: W0313 11:40:07.373762 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddda62ea0_a8e2_46bb_a080_1b771e45feec.slice/crio-bc8f50d20d6be8ef47a9be50fca7331e6e4176f53dc58080b0e546eb62f4a959 WatchSource:0}: Error finding container bc8f50d20d6be8ef47a9be50fca7331e6e4176f53dc58080b0e546eb62f4a959: Status 404 returned error can't find the container with id bc8f50d20d6be8ef47a9be50fca7331e6e4176f53dc58080b0e546eb62f4a959
Mar 13 11:40:07 crc kubenswrapper[4632]: I0313 11:40:07.374532 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556694-jmf5t"]
Mar 13 11:40:07 crc kubenswrapper[4632]: I0313 11:40:07.420693 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556694-jmf5t"]
Mar 13 11:40:08 crc kubenswrapper[4632]: I0313 11:40:08.060575 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="266f7f6e-de91-4256-8605-0a71adef85fc" path="/var/lib/kubelet/pods/266f7f6e-de91-4256-8605-0a71adef85fc/volumes"
Mar 13 11:40:08 crc kubenswrapper[4632]: I0313 11:40:08.281680 4632 generic.go:334] "Generic (PLEG): container finished" podID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerID="f8176fa837f25c1697cac7d62b754ee2cfecbadb578ca4e441339c0f0c3b11c5" exitCode=0
Mar 13 11:40:08 crc kubenswrapper[4632]: I0313 11:40:08.281731 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flb98" event={"ID":"dda62ea0-a8e2-46bb-a080-1b771e45feec","Type":"ContainerDied","Data":"f8176fa837f25c1697cac7d62b754ee2cfecbadb578ca4e441339c0f0c3b11c5"}
Mar 13 11:40:08 crc kubenswrapper[4632]: I0313 11:40:08.281780 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flb98" event={"ID":"dda62ea0-a8e2-46bb-a080-1b771e45feec","Type":"ContainerStarted","Data":"bc8f50d20d6be8ef47a9be50fca7331e6e4176f53dc58080b0e546eb62f4a959"}
Mar 13 11:40:09 crc kubenswrapper[4632]: I0313 11:40:09.292150 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flb98" event={"ID":"dda62ea0-a8e2-46bb-a080-1b771e45feec","Type":"ContainerStarted","Data":"879dd7d998eed4fd26b7e2d5e411da489d9eceae2f429769ea2bcc4ae3aaf24d"}
Mar 13 11:40:10 crc kubenswrapper[4632]: I0313 11:40:10.462696 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 11:40:10 crc kubenswrapper[4632]: I0313 11:40:10.463175 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 11:40:10 crc kubenswrapper[4632]: I0313 11:40:10.463229 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb"
Mar 13 11:40:10 crc kubenswrapper[4632]: I0313 11:40:10.464071 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 13 11:40:10 crc kubenswrapper[4632]: I0313 11:40:10.464153 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" gracePeriod=600
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.082511 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kc89d"]
Mar 13 11:40:11 crc kubenswrapper[4632]: E0313 11:40:11.083326 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5" containerName="oc"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.083343 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5" containerName="oc"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.083533 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5" containerName="oc"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.084843 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kc89d"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.104038 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kc89d"]
Mar 13 11:40:11 crc kubenswrapper[4632]: E0313 11:40:11.162115 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.169392 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad0d32e-2227-4d12-bf62-19eb24597391-utilities\") pod \"certified-operators-kc89d\" (UID: \"2ad0d32e-2227-4d12-bf62-19eb24597391\") " pod="openshift-marketplace/certified-operators-kc89d"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.169562 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbnzm\" (UniqueName: \"kubernetes.io/projected/2ad0d32e-2227-4d12-bf62-19eb24597391-kube-api-access-cbnzm\") pod \"certified-operators-kc89d\" (UID: \"2ad0d32e-2227-4d12-bf62-19eb24597391\") " pod="openshift-marketplace/certified-operators-kc89d"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.169735 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad0d32e-2227-4d12-bf62-19eb24597391-catalog-content\") pod \"certified-operators-kc89d\" (UID: \"2ad0d32e-2227-4d12-bf62-19eb24597391\") " pod="openshift-marketplace/certified-operators-kc89d"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.271727 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad0d32e-2227-4d12-bf62-19eb24597391-utilities\") pod \"certified-operators-kc89d\" (UID: \"2ad0d32e-2227-4d12-bf62-19eb24597391\") " pod="openshift-marketplace/certified-operators-kc89d"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.271824 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbnzm\" (UniqueName: \"kubernetes.io/projected/2ad0d32e-2227-4d12-bf62-19eb24597391-kube-api-access-cbnzm\") pod \"certified-operators-kc89d\" (UID: \"2ad0d32e-2227-4d12-bf62-19eb24597391\") " pod="openshift-marketplace/certified-operators-kc89d"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.271923 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad0d32e-2227-4d12-bf62-19eb24597391-catalog-content\") pod \"certified-operators-kc89d\" (UID: \"2ad0d32e-2227-4d12-bf62-19eb24597391\") " pod="openshift-marketplace/certified-operators-kc89d"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.272863 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad0d32e-2227-4d12-bf62-19eb24597391-catalog-content\") pod \"certified-operators-kc89d\" (UID: \"2ad0d32e-2227-4d12-bf62-19eb24597391\") " pod="openshift-marketplace/certified-operators-kc89d"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.272909 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad0d32e-2227-4d12-bf62-19eb24597391-utilities\") pod \"certified-operators-kc89d\" (UID: \"2ad0d32e-2227-4d12-bf62-19eb24597391\") " pod="openshift-marketplace/certified-operators-kc89d"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.294848 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbnzm\" (UniqueName: \"kubernetes.io/projected/2ad0d32e-2227-4d12-bf62-19eb24597391-kube-api-access-cbnzm\") pod \"certified-operators-kc89d\" (UID: \"2ad0d32e-2227-4d12-bf62-19eb24597391\") " pod="openshift-marketplace/certified-operators-kc89d"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.310332 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" exitCode=0
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.310382 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6"}
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.310424 4632 scope.go:117] "RemoveContainer" containerID="b28a3031014e23a161560bdf4de3a19a21d26729102cf99acd465c2bd90c33f9"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.311107 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6"
Mar 13 11:40:11 crc kubenswrapper[4632]: E0313 11:40:11.311622 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.406451 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kc89d"
Mar 13 11:40:11 crc kubenswrapper[4632]: I0313 11:40:11.899397 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kc89d"]
Mar 13 11:40:12 crc kubenswrapper[4632]: I0313 11:40:12.323838 4632 generic.go:334] "Generic (PLEG): container finished" podID="2ad0d32e-2227-4d12-bf62-19eb24597391" containerID="4f578f7aba25fc35c2994f8884e7599a303084b2db092f8a745f47b140231631" exitCode=0
Mar 13 11:40:12 crc kubenswrapper[4632]: I0313 11:40:12.323905 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kc89d" event={"ID":"2ad0d32e-2227-4d12-bf62-19eb24597391","Type":"ContainerDied","Data":"4f578f7aba25fc35c2994f8884e7599a303084b2db092f8a745f47b140231631"}
Mar 13 11:40:12 crc kubenswrapper[4632]: I0313 11:40:12.323984 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kc89d" event={"ID":"2ad0d32e-2227-4d12-bf62-19eb24597391","Type":"ContainerStarted","Data":"7f8725f4be963837d7f72aed20800307d623dcf5ab9a99a449d6c1c7ef19f63e"}
Mar 13 11:40:14 crc kubenswrapper[4632]: I0313 11:40:14.345324 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kc89d" event={"ID":"2ad0d32e-2227-4d12-bf62-19eb24597391","Type":"ContainerStarted","Data":"38648afa2577e5a9fb5d9d7e00b5f9e414dd8af022764679faa186da16c72b26"}
Mar 13 11:40:19 crc kubenswrapper[4632]: I0313 11:40:19.396349 4632 generic.go:334] "Generic (PLEG): container finished" podID="2ad0d32e-2227-4d12-bf62-19eb24597391" containerID="38648afa2577e5a9fb5d9d7e00b5f9e414dd8af022764679faa186da16c72b26" exitCode=0
Mar 13 11:40:19 crc kubenswrapper[4632]: I0313 11:40:19.396413 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kc89d" event={"ID":"2ad0d32e-2227-4d12-bf62-19eb24597391","Type":"ContainerDied","Data":"38648afa2577e5a9fb5d9d7e00b5f9e414dd8af022764679faa186da16c72b26"}
Mar 13 11:40:21 crc kubenswrapper[4632]: I0313 11:40:21.437979 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kc89d" event={"ID":"2ad0d32e-2227-4d12-bf62-19eb24597391","Type":"ContainerStarted","Data":"9d9bde8c088f7036968dfb6799f01abee05cb617fe7dd0b877a31e0a9fec56bb"}
Mar 13 11:40:21 crc kubenswrapper[4632]: I0313 11:40:21.442558 4632 generic.go:334] "Generic (PLEG): container finished" podID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerID="879dd7d998eed4fd26b7e2d5e411da489d9eceae2f429769ea2bcc4ae3aaf24d" exitCode=0
Mar 13 11:40:21 crc kubenswrapper[4632]: I0313 11:40:21.442615 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flb98" event={"ID":"dda62ea0-a8e2-46bb-a080-1b771e45feec","Type":"ContainerDied","Data":"879dd7d998eed4fd26b7e2d5e411da489d9eceae2f429769ea2bcc4ae3aaf24d"}
Mar 13 11:40:21 crc kubenswrapper[4632]: I0313 11:40:21.488261 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kc89d" podStartSLOduration=2.65704836 podStartE2EDuration="10.488219566s" podCreationTimestamp="2026-03-13 11:40:11 +0000 UTC" firstStartedPulling="2026-03-13 11:40:12.326371451 +0000 UTC m=+5786.348901594" lastFinishedPulling="2026-03-13 11:40:20.157542667 +0000 UTC m=+5794.180072800" observedRunningTime="2026-03-13 11:40:21.467766691 +0000 UTC m=+5795.490296844" watchObservedRunningTime="2026-03-13 11:40:21.488219566 +0000 UTC m=+5795.510749719"
Mar 13 11:40:21 crc kubenswrapper[4632]: I0313 11:40:21.709604 4632 scope.go:117] "RemoveContainer" containerID="dd29187096f712bf2f18fa46086683fcb900aea6c3d89212b78286e73075a17b"
Mar 13 11:40:23 crc kubenswrapper[4632]: I0313 11:40:23.044797 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6"
Mar 13 11:40:23 crc kubenswrapper[4632]: E0313 11:40:23.045469 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 11:40:23 crc kubenswrapper[4632]: I0313 11:40:23.462735 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flb98" event={"ID":"dda62ea0-a8e2-46bb-a080-1b771e45feec","Type":"ContainerStarted","Data":"902661c18f068b146e4aecda28b4265925f2ef61195dd89d1bcb5ed19cf93dd6"}
Mar 13 11:40:23 crc kubenswrapper[4632]: I0313 11:40:23.486131 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-flb98" podStartSLOduration=3.603650438 podStartE2EDuration="17.48611495s" podCreationTimestamp="2026-03-13 11:40:06 +0000 UTC" firstStartedPulling="2026-03-13 11:40:08.283971682 +0000 UTC m=+5782.306501815" lastFinishedPulling="2026-03-13 11:40:22.166436194 +0000 UTC m=+5796.188966327" observedRunningTime="2026-03-13 11:40:23.48567673 +0000 UTC m=+5797.508206883" watchObservedRunningTime="2026-03-13 11:40:23.48611495 +0000 UTC m=+5797.508645083"
Mar 13 11:40:26 crc kubenswrapper[4632]: I0313 11:40:26.618604 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-flb98"
Mar 13 11:40:26 crc kubenswrapper[4632]: I0313 11:40:26.619141 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-flb98"
Mar 13 11:40:27 crc kubenswrapper[4632]: I0313 11:40:27.677866 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-flb98" podUID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerName="registry-server" probeResult="failure" output=<
Mar 13 11:40:27 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 11:40:27 crc kubenswrapper[4632]: >
Mar 13 11:40:31 crc kubenswrapper[4632]: I0313 11:40:31.407580 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kc89d"
Mar 13 11:40:31 crc kubenswrapper[4632]: I0313 11:40:31.408300 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kc89d"
Mar 13 11:40:32 crc kubenswrapper[4632]: I0313 11:40:32.459577 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kc89d" podUID="2ad0d32e-2227-4d12-bf62-19eb24597391" containerName="registry-server" probeResult="failure" output=<
Mar 13 11:40:32 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 11:40:32 crc kubenswrapper[4632]: >
Mar 13 11:40:36 crc kubenswrapper[4632]: I0313 11:40:36.044453 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6"
Mar 13 11:40:36 crc kubenswrapper[4632]: E0313 11:40:36.045373 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 11:40:37 crc kubenswrapper[4632]: I0313 11:40:37.665380 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-flb98" podUID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerName="registry-server" probeResult="failure" output=<
Mar 13 11:40:37 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 11:40:37 crc kubenswrapper[4632]: >
Mar 13 11:40:42 crc kubenswrapper[4632]: I0313 11:40:42.466876 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kc89d" podUID="2ad0d32e-2227-4d12-bf62-19eb24597391" containerName="registry-server" probeResult="failure" output=<
Mar 13 11:40:42 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 11:40:42 crc kubenswrapper[4632]: >
Mar 13 11:40:47 crc kubenswrapper[4632]: I0313 11:40:47.666537 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-flb98" podUID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerName="registry-server" probeResult="failure" output=<
Mar 13 11:40:47 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 11:40:47 crc kubenswrapper[4632]: >
Mar 13 11:40:49 crc kubenswrapper[4632]: I0313 11:40:49.044796 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6"
Mar 13 11:40:49 crc kubenswrapper[4632]: E0313 11:40:49.045622 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 11:40:51 crc kubenswrapper[4632]: I0313 11:40:51.524963 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kc89d"
Mar 13 11:40:51 crc kubenswrapper[4632]: I0313 11:40:51.608864 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kc89d"
Mar 13 11:40:51 crc kubenswrapper[4632]: I0313 11:40:51.775625 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kc89d"]
Mar 13 11:40:52 crc kubenswrapper[4632]: I0313 11:40:52.765227 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kc89d" podUID="2ad0d32e-2227-4d12-bf62-19eb24597391" containerName="registry-server" containerID="cri-o://9d9bde8c088f7036968dfb6799f01abee05cb617fe7dd0b877a31e0a9fec56bb" gracePeriod=2
Mar 13 11:40:53 crc kubenswrapper[4632]: I0313 11:40:53.777830 4632 generic.go:334] "Generic (PLEG): container finished" podID="2ad0d32e-2227-4d12-bf62-19eb24597391" containerID="9d9bde8c088f7036968dfb6799f01abee05cb617fe7dd0b877a31e0a9fec56bb" exitCode=0
Mar 13 11:40:53 crc kubenswrapper[4632]: I0313 11:40:53.778142 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kc89d" event={"ID":"2ad0d32e-2227-4d12-bf62-19eb24597391","Type":"ContainerDied","Data":"9d9bde8c088f7036968dfb6799f01abee05cb617fe7dd0b877a31e0a9fec56bb"}
Mar 13 11:40:53 crc kubenswrapper[4632]: I0313 11:40:53.778174 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kc89d" event={"ID":"2ad0d32e-2227-4d12-bf62-19eb24597391","Type":"ContainerDied","Data":"7f8725f4be963837d7f72aed20800307d623dcf5ab9a99a449d6c1c7ef19f63e"}
Mar 13 11:40:53 crc kubenswrapper[4632]: I0313 11:40:53.778188 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f8725f4be963837d7f72aed20800307d623dcf5ab9a99a449d6c1c7ef19f63e"
Mar 13 11:40:53 crc kubenswrapper[4632]: I0313 11:40:53.824360 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kc89d"
Mar 13 11:40:53 crc kubenswrapper[4632]: I0313 11:40:53.859347 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad0d32e-2227-4d12-bf62-19eb24597391-catalog-content\") pod \"2ad0d32e-2227-4d12-bf62-19eb24597391\" (UID: \"2ad0d32e-2227-4d12-bf62-19eb24597391\") "
Mar 13 11:40:53 crc kubenswrapper[4632]: I0313 11:40:53.859665 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad0d32e-2227-4d12-bf62-19eb24597391-utilities\") pod \"2ad0d32e-2227-4d12-bf62-19eb24597391\" (UID: \"2ad0d32e-2227-4d12-bf62-19eb24597391\") "
Mar 13 11:40:53 crc kubenswrapper[4632]: I0313 11:40:53.859756 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbnzm\" (UniqueName: \"kubernetes.io/projected/2ad0d32e-2227-4d12-bf62-19eb24597391-kube-api-access-cbnzm\") pod \"2ad0d32e-2227-4d12-bf62-19eb24597391\" (UID: \"2ad0d32e-2227-4d12-bf62-19eb24597391\") "
Mar 13 11:40:53 crc kubenswrapper[4632]: I0313 11:40:53.861490 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ad0d32e-2227-4d12-bf62-19eb24597391-utilities" (OuterVolumeSpecName: "utilities") pod "2ad0d32e-2227-4d12-bf62-19eb24597391" (UID: "2ad0d32e-2227-4d12-bf62-19eb24597391"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:40:53 crc kubenswrapper[4632]: I0313 11:40:53.885064 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ad0d32e-2227-4d12-bf62-19eb24597391-kube-api-access-cbnzm" (OuterVolumeSpecName: "kube-api-access-cbnzm") pod "2ad0d32e-2227-4d12-bf62-19eb24597391" (UID: "2ad0d32e-2227-4d12-bf62-19eb24597391"). InnerVolumeSpecName "kube-api-access-cbnzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:40:53 crc kubenswrapper[4632]: I0313 11:40:53.963017 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbnzm\" (UniqueName: \"kubernetes.io/projected/2ad0d32e-2227-4d12-bf62-19eb24597391-kube-api-access-cbnzm\") on node \"crc\" DevicePath \"\""
Mar 13 11:40:53 crc kubenswrapper[4632]: I0313 11:40:53.963482 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad0d32e-2227-4d12-bf62-19eb24597391-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 11:40:53 crc kubenswrapper[4632]: I0313 11:40:53.998619 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ad0d32e-2227-4d12-bf62-19eb24597391-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ad0d32e-2227-4d12-bf62-19eb24597391" (UID: "2ad0d32e-2227-4d12-bf62-19eb24597391"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:40:54 crc kubenswrapper[4632]: I0313 11:40:54.065626 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad0d32e-2227-4d12-bf62-19eb24597391-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 13 11:40:54 crc kubenswrapper[4632]: I0313 11:40:54.786851 4632 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-kc89d" Mar 13 11:40:54 crc kubenswrapper[4632]: I0313 11:40:54.818637 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kc89d"] Mar 13 11:40:54 crc kubenswrapper[4632]: I0313 11:40:54.827375 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kc89d"] Mar 13 11:40:56 crc kubenswrapper[4632]: I0313 11:40:56.069148 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ad0d32e-2227-4d12-bf62-19eb24597391" path="/var/lib/kubelet/pods/2ad0d32e-2227-4d12-bf62-19eb24597391/volumes" Mar 13 11:40:57 crc kubenswrapper[4632]: I0313 11:40:57.667211 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-flb98" podUID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerName="registry-server" probeResult="failure" output=< Mar 13 11:40:57 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:40:57 crc kubenswrapper[4632]: > Mar 13 11:41:00 crc kubenswrapper[4632]: I0313 11:41:00.045040 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:41:00 crc kubenswrapper[4632]: E0313 11:41:00.045606 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:41:07 crc kubenswrapper[4632]: I0313 11:41:07.665762 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-flb98" podUID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerName="registry-server" probeResult="failure" output=< Mar 13 11:41:07 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:41:07 crc kubenswrapper[4632]: > Mar 13 11:41:11 crc kubenswrapper[4632]: I0313 11:41:11.044148 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:41:11 crc kubenswrapper[4632]: E0313 11:41:11.044992 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:41:17 crc kubenswrapper[4632]: I0313 11:41:17.674552 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-flb98" podUID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerName="registry-server" probeResult="failure" output=< Mar 13 11:41:17 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:41:17 crc kubenswrapper[4632]: > Mar 13 11:41:22 crc kubenswrapper[4632]: I0313 11:41:22.044232 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:41:22 crc kubenswrapper[4632]: E0313 11:41:22.044866 4632 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:41:26 crc kubenswrapper[4632]: I0313 11:41:26.673343 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-flb98" Mar 13 11:41:26 crc kubenswrapper[4632]: I0313 11:41:26.725998 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-flb98" Mar 13 11:41:26 crc kubenswrapper[4632]: I0313 11:41:26.925353 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-flb98"] Mar 13 11:41:28 crc kubenswrapper[4632]: I0313 11:41:28.147237 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-flb98" podUID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerName="registry-server" containerID="cri-o://902661c18f068b146e4aecda28b4265925f2ef61195dd89d1bcb5ed19cf93dd6" gracePeriod=2 Mar 13 11:41:28 crc kubenswrapper[4632]: I0313 11:41:28.934566 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flb98" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.043796 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda62ea0-a8e2-46bb-a080-1b771e45feec-catalog-content\") pod \"dda62ea0-a8e2-46bb-a080-1b771e45feec\" (UID: \"dda62ea0-a8e2-46bb-a080-1b771e45feec\") " Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.044154 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda62ea0-a8e2-46bb-a080-1b771e45feec-utilities\") pod \"dda62ea0-a8e2-46bb-a080-1b771e45feec\" (UID: \"dda62ea0-a8e2-46bb-a080-1b771e45feec\") " Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.044372 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz4bb\" (UniqueName: \"kubernetes.io/projected/dda62ea0-a8e2-46bb-a080-1b771e45feec-kube-api-access-wz4bb\") pod \"dda62ea0-a8e2-46bb-a080-1b771e45feec\" (UID: \"dda62ea0-a8e2-46bb-a080-1b771e45feec\") " Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.047087 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dda62ea0-a8e2-46bb-a080-1b771e45feec-utilities" (OuterVolumeSpecName: "utilities") pod "dda62ea0-a8e2-46bb-a080-1b771e45feec" (UID: "dda62ea0-a8e2-46bb-a080-1b771e45feec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.068049 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dda62ea0-a8e2-46bb-a080-1b771e45feec-kube-api-access-wz4bb" (OuterVolumeSpecName: "kube-api-access-wz4bb") pod "dda62ea0-a8e2-46bb-a080-1b771e45feec" (UID: "dda62ea0-a8e2-46bb-a080-1b771e45feec"). InnerVolumeSpecName "kube-api-access-wz4bb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.147523 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz4bb\" (UniqueName: \"kubernetes.io/projected/dda62ea0-a8e2-46bb-a080-1b771e45feec-kube-api-access-wz4bb\") on node \"crc\" DevicePath \"\"" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.147732 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda62ea0-a8e2-46bb-a080-1b771e45feec-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.161924 4632 generic.go:334] "Generic (PLEG): container finished" podID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerID="902661c18f068b146e4aecda28b4265925f2ef61195dd89d1bcb5ed19cf93dd6" exitCode=0 Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.162008 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flb98" event={"ID":"dda62ea0-a8e2-46bb-a080-1b771e45feec","Type":"ContainerDied","Data":"902661c18f068b146e4aecda28b4265925f2ef61195dd89d1bcb5ed19cf93dd6"} Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.162048 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flb98" event={"ID":"dda62ea0-a8e2-46bb-a080-1b771e45feec","Type":"ContainerDied","Data":"bc8f50d20d6be8ef47a9be50fca7331e6e4176f53dc58080b0e546eb62f4a959"} Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.162073 4632 scope.go:117] "RemoveContainer" containerID="902661c18f068b146e4aecda28b4265925f2ef61195dd89d1bcb5ed19cf93dd6" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.162292 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flb98" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.205740 4632 scope.go:117] "RemoveContainer" containerID="879dd7d998eed4fd26b7e2d5e411da489d9eceae2f429769ea2bcc4ae3aaf24d" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.242362 4632 scope.go:117] "RemoveContainer" containerID="f8176fa837f25c1697cac7d62b754ee2cfecbadb578ca4e441339c0f0c3b11c5" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.252054 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dda62ea0-a8e2-46bb-a080-1b771e45feec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dda62ea0-a8e2-46bb-a080-1b771e45feec" (UID: "dda62ea0-a8e2-46bb-a080-1b771e45feec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.279292 4632 scope.go:117] "RemoveContainer" containerID="902661c18f068b146e4aecda28b4265925f2ef61195dd89d1bcb5ed19cf93dd6" Mar 13 11:41:29 crc kubenswrapper[4632]: E0313 11:41:29.286377 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"902661c18f068b146e4aecda28b4265925f2ef61195dd89d1bcb5ed19cf93dd6\": container with ID starting with 902661c18f068b146e4aecda28b4265925f2ef61195dd89d1bcb5ed19cf93dd6 not found: ID does not exist" containerID="902661c18f068b146e4aecda28b4265925f2ef61195dd89d1bcb5ed19cf93dd6" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.286458 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"902661c18f068b146e4aecda28b4265925f2ef61195dd89d1bcb5ed19cf93dd6"} err="failed to get container status \"902661c18f068b146e4aecda28b4265925f2ef61195dd89d1bcb5ed19cf93dd6\": rpc error: code = NotFound desc = could not find container \"902661c18f068b146e4aecda28b4265925f2ef61195dd89d1bcb5ed19cf93dd6\": container with ID starting with 902661c18f068b146e4aecda28b4265925f2ef61195dd89d1bcb5ed19cf93dd6 not found: ID does not exist" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.286493 4632 scope.go:117] "RemoveContainer" containerID="879dd7d998eed4fd26b7e2d5e411da489d9eceae2f429769ea2bcc4ae3aaf24d" Mar 13 11:41:29 crc kubenswrapper[4632]: E0313 11:41:29.286991 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"879dd7d998eed4fd26b7e2d5e411da489d9eceae2f429769ea2bcc4ae3aaf24d\": container with ID starting with 879dd7d998eed4fd26b7e2d5e411da489d9eceae2f429769ea2bcc4ae3aaf24d not found: ID does not exist" containerID="879dd7d998eed4fd26b7e2d5e411da489d9eceae2f429769ea2bcc4ae3aaf24d" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.287065 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879dd7d998eed4fd26b7e2d5e411da489d9eceae2f429769ea2bcc4ae3aaf24d"} err="failed to get container status \"879dd7d998eed4fd26b7e2d5e411da489d9eceae2f429769ea2bcc4ae3aaf24d\": rpc error: code = NotFound desc = could not find container \"879dd7d998eed4fd26b7e2d5e411da489d9eceae2f429769ea2bcc4ae3aaf24d\": container with ID starting with 879dd7d998eed4fd26b7e2d5e411da489d9eceae2f429769ea2bcc4ae3aaf24d not found: ID does not exist" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.287118 4632 scope.go:117] "RemoveContainer" containerID="f8176fa837f25c1697cac7d62b754ee2cfecbadb578ca4e441339c0f0c3b11c5" Mar 13 11:41:29 crc kubenswrapper[4632]: E0313 11:41:29.287821 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8176fa837f25c1697cac7d62b754ee2cfecbadb578ca4e441339c0f0c3b11c5\": container with ID starting with f8176fa837f25c1697cac7d62b754ee2cfecbadb578ca4e441339c0f0c3b11c5 not found: ID does not exist" containerID="f8176fa837f25c1697cac7d62b754ee2cfecbadb578ca4e441339c0f0c3b11c5" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.287856 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8176fa837f25c1697cac7d62b754ee2cfecbadb578ca4e441339c0f0c3b11c5"} err="failed to get container status \"f8176fa837f25c1697cac7d62b754ee2cfecbadb578ca4e441339c0f0c3b11c5\": rpc error: code = NotFound desc = could not 
find container \"f8176fa837f25c1697cac7d62b754ee2cfecbadb578ca4e441339c0f0c3b11c5\": container with ID starting with f8176fa837f25c1697cac7d62b754ee2cfecbadb578ca4e441339c0f0c3b11c5 not found: ID does not exist" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.352388 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda62ea0-a8e2-46bb-a080-1b771e45feec-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.516449 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-flb98"] Mar 13 11:41:29 crc kubenswrapper[4632]: I0313 11:41:29.524547 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-flb98"] Mar 13 11:41:30 crc kubenswrapper[4632]: I0313 11:41:30.056539 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dda62ea0-a8e2-46bb-a080-1b771e45feec" path="/var/lib/kubelet/pods/dda62ea0-a8e2-46bb-a080-1b771e45feec/volumes" Mar 13 11:41:33 crc kubenswrapper[4632]: I0313 11:41:33.044953 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:41:33 crc kubenswrapper[4632]: E0313 11:41:33.045898 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:41:45 crc kubenswrapper[4632]: I0313 11:41:45.045368 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:41:45 crc kubenswrapper[4632]: E0313 11:41:45.046262 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:41:56 crc kubenswrapper[4632]: I0313 11:41:56.049232 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:41:56 crc kubenswrapper[4632]: E0313 11:41:56.050922 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.159401 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556702-prtsf"] Mar 13 11:42:00 crc kubenswrapper[4632]: E0313 11:42:00.160555 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad0d32e-2227-4d12-bf62-19eb24597391" containerName="extract-utilities" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.160574 4632 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="2ad0d32e-2227-4d12-bf62-19eb24597391" containerName="extract-utilities" Mar 13 11:42:00 crc kubenswrapper[4632]: E0313 11:42:00.160595 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerName="extract-content" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.160604 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerName="extract-content" Mar 13 11:42:00 crc kubenswrapper[4632]: E0313 11:42:00.160628 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerName="registry-server" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.160636 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerName="registry-server" Mar 13 11:42:00 crc kubenswrapper[4632]: E0313 11:42:00.160659 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad0d32e-2227-4d12-bf62-19eb24597391" containerName="registry-server" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.160668 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad0d32e-2227-4d12-bf62-19eb24597391" containerName="registry-server" Mar 13 11:42:00 crc kubenswrapper[4632]: E0313 11:42:00.160690 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad0d32e-2227-4d12-bf62-19eb24597391" containerName="extract-content" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.160698 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad0d32e-2227-4d12-bf62-19eb24597391" containerName="extract-content" Mar 13 11:42:00 crc kubenswrapper[4632]: E0313 11:42:00.160729 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerName="extract-utilities" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.160737 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerName="extract-utilities" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.161030 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="dda62ea0-a8e2-46bb-a080-1b771e45feec" containerName="registry-server" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.161085 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ad0d32e-2227-4d12-bf62-19eb24597391" containerName="registry-server" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.162515 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556702-prtsf" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.165420 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.165684 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.169050 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.169548 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556702-prtsf"] Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.318004 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb6f7\" (UniqueName: \"kubernetes.io/projected/ec8968a0-0c4c-47e1-87d8-3703bea87a89-kube-api-access-mb6f7\") pod \"auto-csr-approver-29556702-prtsf\" (UID: \"ec8968a0-0c4c-47e1-87d8-3703bea87a89\") " pod="openshift-infra/auto-csr-approver-29556702-prtsf" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.420178 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb6f7\" (UniqueName: \"kubernetes.io/projected/ec8968a0-0c4c-47e1-87d8-3703bea87a89-kube-api-access-mb6f7\") pod \"auto-csr-approver-29556702-prtsf\" (UID: \"ec8968a0-0c4c-47e1-87d8-3703bea87a89\") " pod="openshift-infra/auto-csr-approver-29556702-prtsf" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.445793 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb6f7\" (UniqueName: \"kubernetes.io/projected/ec8968a0-0c4c-47e1-87d8-3703bea87a89-kube-api-access-mb6f7\") pod \"auto-csr-approver-29556702-prtsf\" (UID: \"ec8968a0-0c4c-47e1-87d8-3703bea87a89\") " pod="openshift-infra/auto-csr-approver-29556702-prtsf" Mar 13 11:42:00 crc kubenswrapper[4632]: I0313 11:42:00.492681 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556702-prtsf" Mar 13 11:42:01 crc kubenswrapper[4632]: I0313 11:42:01.072767 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556702-prtsf"] Mar 13 11:42:01 crc kubenswrapper[4632]: I0313 11:42:01.463017 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556702-prtsf" event={"ID":"ec8968a0-0c4c-47e1-87d8-3703bea87a89","Type":"ContainerStarted","Data":"3d873398315a97348760f80182a041f20e51b27ec456a28c5d8999e39a7149d1"} Mar 13 11:42:03 crc kubenswrapper[4632]: I0313 11:42:03.481798 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556702-prtsf" event={"ID":"ec8968a0-0c4c-47e1-87d8-3703bea87a89","Type":"ContainerStarted","Data":"6ff82271933ceb662a6f6b867ecc2729be9d4acd3b4299ec77fdefa80de44bf3"} Mar 13 11:42:04 crc kubenswrapper[4632]: I0313 11:42:04.552202 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556702-prtsf" podStartSLOduration=3.662121479 podStartE2EDuration="4.552182376s" podCreationTimestamp="2026-03-13 11:42:00 +0000 UTC" firstStartedPulling="2026-03-13 11:42:01.079267308 +0000 UTC m=+5895.101797441" lastFinishedPulling="2026-03-13 11:42:01.969328205 +0000 UTC m=+5895.991858338" observedRunningTime="2026-03-13 11:42:04.549628843 +0000 UTC m=+5898.572158976" watchObservedRunningTime="2026-03-13 11:42:04.552182376 +0000 UTC m=+5898.574712509" Mar 13 11:42:06 crc kubenswrapper[4632]: I0313 11:42:06.547111 4632 generic.go:334] "Generic (PLEG): container finished" podID="ec8968a0-0c4c-47e1-87d8-3703bea87a89" containerID="6ff82271933ceb662a6f6b867ecc2729be9d4acd3b4299ec77fdefa80de44bf3" exitCode=0 Mar 13 11:42:06 crc kubenswrapper[4632]: I0313 11:42:06.547252 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556702-prtsf" event={"ID":"ec8968a0-0c4c-47e1-87d8-3703bea87a89","Type":"ContainerDied","Data":"6ff82271933ceb662a6f6b867ecc2729be9d4acd3b4299ec77fdefa80de44bf3"} Mar 13 11:42:07 crc kubenswrapper[4632]: I0313 11:42:07.952499 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556702-prtsf" Mar 13 11:42:08 crc kubenswrapper[4632]: I0313 11:42:08.090329 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb6f7\" (UniqueName: \"kubernetes.io/projected/ec8968a0-0c4c-47e1-87d8-3703bea87a89-kube-api-access-mb6f7\") pod \"ec8968a0-0c4c-47e1-87d8-3703bea87a89\" (UID: \"ec8968a0-0c4c-47e1-87d8-3703bea87a89\") " Mar 13 11:42:08 crc kubenswrapper[4632]: I0313 11:42:08.102158 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8968a0-0c4c-47e1-87d8-3703bea87a89-kube-api-access-mb6f7" (OuterVolumeSpecName: "kube-api-access-mb6f7") pod "ec8968a0-0c4c-47e1-87d8-3703bea87a89" (UID: "ec8968a0-0c4c-47e1-87d8-3703bea87a89"). InnerVolumeSpecName "kube-api-access-mb6f7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:42:08 crc kubenswrapper[4632]: I0313 11:42:08.195057 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb6f7\" (UniqueName: \"kubernetes.io/projected/ec8968a0-0c4c-47e1-87d8-3703bea87a89-kube-api-access-mb6f7\") on node \"crc\" DevicePath \"\"" Mar 13 11:42:08 crc kubenswrapper[4632]: I0313 11:42:08.565027 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556702-prtsf" event={"ID":"ec8968a0-0c4c-47e1-87d8-3703bea87a89","Type":"ContainerDied","Data":"3d873398315a97348760f80182a041f20e51b27ec456a28c5d8999e39a7149d1"} Mar 13 11:42:08 crc kubenswrapper[4632]: I0313 11:42:08.565076 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d873398315a97348760f80182a041f20e51b27ec456a28c5d8999e39a7149d1" Mar 13 11:42:08 crc kubenswrapper[4632]: I0313 11:42:08.565119 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556702-prtsf" Mar 13 11:42:08 crc kubenswrapper[4632]: I0313 11:42:08.647553 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556696-6dgq6"] Mar 13 11:42:08 crc kubenswrapper[4632]: I0313 11:42:08.657610 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556696-6dgq6"] Mar 13 11:42:10 crc kubenswrapper[4632]: I0313 11:42:10.044214 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:42:10 crc kubenswrapper[4632]: E0313 11:42:10.044853 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:42:10 crc kubenswrapper[4632]: I0313 11:42:10.058312 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8600f7f-45fb-4aa6-b13b-9d6be5354009" path="/var/lib/kubelet/pods/f8600f7f-45fb-4aa6-b13b-9d6be5354009/volumes" Mar 13 11:42:21 crc kubenswrapper[4632]: I0313 11:42:21.954877 4632 scope.go:117] "RemoveContainer" containerID="ffbf598df91f4bb7277b432bb2bc1355e735cdb640ec4482a312abc6e198f0af" Mar 13 11:42:24 crc kubenswrapper[4632]: I0313 11:42:24.044217 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:42:24 crc kubenswrapper[4632]: E0313 11:42:24.045216 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:42:38 crc kubenswrapper[4632]: I0313 11:42:38.050571 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:42:38 crc kubenswrapper[4632]: E0313 11:42:38.052609 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:42:53 crc kubenswrapper[4632]: I0313 11:42:53.044926 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:42:53 crc kubenswrapper[4632]: E0313 11:42:53.046106 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:43:06 crc kubenswrapper[4632]: I0313 11:43:06.048660 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:43:06 crc kubenswrapper[4632]: E0313 11:43:06.053865 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:43:17 crc kubenswrapper[4632]: I0313 11:43:17.044221 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:43:17 crc kubenswrapper[4632]: E0313 11:43:17.045030 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:43:28 crc kubenswrapper[4632]: I0313 11:43:28.054324 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:43:28 crc kubenswrapper[4632]: E0313 11:43:28.055497 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:43:43 crc kubenswrapper[4632]: I0313 11:43:43.045022 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:43:43 crc kubenswrapper[4632]: E0313 11:43:43.045915 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:43:54 crc kubenswrapper[4632]: I0313 11:43:54.046655 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:43:54 crc kubenswrapper[4632]: E0313 11:43:54.047191 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:44:00 crc kubenswrapper[4632]: I0313 11:44:00.156473 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556704-r82jw"] Mar 13 11:44:00 crc kubenswrapper[4632]: E0313 11:44:00.158417 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8968a0-0c4c-47e1-87d8-3703bea87a89" containerName="oc" Mar 13 11:44:00 crc kubenswrapper[4632]: I0313 11:44:00.158501 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8968a0-0c4c-47e1-87d8-3703bea87a89" containerName="oc" Mar 13 11:44:00 crc kubenswrapper[4632]: I0313 11:44:00.158743 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec8968a0-0c4c-47e1-87d8-3703bea87a89" containerName="oc" Mar 13 11:44:00 crc kubenswrapper[4632]: I0313 11:44:00.159451 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556704-r82jw" Mar 13 11:44:00 crc kubenswrapper[4632]: I0313 11:44:00.167861 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556704-r82jw"] Mar 13 11:44:00 crc kubenswrapper[4632]: I0313 11:44:00.168603 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:44:00 crc kubenswrapper[4632]: I0313 11:44:00.168601 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:44:00 crc kubenswrapper[4632]: I0313 11:44:00.177669 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:44:00 crc kubenswrapper[4632]: I0313 11:44:00.337300 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcks9\" (UniqueName: \"kubernetes.io/projected/6300cb33-fba3-4d08-948b-0c6584d2ef26-kube-api-access-dcks9\") pod \"auto-csr-approver-29556704-r82jw\" (UID: \"6300cb33-fba3-4d08-948b-0c6584d2ef26\") " pod="openshift-infra/auto-csr-approver-29556704-r82jw" Mar 13 11:44:00 crc kubenswrapper[4632]: I0313 11:44:00.441863 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcks9\" (UniqueName: \"kubernetes.io/projected/6300cb33-fba3-4d08-948b-0c6584d2ef26-kube-api-access-dcks9\") pod \"auto-csr-approver-29556704-r82jw\" (UID: \"6300cb33-fba3-4d08-948b-0c6584d2ef26\") " pod="openshift-infra/auto-csr-approver-29556704-r82jw" Mar 13 11:44:00 crc kubenswrapper[4632]: I0313 11:44:00.475217 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcks9\" (UniqueName: 
\"kubernetes.io/projected/6300cb33-fba3-4d08-948b-0c6584d2ef26-kube-api-access-dcks9\") pod \"auto-csr-approver-29556704-r82jw\" (UID: \"6300cb33-fba3-4d08-948b-0c6584d2ef26\") " pod="openshift-infra/auto-csr-approver-29556704-r82jw" Mar 13 11:44:00 crc kubenswrapper[4632]: I0313 11:44:00.480065 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556704-r82jw" Mar 13 11:44:00 crc kubenswrapper[4632]: I0313 11:44:00.968527 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 11:44:00 crc kubenswrapper[4632]: I0313 11:44:00.970605 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556704-r82jw"] Mar 13 11:44:01 crc kubenswrapper[4632]: I0313 11:44:01.808487 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556704-r82jw" event={"ID":"6300cb33-fba3-4d08-948b-0c6584d2ef26","Type":"ContainerStarted","Data":"3ec1b3268e2b207a1a34f7039203387c5cea5c789a84c70b274d99fe9ffa654c"} Mar 13 11:44:02 crc kubenswrapper[4632]: E0313 11:44:02.497060 4632 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6300cb33_fba3_4d08_948b_0c6584d2ef26.slice/crio-be61603d8895006f002bb067cf8ee34fe273ab45dfec9b8aa73261bbdd0ea048.scope\": RecentStats: unable to find data in memory cache]" Mar 13 11:44:02 crc kubenswrapper[4632]: I0313 11:44:02.822428 4632 generic.go:334] "Generic (PLEG): container finished" podID="6300cb33-fba3-4d08-948b-0c6584d2ef26" containerID="be61603d8895006f002bb067cf8ee34fe273ab45dfec9b8aa73261bbdd0ea048" exitCode=0 Mar 13 11:44:02 crc kubenswrapper[4632]: I0313 11:44:02.822486 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556704-r82jw" event={"ID":"6300cb33-fba3-4d08-948b-0c6584d2ef26","Type":"ContainerDied","Data":"be61603d8895006f002bb067cf8ee34fe273ab45dfec9b8aa73261bbdd0ea048"} Mar 13 11:44:04 crc kubenswrapper[4632]: I0313 11:44:04.201490 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556704-r82jw" Mar 13 11:44:04 crc kubenswrapper[4632]: I0313 11:44:04.318361 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcks9\" (UniqueName: \"kubernetes.io/projected/6300cb33-fba3-4d08-948b-0c6584d2ef26-kube-api-access-dcks9\") pod \"6300cb33-fba3-4d08-948b-0c6584d2ef26\" (UID: \"6300cb33-fba3-4d08-948b-0c6584d2ef26\") " Mar 13 11:44:04 crc kubenswrapper[4632]: I0313 11:44:04.324162 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6300cb33-fba3-4d08-948b-0c6584d2ef26-kube-api-access-dcks9" (OuterVolumeSpecName: "kube-api-access-dcks9") pod "6300cb33-fba3-4d08-948b-0c6584d2ef26" (UID: "6300cb33-fba3-4d08-948b-0c6584d2ef26"). InnerVolumeSpecName "kube-api-access-dcks9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:44:04 crc kubenswrapper[4632]: I0313 11:44:04.421411 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcks9\" (UniqueName: \"kubernetes.io/projected/6300cb33-fba3-4d08-948b-0c6584d2ef26-kube-api-access-dcks9\") on node \"crc\" DevicePath \"\"" Mar 13 11:44:04 crc kubenswrapper[4632]: I0313 11:44:04.845652 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556704-r82jw" event={"ID":"6300cb33-fba3-4d08-948b-0c6584d2ef26","Type":"ContainerDied","Data":"3ec1b3268e2b207a1a34f7039203387c5cea5c789a84c70b274d99fe9ffa654c"} Mar 13 11:44:04 crc kubenswrapper[4632]: I0313 11:44:04.845911 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ec1b3268e2b207a1a34f7039203387c5cea5c789a84c70b274d99fe9ffa654c" Mar 13 11:44:04 crc kubenswrapper[4632]: I0313 11:44:04.845764 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556704-r82jw" Mar 13 11:44:05 crc kubenswrapper[4632]: I0313 11:44:05.314901 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556698-ncrvx"] Mar 13 11:44:05 crc kubenswrapper[4632]: I0313 11:44:05.330274 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556698-ncrvx"] Mar 13 11:44:06 crc kubenswrapper[4632]: I0313 11:44:06.056705 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08ee143e-f1cf-4c38-a811-d31496082a75" path="/var/lib/kubelet/pods/08ee143e-f1cf-4c38-a811-d31496082a75/volumes" Mar 13 11:44:09 crc kubenswrapper[4632]: I0313 11:44:09.043512 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:44:09 crc kubenswrapper[4632]: E0313 11:44:09.043815 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:44:22 crc kubenswrapper[4632]: I0313 11:44:22.199556 4632 scope.go:117] "RemoveContainer" containerID="19b28d2a56d1971c59c024b2b42655c24314722844900a0860bc74bbd0e6dfd4" Mar 13 11:44:23 crc kubenswrapper[4632]: I0313 11:44:23.045359 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:44:23 crc kubenswrapper[4632]: E0313 11:44:23.046027 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:44:34 crc kubenswrapper[4632]: I0313 11:44:34.044048 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:44:34 crc kubenswrapper[4632]: E0313 11:44:34.045001 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:44:49 crc kubenswrapper[4632]: I0313 11:44:49.045351 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:44:49 crc kubenswrapper[4632]: E0313 11:44:49.046440 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.044884 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:45:00 crc kubenswrapper[4632]: E0313 11:45:00.045686 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.201268 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv"] Mar 13 11:45:00 crc kubenswrapper[4632]: E0313 11:45:00.201965 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6300cb33-fba3-4d08-948b-0c6584d2ef26" containerName="oc" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.201979 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6300cb33-fba3-4d08-948b-0c6584d2ef26" containerName="oc" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.202361 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="6300cb33-fba3-4d08-948b-0c6584d2ef26" containerName="oc" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.204601 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.207953 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.208136 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.208701 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv"] Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.364023 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-config-volume\") pod \"collect-profiles-29556705-cdjnv\" (UID: \"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.364063 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-secret-volume\") pod \"collect-profiles-29556705-cdjnv\" (UID: \"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.364118 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbkn7\" (UniqueName: \"kubernetes.io/projected/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-kube-api-access-fbkn7\") pod \"collect-profiles-29556705-cdjnv\" (UID: \"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.465989 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-config-volume\") pod \"collect-profiles-29556705-cdjnv\" (UID: \"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.466031 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-secret-volume\") pod \"collect-profiles-29556705-cdjnv\" (UID: \"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.466094 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbkn7\" (UniqueName: \"kubernetes.io/projected/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-kube-api-access-fbkn7\") pod \"collect-profiles-29556705-cdjnv\" (UID: \"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.466810 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-config-volume\") pod 
\"collect-profiles-29556705-cdjnv\" (UID: \"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.479091 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-secret-volume\") pod \"collect-profiles-29556705-cdjnv\" (UID: \"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.488833 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbkn7\" (UniqueName: \"kubernetes.io/projected/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-kube-api-access-fbkn7\") pod \"collect-profiles-29556705-cdjnv\" (UID: \"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" Mar 13 11:45:00 crc kubenswrapper[4632]: I0313 11:45:00.538611 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" Mar 13 11:45:01 crc kubenswrapper[4632]: I0313 11:45:01.138851 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv"] Mar 13 11:45:01 crc kubenswrapper[4632]: I0313 11:45:01.408069 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" event={"ID":"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7","Type":"ContainerStarted","Data":"a50d24de30277dacbb16bc71e07335e3c84d2cedb12dfb6c3d660775ff2f0c54"} Mar 13 11:45:01 crc kubenswrapper[4632]: I0313 11:45:01.408116 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" event={"ID":"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7","Type":"ContainerStarted","Data":"21cccd049d8f0c721f003daa0c370957ce291331c311203a423c768e560fb2f2"} Mar 13 11:45:02 crc kubenswrapper[4632]: I0313 11:45:02.421797 4632 generic.go:334] "Generic (PLEG): container finished" podID="964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7" containerID="a50d24de30277dacbb16bc71e07335e3c84d2cedb12dfb6c3d660775ff2f0c54" exitCode=0 Mar 13 11:45:02 crc kubenswrapper[4632]: I0313 11:45:02.421894 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" event={"ID":"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7","Type":"ContainerDied","Data":"a50d24de30277dacbb16bc71e07335e3c84d2cedb12dfb6c3d660775ff2f0c54"} Mar 13 11:45:03 crc kubenswrapper[4632]: I0313 11:45:03.806296 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" Mar 13 11:45:03 crc kubenswrapper[4632]: I0313 11:45:03.948587 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbkn7\" (UniqueName: \"kubernetes.io/projected/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-kube-api-access-fbkn7\") pod \"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7\" (UID: \"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7\") " Mar 13 11:45:03 crc kubenswrapper[4632]: I0313 11:45:03.948641 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-config-volume\") pod \"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7\" (UID: \"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7\") " Mar 13 11:45:03 crc kubenswrapper[4632]: I0313 11:45:03.948759 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-secret-volume\") pod \"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7\" (UID: \"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7\") " Mar 13 11:45:03 crc kubenswrapper[4632]: I0313 11:45:03.949674 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-config-volume" (OuterVolumeSpecName: "config-volume") pod "964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7" (UID: "964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:45:03 crc kubenswrapper[4632]: I0313 11:45:03.950863 4632 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-config-volume\") on node \"crc\" DevicePath \"\"" Mar 13 11:45:03 crc kubenswrapper[4632]: I0313 11:45:03.969588 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7" (UID: "964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:45:03 crc kubenswrapper[4632]: I0313 11:45:03.969621 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-kube-api-access-fbkn7" (OuterVolumeSpecName: "kube-api-access-fbkn7") pod "964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7" (UID: "964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7"). InnerVolumeSpecName "kube-api-access-fbkn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:45:04 crc kubenswrapper[4632]: I0313 11:45:04.052435 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbkn7\" (UniqueName: \"kubernetes.io/projected/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-kube-api-access-fbkn7\") on node \"crc\" DevicePath \"\"" Mar 13 11:45:04 crc kubenswrapper[4632]: I0313 11:45:04.052472 4632 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 13 11:45:04 crc kubenswrapper[4632]: I0313 11:45:04.449978 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" event={"ID":"964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7","Type":"ContainerDied","Data":"21cccd049d8f0c721f003daa0c370957ce291331c311203a423c768e560fb2f2"} Mar 13 11:45:04 crc kubenswrapper[4632]: I0313 11:45:04.450022 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21cccd049d8f0c721f003daa0c370957ce291331c311203a423c768e560fb2f2" Mar 13 11:45:04 crc kubenswrapper[4632]: I0313 11:45:04.450092 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv" Mar 13 11:45:04 crc kubenswrapper[4632]: I0313 11:45:04.521205 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8"] Mar 13 11:45:04 crc kubenswrapper[4632]: I0313 11:45:04.530092 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556660-7vph8"] Mar 13 11:45:06 crc kubenswrapper[4632]: I0313 11:45:06.057959 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f506e288-f3da-4d62-a6a2-bb598a62ed13" path="/var/lib/kubelet/pods/f506e288-f3da-4d62-a6a2-bb598a62ed13/volumes" Mar 13 11:45:13 crc kubenswrapper[4632]: I0313 11:45:13.044542 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:45:13 crc kubenswrapper[4632]: I0313 11:45:13.546835 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"a91f451a2842f8b1b73b10a99ff94ea342a17276601161b96bf6802b9f5327a9"} Mar 13 11:45:22 crc kubenswrapper[4632]: I0313 11:45:22.316873 4632 scope.go:117] "RemoveContainer" containerID="df99b126bcdc13810e89ae823dc76bf43eab9d932c52b6dd430fa449a698c642" Mar 13 11:46:00 crc kubenswrapper[4632]: I0313 11:46:00.154455 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556706-psc7t"] Mar 13 11:46:00 crc kubenswrapper[4632]: E0313 11:46:00.155431 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7" containerName="collect-profiles" Mar 13 11:46:00 crc kubenswrapper[4632]: I0313 11:46:00.155445 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7" containerName="collect-profiles" Mar 13 11:46:00 crc kubenswrapper[4632]: I0313 11:46:00.155656 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7" containerName="collect-profiles" Mar 13 11:46:00 crc kubenswrapper[4632]: I0313 11:46:00.156415 
4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556706-psc7t" Mar 13 11:46:00 crc kubenswrapper[4632]: I0313 11:46:00.159106 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:46:00 crc kubenswrapper[4632]: I0313 11:46:00.159616 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:46:00 crc kubenswrapper[4632]: I0313 11:46:00.161775 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:46:00 crc kubenswrapper[4632]: I0313 11:46:00.169414 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556706-psc7t"] Mar 13 11:46:00 crc kubenswrapper[4632]: I0313 11:46:00.279753 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wcp2\" (UniqueName: \"kubernetes.io/projected/8980f067-488f-497f-8ba7-5ee2d3069d62-kube-api-access-2wcp2\") pod \"auto-csr-approver-29556706-psc7t\" (UID: \"8980f067-488f-497f-8ba7-5ee2d3069d62\") " pod="openshift-infra/auto-csr-approver-29556706-psc7t" Mar 13 11:46:00 crc kubenswrapper[4632]: I0313 11:46:00.381509 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wcp2\" (UniqueName: \"kubernetes.io/projected/8980f067-488f-497f-8ba7-5ee2d3069d62-kube-api-access-2wcp2\") pod \"auto-csr-approver-29556706-psc7t\" (UID: \"8980f067-488f-497f-8ba7-5ee2d3069d62\") " pod="openshift-infra/auto-csr-approver-29556706-psc7t" Mar 13 11:46:00 crc kubenswrapper[4632]: I0313 11:46:00.412187 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wcp2\" (UniqueName: \"kubernetes.io/projected/8980f067-488f-497f-8ba7-5ee2d3069d62-kube-api-access-2wcp2\") pod \"auto-csr-approver-29556706-psc7t\" (UID: \"8980f067-488f-497f-8ba7-5ee2d3069d62\") " pod="openshift-infra/auto-csr-approver-29556706-psc7t" Mar 13 11:46:00 crc kubenswrapper[4632]: I0313 11:46:00.479214 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556706-psc7t" Mar 13 11:46:01 crc kubenswrapper[4632]: I0313 11:46:01.032056 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556706-psc7t"] Mar 13 11:46:01 crc kubenswrapper[4632]: I0313 11:46:01.235244 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556706-psc7t" event={"ID":"8980f067-488f-497f-8ba7-5ee2d3069d62","Type":"ContainerStarted","Data":"5c5f685d468e35ff54098d18f75b65f3dcbdea73c7e4e084c9f376a192c7c67f"} Mar 13 11:46:03 crc kubenswrapper[4632]: I0313 11:46:03.259205 4632 generic.go:334] "Generic (PLEG): container finished" podID="8980f067-488f-497f-8ba7-5ee2d3069d62" containerID="49f7bf435fba27e68a413e86a923b4ddacb7432c6b3ec46cefd0935c8e2aecc2" exitCode=0 Mar 13 11:46:03 crc kubenswrapper[4632]: I0313 11:46:03.259313 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556706-psc7t" event={"ID":"8980f067-488f-497f-8ba7-5ee2d3069d62","Type":"ContainerDied","Data":"49f7bf435fba27e68a413e86a923b4ddacb7432c6b3ec46cefd0935c8e2aecc2"} Mar 13 11:46:04 crc kubenswrapper[4632]: I0313 11:46:04.679919 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556706-psc7t" Mar 13 11:46:04 crc kubenswrapper[4632]: I0313 11:46:04.802772 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wcp2\" (UniqueName: \"kubernetes.io/projected/8980f067-488f-497f-8ba7-5ee2d3069d62-kube-api-access-2wcp2\") pod \"8980f067-488f-497f-8ba7-5ee2d3069d62\" (UID: \"8980f067-488f-497f-8ba7-5ee2d3069d62\") " Mar 13 11:46:04 crc kubenswrapper[4632]: I0313 11:46:04.809422 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8980f067-488f-497f-8ba7-5ee2d3069d62-kube-api-access-2wcp2" (OuterVolumeSpecName: "kube-api-access-2wcp2") pod "8980f067-488f-497f-8ba7-5ee2d3069d62" (UID: "8980f067-488f-497f-8ba7-5ee2d3069d62"). InnerVolumeSpecName "kube-api-access-2wcp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:46:04 crc kubenswrapper[4632]: I0313 11:46:04.905015 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wcp2\" (UniqueName: \"kubernetes.io/projected/8980f067-488f-497f-8ba7-5ee2d3069d62-kube-api-access-2wcp2\") on node \"crc\" DevicePath \"\"" Mar 13 11:46:05 crc kubenswrapper[4632]: I0313 11:46:05.280138 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556706-psc7t" event={"ID":"8980f067-488f-497f-8ba7-5ee2d3069d62","Type":"ContainerDied","Data":"5c5f685d468e35ff54098d18f75b65f3dcbdea73c7e4e084c9f376a192c7c67f"} Mar 13 11:46:05 crc kubenswrapper[4632]: I0313 11:46:05.280179 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c5f685d468e35ff54098d18f75b65f3dcbdea73c7e4e084c9f376a192c7c67f" Mar 13 11:46:05 crc kubenswrapper[4632]: I0313 11:46:05.280232 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556706-psc7t" Mar 13 11:46:05 crc kubenswrapper[4632]: I0313 11:46:05.764502 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556700-rjhgg"] Mar 13 11:46:05 crc kubenswrapper[4632]: I0313 11:46:05.776926 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556700-rjhgg"] Mar 13 11:46:06 crc kubenswrapper[4632]: I0313 11:46:06.058826 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5" path="/var/lib/kubelet/pods/d39670e0-5c6f-40f3-b7b1-46dc6fdc5fc5/volumes" Mar 13 11:46:22 crc kubenswrapper[4632]: I0313 11:46:22.380329 4632 scope.go:117] "RemoveContainer" containerID="9d9bde8c088f7036968dfb6799f01abee05cb617fe7dd0b877a31e0a9fec56bb" Mar 13 11:46:22 crc kubenswrapper[4632]: I0313 11:46:22.403396 4632 scope.go:117] "RemoveContainer" containerID="38648afa2577e5a9fb5d9d7e00b5f9e414dd8af022764679faa186da16c72b26" Mar 13 11:46:22 crc kubenswrapper[4632]: I0313 11:46:22.429544 4632 scope.go:117] "RemoveContainer" containerID="e2dc32a91f84dbc41f05188f09b5ec2790c5d429e256411e1defae69e3e43deb" Mar 13 11:46:22 crc kubenswrapper[4632]: I0313 11:46:22.504298 4632 scope.go:117] "RemoveContainer" containerID="4f578f7aba25fc35c2994f8884e7599a303084b2db092f8a745f47b140231631" Mar 13 11:47:13 crc kubenswrapper[4632]: I0313 11:47:13.485381 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hrpzk"] Mar 13 11:47:13 crc kubenswrapper[4632]: E0313 11:47:13.486343 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8980f067-488f-497f-8ba7-5ee2d3069d62" containerName="oc" Mar 13 11:47:13 crc kubenswrapper[4632]: I0313 11:47:13.486360 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8980f067-488f-497f-8ba7-5ee2d3069d62" containerName="oc" Mar 13 11:47:13 crc kubenswrapper[4632]: I0313 11:47:13.487110 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8980f067-488f-497f-8ba7-5ee2d3069d62" containerName="oc" Mar 13 11:47:13 crc kubenswrapper[4632]: I0313 11:47:13.488684 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:13 crc kubenswrapper[4632]: I0313 11:47:13.510479 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrpzk"] Mar 13 11:47:13 crc kubenswrapper[4632]: I0313 11:47:13.616316 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqpsp\" (UniqueName: \"kubernetes.io/projected/b73e9e28-59a7-4e69-818c-03972ee9f6db-kube-api-access-bqpsp\") pod \"redhat-marketplace-hrpzk\" (UID: \"b73e9e28-59a7-4e69-818c-03972ee9f6db\") " pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:13 crc kubenswrapper[4632]: I0313 11:47:13.616386 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b73e9e28-59a7-4e69-818c-03972ee9f6db-catalog-content\") pod \"redhat-marketplace-hrpzk\" (UID: \"b73e9e28-59a7-4e69-818c-03972ee9f6db\") " pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:13 crc kubenswrapper[4632]: I0313 11:47:13.616464 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b73e9e28-59a7-4e69-818c-03972ee9f6db-utilities\") pod \"redhat-marketplace-hrpzk\" (UID: \"b73e9e28-59a7-4e69-818c-03972ee9f6db\") " pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:13 crc kubenswrapper[4632]: I0313 11:47:13.718109 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b73e9e28-59a7-4e69-818c-03972ee9f6db-utilities\") pod \"redhat-marketplace-hrpzk\" (UID: \"b73e9e28-59a7-4e69-818c-03972ee9f6db\") " pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:13 crc kubenswrapper[4632]: I0313 11:47:13.718736 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b73e9e28-59a7-4e69-818c-03972ee9f6db-utilities\") pod \"redhat-marketplace-hrpzk\" (UID: \"b73e9e28-59a7-4e69-818c-03972ee9f6db\") " pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:13 crc kubenswrapper[4632]: I0313 11:47:13.719087 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqpsp\" (UniqueName: \"kubernetes.io/projected/b73e9e28-59a7-4e69-818c-03972ee9f6db-kube-api-access-bqpsp\") pod \"redhat-marketplace-hrpzk\" (UID: \"b73e9e28-59a7-4e69-818c-03972ee9f6db\") " pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:13 crc kubenswrapper[4632]: I0313 11:47:13.719534 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b73e9e28-59a7-4e69-818c-03972ee9f6db-catalog-content\") pod \"redhat-marketplace-hrpzk\" (UID: \"b73e9e28-59a7-4e69-818c-03972ee9f6db\") " pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:13 crc kubenswrapper[4632]: I0313 11:47:13.719856 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b73e9e28-59a7-4e69-818c-03972ee9f6db-catalog-content\") pod \"redhat-marketplace-hrpzk\" (UID: \"b73e9e28-59a7-4e69-818c-03972ee9f6db\") " pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:13 crc kubenswrapper[4632]: I0313 11:47:13.737920 4632 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bqpsp\" (UniqueName: \"kubernetes.io/projected/b73e9e28-59a7-4e69-818c-03972ee9f6db-kube-api-access-bqpsp\") pod \"redhat-marketplace-hrpzk\" (UID: \"b73e9e28-59a7-4e69-818c-03972ee9f6db\") " pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:13 crc kubenswrapper[4632]: I0313 11:47:13.852748 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:14 crc kubenswrapper[4632]: I0313 11:47:14.349262 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrpzk"] Mar 13 11:47:15 crc kubenswrapper[4632]: I0313 11:47:15.000731 4632 generic.go:334] "Generic (PLEG): container finished" podID="b73e9e28-59a7-4e69-818c-03972ee9f6db" containerID="daed49207d919944fcd55fa320fc1c031d61060fce906a2b0f553c91d9ebbb5b" exitCode=0 Mar 13 11:47:15 crc kubenswrapper[4632]: I0313 11:47:15.001001 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrpzk" event={"ID":"b73e9e28-59a7-4e69-818c-03972ee9f6db","Type":"ContainerDied","Data":"daed49207d919944fcd55fa320fc1c031d61060fce906a2b0f553c91d9ebbb5b"} Mar 13 11:47:15 crc kubenswrapper[4632]: I0313 11:47:15.001061 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrpzk" event={"ID":"b73e9e28-59a7-4e69-818c-03972ee9f6db","Type":"ContainerStarted","Data":"68fa17ac498b5d3b5b99bb554433e7a897e9d668ce1084e5bebf2b51c96391f9"} Mar 13 11:47:17 crc kubenswrapper[4632]: I0313 11:47:17.032970 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrpzk" event={"ID":"b73e9e28-59a7-4e69-818c-03972ee9f6db","Type":"ContainerStarted","Data":"8f3b80c84e004d10de4ab88d81eca36dc2a9a31a2533f8d5d74d789e09b9f3c1"} Mar 13 11:47:18 crc kubenswrapper[4632]: I0313 11:47:18.045531 4632 generic.go:334] "Generic (PLEG): container finished" podID="b73e9e28-59a7-4e69-818c-03972ee9f6db" containerID="8f3b80c84e004d10de4ab88d81eca36dc2a9a31a2533f8d5d74d789e09b9f3c1" exitCode=0 Mar 13 11:47:18 crc kubenswrapper[4632]: I0313 11:47:18.057707 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrpzk" event={"ID":"b73e9e28-59a7-4e69-818c-03972ee9f6db","Type":"ContainerDied","Data":"8f3b80c84e004d10de4ab88d81eca36dc2a9a31a2533f8d5d74d789e09b9f3c1"} Mar 13 11:47:19 crc kubenswrapper[4632]: I0313 11:47:19.057401 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrpzk" event={"ID":"b73e9e28-59a7-4e69-818c-03972ee9f6db","Type":"ContainerStarted","Data":"aad74b1bbd85b819ef3895de0f161045236f8fae19bf8e68390588fd962447e5"} Mar 13 11:47:19 crc kubenswrapper[4632]: I0313 11:47:19.080964 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hrpzk" podStartSLOduration=2.362119064 podStartE2EDuration="6.080931074s" podCreationTimestamp="2026-03-13 11:47:13 +0000 UTC" firstStartedPulling="2026-03-13 11:47:15.007456103 +0000 UTC m=+6209.029986236" lastFinishedPulling="2026-03-13 11:47:18.726268113 +0000 UTC m=+6212.748798246" observedRunningTime="2026-03-13 11:47:19.074370742 +0000 UTC m=+6213.096900875" watchObservedRunningTime="2026-03-13 11:47:19.080931074 +0000 UTC m=+6213.103461207" Mar 13 11:47:23 crc kubenswrapper[4632]: I0313 11:47:23.853680 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:23 crc kubenswrapper[4632]: I0313 11:47:23.854263 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:24 crc kubenswrapper[4632]: I0313 11:47:24.896120 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hrpzk" podUID="b73e9e28-59a7-4e69-818c-03972ee9f6db" containerName="registry-server" probeResult="failure" output=< Mar 13 11:47:24 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:47:24 crc kubenswrapper[4632]: > Mar 13 11:47:33 crc kubenswrapper[4632]: I0313 11:47:33.898543 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:33 crc kubenswrapper[4632]: I0313 11:47:33.954306 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:34 crc kubenswrapper[4632]: I0313 11:47:34.170143 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrpzk"] Mar 13 11:47:35 crc kubenswrapper[4632]: I0313 11:47:35.199407 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hrpzk" podUID="b73e9e28-59a7-4e69-818c-03972ee9f6db" containerName="registry-server" containerID="cri-o://aad74b1bbd85b819ef3895de0f161045236f8fae19bf8e68390588fd962447e5" gracePeriod=2 Mar 13 11:47:35 crc kubenswrapper[4632]: I0313 11:47:35.770095 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:35 crc kubenswrapper[4632]: I0313 11:47:35.897993 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqpsp\" (UniqueName: \"kubernetes.io/projected/b73e9e28-59a7-4e69-818c-03972ee9f6db-kube-api-access-bqpsp\") pod \"b73e9e28-59a7-4e69-818c-03972ee9f6db\" (UID: \"b73e9e28-59a7-4e69-818c-03972ee9f6db\") " Mar 13 11:47:35 crc kubenswrapper[4632]: I0313 11:47:35.898104 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b73e9e28-59a7-4e69-818c-03972ee9f6db-utilities\") pod \"b73e9e28-59a7-4e69-818c-03972ee9f6db\" (UID: \"b73e9e28-59a7-4e69-818c-03972ee9f6db\") " Mar 13 11:47:35 crc kubenswrapper[4632]: I0313 11:47:35.898349 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b73e9e28-59a7-4e69-818c-03972ee9f6db-catalog-content\") pod \"b73e9e28-59a7-4e69-818c-03972ee9f6db\" (UID: \"b73e9e28-59a7-4e69-818c-03972ee9f6db\") " Mar 13 11:47:35 crc kubenswrapper[4632]: I0313 11:47:35.899356 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b73e9e28-59a7-4e69-818c-03972ee9f6db-utilities" (OuterVolumeSpecName: "utilities") pod "b73e9e28-59a7-4e69-818c-03972ee9f6db" (UID: "b73e9e28-59a7-4e69-818c-03972ee9f6db"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:47:35 crc kubenswrapper[4632]: I0313 11:47:35.911143 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b73e9e28-59a7-4e69-818c-03972ee9f6db-kube-api-access-bqpsp" (OuterVolumeSpecName: "kube-api-access-bqpsp") pod "b73e9e28-59a7-4e69-818c-03972ee9f6db" (UID: "b73e9e28-59a7-4e69-818c-03972ee9f6db"). InnerVolumeSpecName "kube-api-access-bqpsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:47:35 crc kubenswrapper[4632]: I0313 11:47:35.924959 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b73e9e28-59a7-4e69-818c-03972ee9f6db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b73e9e28-59a7-4e69-818c-03972ee9f6db" (UID: "b73e9e28-59a7-4e69-818c-03972ee9f6db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.000164 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b73e9e28-59a7-4e69-818c-03972ee9f6db-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.000217 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqpsp\" (UniqueName: \"kubernetes.io/projected/b73e9e28-59a7-4e69-818c-03972ee9f6db-kube-api-access-bqpsp\") on node \"crc\" DevicePath \"\"" Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.000231 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b73e9e28-59a7-4e69-818c-03972ee9f6db-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.211339 4632 generic.go:334] "Generic (PLEG): container finished" podID="b73e9e28-59a7-4e69-818c-03972ee9f6db" containerID="aad74b1bbd85b819ef3895de0f161045236f8fae19bf8e68390588fd962447e5" exitCode=0 Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.211381 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrpzk" event={"ID":"b73e9e28-59a7-4e69-818c-03972ee9f6db","Type":"ContainerDied","Data":"aad74b1bbd85b819ef3895de0f161045236f8fae19bf8e68390588fd962447e5"} Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.211648 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrpzk" event={"ID":"b73e9e28-59a7-4e69-818c-03972ee9f6db","Type":"ContainerDied","Data":"68fa17ac498b5d3b5b99bb554433e7a897e9d668ce1084e5bebf2b51c96391f9"} Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.211668 4632 scope.go:117] "RemoveContainer" containerID="aad74b1bbd85b819ef3895de0f161045236f8fae19bf8e68390588fd962447e5" Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.211455 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrpzk" Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.237708 4632 scope.go:117] "RemoveContainer" containerID="8f3b80c84e004d10de4ab88d81eca36dc2a9a31a2533f8d5d74d789e09b9f3c1" Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.261069 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrpzk"] Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.261615 4632 scope.go:117] "RemoveContainer" containerID="daed49207d919944fcd55fa320fc1c031d61060fce906a2b0f553c91d9ebbb5b" Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.268469 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrpzk"] Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.338806 4632 scope.go:117] "RemoveContainer" containerID="aad74b1bbd85b819ef3895de0f161045236f8fae19bf8e68390588fd962447e5" Mar 13 11:47:36 crc kubenswrapper[4632]: E0313 11:47:36.341780 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aad74b1bbd85b819ef3895de0f161045236f8fae19bf8e68390588fd962447e5\": container with ID starting with aad74b1bbd85b819ef3895de0f161045236f8fae19bf8e68390588fd962447e5 not found: ID does not exist" containerID="aad74b1bbd85b819ef3895de0f161045236f8fae19bf8e68390588fd962447e5" Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.341832 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad74b1bbd85b819ef3895de0f161045236f8fae19bf8e68390588fd962447e5"} err="failed to get container status \"aad74b1bbd85b819ef3895de0f161045236f8fae19bf8e68390588fd962447e5\": rpc error: code = NotFound desc = could not find container \"aad74b1bbd85b819ef3895de0f161045236f8fae19bf8e68390588fd962447e5\": container with ID starting with aad74b1bbd85b819ef3895de0f161045236f8fae19bf8e68390588fd962447e5 not found: ID does not exist" Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.341885 4632 scope.go:117] "RemoveContainer" containerID="8f3b80c84e004d10de4ab88d81eca36dc2a9a31a2533f8d5d74d789e09b9f3c1" Mar 13 11:47:36 crc kubenswrapper[4632]: E0313 11:47:36.342710 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f3b80c84e004d10de4ab88d81eca36dc2a9a31a2533f8d5d74d789e09b9f3c1\": container with ID starting with 8f3b80c84e004d10de4ab88d81eca36dc2a9a31a2533f8d5d74d789e09b9f3c1 not found: ID does not exist" containerID="8f3b80c84e004d10de4ab88d81eca36dc2a9a31a2533f8d5d74d789e09b9f3c1" Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.342904 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f3b80c84e004d10de4ab88d81eca36dc2a9a31a2533f8d5d74d789e09b9f3c1"} err="failed to get container status \"8f3b80c84e004d10de4ab88d81eca36dc2a9a31a2533f8d5d74d789e09b9f3c1\": rpc error: code = NotFound desc = could not find container \"8f3b80c84e004d10de4ab88d81eca36dc2a9a31a2533f8d5d74d789e09b9f3c1\": container with ID starting with 8f3b80c84e004d10de4ab88d81eca36dc2a9a31a2533f8d5d74d789e09b9f3c1 not found: ID does not exist" Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.342956 4632 scope.go:117] "RemoveContainer" containerID="daed49207d919944fcd55fa320fc1c031d61060fce906a2b0f553c91d9ebbb5b" Mar 13 11:47:36 crc kubenswrapper[4632]: E0313 11:47:36.343390 4632 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"daed49207d919944fcd55fa320fc1c031d61060fce906a2b0f553c91d9ebbb5b\": container with ID starting with daed49207d919944fcd55fa320fc1c031d61060fce906a2b0f553c91d9ebbb5b not found: ID does not exist" containerID="daed49207d919944fcd55fa320fc1c031d61060fce906a2b0f553c91d9ebbb5b" Mar 13 11:47:36 crc kubenswrapper[4632]: I0313 11:47:36.343408 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daed49207d919944fcd55fa320fc1c031d61060fce906a2b0f553c91d9ebbb5b"} err="failed to get container status \"daed49207d919944fcd55fa320fc1c031d61060fce906a2b0f553c91d9ebbb5b\": rpc error: code = NotFound desc = could not find container \"daed49207d919944fcd55fa320fc1c031d61060fce906a2b0f553c91d9ebbb5b\": container with ID starting with daed49207d919944fcd55fa320fc1c031d61060fce906a2b0f553c91d9ebbb5b not found: ID does not exist" Mar 13 11:47:38 crc kubenswrapper[4632]: I0313 11:47:38.060497 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b73e9e28-59a7-4e69-818c-03972ee9f6db" path="/var/lib/kubelet/pods/b73e9e28-59a7-4e69-818c-03972ee9f6db/volumes" Mar 13 11:47:40 crc kubenswrapper[4632]: I0313 11:47:40.461095 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:47:40 crc kubenswrapper[4632]: I0313 11:47:40.461553 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:48:00 crc kubenswrapper[4632]: I0313 11:48:00.191073 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556708-f6hsv"] Mar 13 11:48:00 crc kubenswrapper[4632]: E0313 11:48:00.192042 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73e9e28-59a7-4e69-818c-03972ee9f6db" containerName="extract-content" Mar 13 11:48:00 crc kubenswrapper[4632]: I0313 11:48:00.192057 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73e9e28-59a7-4e69-818c-03972ee9f6db" containerName="extract-content" Mar 13 11:48:00 crc kubenswrapper[4632]: E0313 11:48:00.192066 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73e9e28-59a7-4e69-818c-03972ee9f6db" containerName="registry-server" Mar 13 11:48:00 crc kubenswrapper[4632]: I0313 11:48:00.192072 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73e9e28-59a7-4e69-818c-03972ee9f6db" containerName="registry-server" Mar 13 11:48:00 crc kubenswrapper[4632]: E0313 11:48:00.192116 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73e9e28-59a7-4e69-818c-03972ee9f6db" containerName="extract-utilities" Mar 13 11:48:00 crc kubenswrapper[4632]: I0313 11:48:00.192125 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73e9e28-59a7-4e69-818c-03972ee9f6db" containerName="extract-utilities" Mar 13 11:48:00 crc kubenswrapper[4632]: I0313 11:48:00.192301 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="b73e9e28-59a7-4e69-818c-03972ee9f6db" containerName="registry-server" Mar 13 11:48:00 crc kubenswrapper[4632]: I0313 
11:48:00.192926 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556708-f6hsv" Mar 13 11:48:00 crc kubenswrapper[4632]: I0313 11:48:00.195872 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:48:00 crc kubenswrapper[4632]: I0313 11:48:00.196400 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:48:00 crc kubenswrapper[4632]: I0313 11:48:00.196679 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:48:00 crc kubenswrapper[4632]: I0313 11:48:00.210922 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556708-f6hsv"] Mar 13 11:48:00 crc kubenswrapper[4632]: I0313 11:48:00.296205 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsjt5\" (UniqueName: \"kubernetes.io/projected/5464d278-31e9-45aa-9e87-78ef3e96115e-kube-api-access-gsjt5\") pod \"auto-csr-approver-29556708-f6hsv\" (UID: \"5464d278-31e9-45aa-9e87-78ef3e96115e\") " pod="openshift-infra/auto-csr-approver-29556708-f6hsv" Mar 13 11:48:00 crc kubenswrapper[4632]: I0313 11:48:00.397894 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsjt5\" (UniqueName: \"kubernetes.io/projected/5464d278-31e9-45aa-9e87-78ef3e96115e-kube-api-access-gsjt5\") pod \"auto-csr-approver-29556708-f6hsv\" (UID: \"5464d278-31e9-45aa-9e87-78ef3e96115e\") " pod="openshift-infra/auto-csr-approver-29556708-f6hsv" Mar 13 11:48:00 crc kubenswrapper[4632]: I0313 11:48:00.422077 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsjt5\" (UniqueName: \"kubernetes.io/projected/5464d278-31e9-45aa-9e87-78ef3e96115e-kube-api-access-gsjt5\") pod \"auto-csr-approver-29556708-f6hsv\" (UID: \"5464d278-31e9-45aa-9e87-78ef3e96115e\") " pod="openshift-infra/auto-csr-approver-29556708-f6hsv" Mar 13 11:48:00 crc kubenswrapper[4632]: I0313 11:48:00.526530 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556708-f6hsv" Mar 13 11:48:00 crc kubenswrapper[4632]: I0313 11:48:00.869722 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556708-f6hsv"] Mar 13 11:48:01 crc kubenswrapper[4632]: I0313 11:48:01.462859 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556708-f6hsv" event={"ID":"5464d278-31e9-45aa-9e87-78ef3e96115e","Type":"ContainerStarted","Data":"3bbec0ef6d8358428ebfe87e34ae623dc5307c008bdbed4bc8220f8b08a51513"} Mar 13 11:48:03 crc kubenswrapper[4632]: I0313 11:48:03.489532 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556708-f6hsv" event={"ID":"5464d278-31e9-45aa-9e87-78ef3e96115e","Type":"ContainerStarted","Data":"6f4e30c3bf10310c255b10e3f6602511c866fd5af961f5f486fed69de586adb4"} Mar 13 11:48:03 crc kubenswrapper[4632]: I0313 11:48:03.515815 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556708-f6hsv" podStartSLOduration=2.099296206 podStartE2EDuration="3.515792034s" podCreationTimestamp="2026-03-13 11:48:00 +0000 UTC" firstStartedPulling="2026-03-13 11:48:00.878026856 +0000 UTC m=+6254.900556989" lastFinishedPulling="2026-03-13 11:48:02.294522684 +0000 UTC m=+6256.317052817" observedRunningTime="2026-03-13 11:48:03.506807703 +0000 UTC m=+6257.529337846" watchObservedRunningTime="2026-03-13 11:48:03.515792034 +0000 UTC m=+6257.538322167" Mar 13 11:48:04 crc kubenswrapper[4632]: I0313 11:48:04.503365 4632 generic.go:334] "Generic (PLEG): container finished" podID="5464d278-31e9-45aa-9e87-78ef3e96115e" containerID="6f4e30c3bf10310c255b10e3f6602511c866fd5af961f5f486fed69de586adb4" exitCode=0 Mar 13 11:48:04 crc kubenswrapper[4632]: I0313 11:48:04.503705 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556708-f6hsv" event={"ID":"5464d278-31e9-45aa-9e87-78ef3e96115e","Type":"ContainerDied","Data":"6f4e30c3bf10310c255b10e3f6602511c866fd5af961f5f486fed69de586adb4"} Mar 13 11:48:05 crc kubenswrapper[4632]: I0313 11:48:05.950993 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556708-f6hsv" Mar 13 11:48:06 crc kubenswrapper[4632]: I0313 11:48:06.104964 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsjt5\" (UniqueName: \"kubernetes.io/projected/5464d278-31e9-45aa-9e87-78ef3e96115e-kube-api-access-gsjt5\") pod \"5464d278-31e9-45aa-9e87-78ef3e96115e\" (UID: \"5464d278-31e9-45aa-9e87-78ef3e96115e\") " Mar 13 11:48:06 crc kubenswrapper[4632]: I0313 11:48:06.111203 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5464d278-31e9-45aa-9e87-78ef3e96115e-kube-api-access-gsjt5" (OuterVolumeSpecName: "kube-api-access-gsjt5") pod "5464d278-31e9-45aa-9e87-78ef3e96115e" (UID: "5464d278-31e9-45aa-9e87-78ef3e96115e"). InnerVolumeSpecName "kube-api-access-gsjt5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:48:06 crc kubenswrapper[4632]: I0313 11:48:06.208267 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsjt5\" (UniqueName: \"kubernetes.io/projected/5464d278-31e9-45aa-9e87-78ef3e96115e-kube-api-access-gsjt5\") on node \"crc\" DevicePath \"\"" Mar 13 11:48:06 crc kubenswrapper[4632]: I0313 11:48:06.521044 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556708-f6hsv" event={"ID":"5464d278-31e9-45aa-9e87-78ef3e96115e","Type":"ContainerDied","Data":"3bbec0ef6d8358428ebfe87e34ae623dc5307c008bdbed4bc8220f8b08a51513"} Mar 13 11:48:06 crc kubenswrapper[4632]: I0313 11:48:06.521108 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bbec0ef6d8358428ebfe87e34ae623dc5307c008bdbed4bc8220f8b08a51513" Mar 13 11:48:06 crc kubenswrapper[4632]: I0313 11:48:06.521112 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556708-f6hsv" Mar 13 11:48:06 crc kubenswrapper[4632]: I0313 11:48:06.586298 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556702-prtsf"] Mar 13 11:48:06 crc kubenswrapper[4632]: I0313 11:48:06.593916 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556702-prtsf"] Mar 13 11:48:08 crc kubenswrapper[4632]: I0313 11:48:08.062322 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec8968a0-0c4c-47e1-87d8-3703bea87a89" path="/var/lib/kubelet/pods/ec8968a0-0c4c-47e1-87d8-3703bea87a89/volumes" Mar 13 11:48:10 crc kubenswrapper[4632]: I0313 11:48:10.461187 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:48:10 crc kubenswrapper[4632]: I0313 11:48:10.461554 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:48:19 crc kubenswrapper[4632]: I0313 11:48:19.771078 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-72gcl"] Mar 13 11:48:19 crc kubenswrapper[4632]: E0313 11:48:19.772325 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5464d278-31e9-45aa-9e87-78ef3e96115e" containerName="oc" Mar 13 11:48:19 crc kubenswrapper[4632]: I0313 11:48:19.772346 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="5464d278-31e9-45aa-9e87-78ef3e96115e" containerName="oc" Mar 13 11:48:19 crc kubenswrapper[4632]: I0313 11:48:19.772609 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="5464d278-31e9-45aa-9e87-78ef3e96115e" containerName="oc" Mar 13 11:48:19 crc kubenswrapper[4632]: I0313 11:48:19.778029 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:19 crc kubenswrapper[4632]: I0313 11:48:19.786115 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-72gcl"] Mar 13 11:48:19 crc kubenswrapper[4632]: I0313 11:48:19.900397 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d87f4ca1-c949-445e-86d5-ec3f446e07d7-utilities\") pod \"community-operators-72gcl\" (UID: \"d87f4ca1-c949-445e-86d5-ec3f446e07d7\") " pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:19 crc kubenswrapper[4632]: I0313 11:48:19.900488 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sqz2\" (UniqueName: \"kubernetes.io/projected/d87f4ca1-c949-445e-86d5-ec3f446e07d7-kube-api-access-7sqz2\") pod \"community-operators-72gcl\" (UID: \"d87f4ca1-c949-445e-86d5-ec3f446e07d7\") " pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:19 crc kubenswrapper[4632]: I0313 11:48:19.900571 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d87f4ca1-c949-445e-86d5-ec3f446e07d7-catalog-content\") pod \"community-operators-72gcl\" (UID: \"d87f4ca1-c949-445e-86d5-ec3f446e07d7\") " pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:20 crc kubenswrapper[4632]: I0313 11:48:20.002698 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sqz2\" (UniqueName: \"kubernetes.io/projected/d87f4ca1-c949-445e-86d5-ec3f446e07d7-kube-api-access-7sqz2\") pod \"community-operators-72gcl\" (UID: \"d87f4ca1-c949-445e-86d5-ec3f446e07d7\") " pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:20 crc kubenswrapper[4632]: I0313 11:48:20.002752 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d87f4ca1-c949-445e-86d5-ec3f446e07d7-catalog-content\") pod \"community-operators-72gcl\" (UID: \"d87f4ca1-c949-445e-86d5-ec3f446e07d7\") " pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:20 crc kubenswrapper[4632]: I0313 11:48:20.002930 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d87f4ca1-c949-445e-86d5-ec3f446e07d7-utilities\") pod \"community-operators-72gcl\" (UID: \"d87f4ca1-c949-445e-86d5-ec3f446e07d7\") " pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:20 crc kubenswrapper[4632]: I0313 11:48:20.003755 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d87f4ca1-c949-445e-86d5-ec3f446e07d7-catalog-content\") pod \"community-operators-72gcl\" (UID: \"d87f4ca1-c949-445e-86d5-ec3f446e07d7\") " pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:20 crc kubenswrapper[4632]: I0313 11:48:20.003821 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d87f4ca1-c949-445e-86d5-ec3f446e07d7-utilities\") pod \"community-operators-72gcl\" (UID: \"d87f4ca1-c949-445e-86d5-ec3f446e07d7\") " pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:20 crc kubenswrapper[4632]: I0313 11:48:20.024121 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7sqz2\" (UniqueName: \"kubernetes.io/projected/d87f4ca1-c949-445e-86d5-ec3f446e07d7-kube-api-access-7sqz2\") pod \"community-operators-72gcl\" (UID: \"d87f4ca1-c949-445e-86d5-ec3f446e07d7\") " pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:20 crc kubenswrapper[4632]: I0313 11:48:20.101865 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:20 crc kubenswrapper[4632]: I0313 11:48:20.673283 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-72gcl"] Mar 13 11:48:21 crc kubenswrapper[4632]: I0313 11:48:21.682409 4632 generic.go:334] "Generic (PLEG): container finished" podID="d87f4ca1-c949-445e-86d5-ec3f446e07d7" containerID="2194314bd57a0d0a7f1493e557a6ebb9931f0f83fb3a3632d522d74b98d03d0f" exitCode=0 Mar 13 11:48:21 crc kubenswrapper[4632]: I0313 11:48:21.682488 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-72gcl" event={"ID":"d87f4ca1-c949-445e-86d5-ec3f446e07d7","Type":"ContainerDied","Data":"2194314bd57a0d0a7f1493e557a6ebb9931f0f83fb3a3632d522d74b98d03d0f"} Mar 13 11:48:21 crc kubenswrapper[4632]: I0313 11:48:21.682652 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-72gcl" event={"ID":"d87f4ca1-c949-445e-86d5-ec3f446e07d7","Type":"ContainerStarted","Data":"9b74ae1b97730cf4914fb6b8ac551cf06716d9c442d8192bb2995649e62562de"} Mar 13 11:48:22 crc kubenswrapper[4632]: I0313 11:48:22.592388 4632 scope.go:117] "RemoveContainer" containerID="6ff82271933ceb662a6f6b867ecc2729be9d4acd3b4299ec77fdefa80de44bf3" Mar 13 11:48:23 crc kubenswrapper[4632]: I0313 11:48:23.707680 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-72gcl" event={"ID":"d87f4ca1-c949-445e-86d5-ec3f446e07d7","Type":"ContainerStarted","Data":"5cab776fc2c7b37147eb6269a29d34f464d6a44dbfff425d5956d4872660b7ab"} Mar 13 11:48:25 crc kubenswrapper[4632]: I0313 11:48:25.726153 4632 generic.go:334] "Generic (PLEG): container finished" podID="d87f4ca1-c949-445e-86d5-ec3f446e07d7" containerID="5cab776fc2c7b37147eb6269a29d34f464d6a44dbfff425d5956d4872660b7ab" exitCode=0 Mar 13 11:48:25 crc kubenswrapper[4632]: I0313 11:48:25.726232 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-72gcl" event={"ID":"d87f4ca1-c949-445e-86d5-ec3f446e07d7","Type":"ContainerDied","Data":"5cab776fc2c7b37147eb6269a29d34f464d6a44dbfff425d5956d4872660b7ab"} Mar 13 11:48:27 crc kubenswrapper[4632]: I0313 11:48:27.745867 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-72gcl" event={"ID":"d87f4ca1-c949-445e-86d5-ec3f446e07d7","Type":"ContainerStarted","Data":"989b7b9ba9a1536075bf1e4277ed84ded29045da7f5e204ef2c5670f8f34cbe5"} Mar 13 11:48:27 crc kubenswrapper[4632]: I0313 11:48:27.773259 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-72gcl" podStartSLOduration=3.616099 podStartE2EDuration="8.773235275s" podCreationTimestamp="2026-03-13 11:48:19 +0000 UTC" firstStartedPulling="2026-03-13 11:48:21.685359524 +0000 UTC m=+6275.707889657" lastFinishedPulling="2026-03-13 11:48:26.842495799 +0000 UTC m=+6280.865025932" observedRunningTime="2026-03-13 11:48:27.76936213 +0000 UTC m=+6281.791892283" 
watchObservedRunningTime="2026-03-13 11:48:27.773235275 +0000 UTC m=+6281.795765408" Mar 13 11:48:30 crc kubenswrapper[4632]: I0313 11:48:30.102711 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:30 crc kubenswrapper[4632]: I0313 11:48:30.103372 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:31 crc kubenswrapper[4632]: I0313 11:48:31.167221 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-72gcl" podUID="d87f4ca1-c949-445e-86d5-ec3f446e07d7" containerName="registry-server" probeResult="failure" output=< Mar 13 11:48:31 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:48:31 crc kubenswrapper[4632]: > Mar 13 11:48:40 crc kubenswrapper[4632]: I0313 11:48:40.148578 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:40 crc kubenswrapper[4632]: I0313 11:48:40.197785 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:40 crc kubenswrapper[4632]: I0313 11:48:40.407058 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-72gcl"] Mar 13 11:48:40 crc kubenswrapper[4632]: I0313 11:48:40.461192 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:48:40 crc kubenswrapper[4632]: I0313 11:48:40.461254 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:48:40 crc kubenswrapper[4632]: I0313 11:48:40.461305 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 11:48:40 crc kubenswrapper[4632]: I0313 11:48:40.482292 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a91f451a2842f8b1b73b10a99ff94ea342a17276601161b96bf6802b9f5327a9"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 11:48:40 crc kubenswrapper[4632]: I0313 11:48:40.482652 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://a91f451a2842f8b1b73b10a99ff94ea342a17276601161b96bf6802b9f5327a9" gracePeriod=600 Mar 13 11:48:40 crc kubenswrapper[4632]: I0313 11:48:40.877845 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="a91f451a2842f8b1b73b10a99ff94ea342a17276601161b96bf6802b9f5327a9" exitCode=0 Mar 13 11:48:40 crc kubenswrapper[4632]: I0313 11:48:40.877954 4632 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"a91f451a2842f8b1b73b10a99ff94ea342a17276601161b96bf6802b9f5327a9"} Mar 13 11:48:40 crc kubenswrapper[4632]: I0313 11:48:40.878422 4632 scope.go:117] "RemoveContainer" containerID="8e0b51539a4ce69896fef2ee7c7e710d1eb74e5257b7d06373268059e30a34f6" Mar 13 11:48:41 crc kubenswrapper[4632]: I0313 11:48:41.894077 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6"} Mar 13 11:48:41 crc kubenswrapper[4632]: I0313 11:48:41.894233 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-72gcl" podUID="d87f4ca1-c949-445e-86d5-ec3f446e07d7" containerName="registry-server" containerID="cri-o://989b7b9ba9a1536075bf1e4277ed84ded29045da7f5e204ef2c5670f8f34cbe5" gracePeriod=2 Mar 13 11:48:42 crc kubenswrapper[4632]: I0313 11:48:42.837172 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:42 crc kubenswrapper[4632]: I0313 11:48:42.906044 4632 generic.go:334] "Generic (PLEG): container finished" podID="d87f4ca1-c949-445e-86d5-ec3f446e07d7" containerID="989b7b9ba9a1536075bf1e4277ed84ded29045da7f5e204ef2c5670f8f34cbe5" exitCode=0 Mar 13 11:48:42 crc kubenswrapper[4632]: I0313 11:48:42.906129 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-72gcl" Mar 13 11:48:42 crc kubenswrapper[4632]: I0313 11:48:42.906158 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-72gcl" event={"ID":"d87f4ca1-c949-445e-86d5-ec3f446e07d7","Type":"ContainerDied","Data":"989b7b9ba9a1536075bf1e4277ed84ded29045da7f5e204ef2c5670f8f34cbe5"} Mar 13 11:48:42 crc kubenswrapper[4632]: I0313 11:48:42.907524 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-72gcl" event={"ID":"d87f4ca1-c949-445e-86d5-ec3f446e07d7","Type":"ContainerDied","Data":"9b74ae1b97730cf4914fb6b8ac551cf06716d9c442d8192bb2995649e62562de"} Mar 13 11:48:42 crc kubenswrapper[4632]: I0313 11:48:42.907592 4632 scope.go:117] "RemoveContainer" containerID="989b7b9ba9a1536075bf1e4277ed84ded29045da7f5e204ef2c5670f8f34cbe5" Mar 13 11:48:42 crc kubenswrapper[4632]: I0313 11:48:42.932746 4632 scope.go:117] "RemoveContainer" containerID="5cab776fc2c7b37147eb6269a29d34f464d6a44dbfff425d5956d4872660b7ab" Mar 13 11:48:42 crc kubenswrapper[4632]: I0313 11:48:42.957764 4632 scope.go:117] "RemoveContainer" containerID="2194314bd57a0d0a7f1493e557a6ebb9931f0f83fb3a3632d522d74b98d03d0f" Mar 13 11:48:42 crc kubenswrapper[4632]: I0313 11:48:42.969327 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d87f4ca1-c949-445e-86d5-ec3f446e07d7-utilities\") pod \"d87f4ca1-c949-445e-86d5-ec3f446e07d7\" (UID: \"d87f4ca1-c949-445e-86d5-ec3f446e07d7\") " Mar 13 11:48:42 crc kubenswrapper[4632]: I0313 11:48:42.969689 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sqz2\" (UniqueName: 
\"kubernetes.io/projected/d87f4ca1-c949-445e-86d5-ec3f446e07d7-kube-api-access-7sqz2\") pod \"d87f4ca1-c949-445e-86d5-ec3f446e07d7\" (UID: \"d87f4ca1-c949-445e-86d5-ec3f446e07d7\") " Mar 13 11:48:42 crc kubenswrapper[4632]: I0313 11:48:42.970019 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d87f4ca1-c949-445e-86d5-ec3f446e07d7-catalog-content\") pod \"d87f4ca1-c949-445e-86d5-ec3f446e07d7\" (UID: \"d87f4ca1-c949-445e-86d5-ec3f446e07d7\") " Mar 13 11:48:42 crc kubenswrapper[4632]: I0313 11:48:42.970159 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d87f4ca1-c949-445e-86d5-ec3f446e07d7-utilities" (OuterVolumeSpecName: "utilities") pod "d87f4ca1-c949-445e-86d5-ec3f446e07d7" (UID: "d87f4ca1-c949-445e-86d5-ec3f446e07d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:48:42 crc kubenswrapper[4632]: I0313 11:48:42.970711 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d87f4ca1-c949-445e-86d5-ec3f446e07d7-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:48:42 crc kubenswrapper[4632]: I0313 11:48:42.976268 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d87f4ca1-c949-445e-86d5-ec3f446e07d7-kube-api-access-7sqz2" (OuterVolumeSpecName: "kube-api-access-7sqz2") pod "d87f4ca1-c949-445e-86d5-ec3f446e07d7" (UID: "d87f4ca1-c949-445e-86d5-ec3f446e07d7"). InnerVolumeSpecName "kube-api-access-7sqz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:48:43 crc kubenswrapper[4632]: I0313 11:48:43.025560 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d87f4ca1-c949-445e-86d5-ec3f446e07d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d87f4ca1-c949-445e-86d5-ec3f446e07d7" (UID: "d87f4ca1-c949-445e-86d5-ec3f446e07d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:48:43 crc kubenswrapper[4632]: I0313 11:48:43.042668 4632 scope.go:117] "RemoveContainer" containerID="989b7b9ba9a1536075bf1e4277ed84ded29045da7f5e204ef2c5670f8f34cbe5" Mar 13 11:48:43 crc kubenswrapper[4632]: E0313 11:48:43.043684 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"989b7b9ba9a1536075bf1e4277ed84ded29045da7f5e204ef2c5670f8f34cbe5\": container with ID starting with 989b7b9ba9a1536075bf1e4277ed84ded29045da7f5e204ef2c5670f8f34cbe5 not found: ID does not exist" containerID="989b7b9ba9a1536075bf1e4277ed84ded29045da7f5e204ef2c5670f8f34cbe5" Mar 13 11:48:43 crc kubenswrapper[4632]: I0313 11:48:43.043764 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"989b7b9ba9a1536075bf1e4277ed84ded29045da7f5e204ef2c5670f8f34cbe5"} err="failed to get container status \"989b7b9ba9a1536075bf1e4277ed84ded29045da7f5e204ef2c5670f8f34cbe5\": rpc error: code = NotFound desc = could not find container \"989b7b9ba9a1536075bf1e4277ed84ded29045da7f5e204ef2c5670f8f34cbe5\": container with ID starting with 989b7b9ba9a1536075bf1e4277ed84ded29045da7f5e204ef2c5670f8f34cbe5 not found: ID does not exist" Mar 13 11:48:43 crc kubenswrapper[4632]: I0313 11:48:43.043803 4632 scope.go:117] "RemoveContainer" containerID="5cab776fc2c7b37147eb6269a29d34f464d6a44dbfff425d5956d4872660b7ab" Mar 13 11:48:43 crc kubenswrapper[4632]: E0313 11:48:43.044226 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cab776fc2c7b37147eb6269a29d34f464d6a44dbfff425d5956d4872660b7ab\": container with ID starting with 5cab776fc2c7b37147eb6269a29d34f464d6a44dbfff425d5956d4872660b7ab not found: ID does not exist" containerID="5cab776fc2c7b37147eb6269a29d34f464d6a44dbfff425d5956d4872660b7ab" Mar 13 11:48:43 crc kubenswrapper[4632]: I0313 11:48:43.044256 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cab776fc2c7b37147eb6269a29d34f464d6a44dbfff425d5956d4872660b7ab"} err="failed to get container status \"5cab776fc2c7b37147eb6269a29d34f464d6a44dbfff425d5956d4872660b7ab\": rpc error: code = NotFound desc = could not find container \"5cab776fc2c7b37147eb6269a29d34f464d6a44dbfff425d5956d4872660b7ab\": container with ID starting with 5cab776fc2c7b37147eb6269a29d34f464d6a44dbfff425d5956d4872660b7ab not found: ID does not exist" Mar 13 11:48:43 crc kubenswrapper[4632]: I0313 11:48:43.044275 4632 scope.go:117] "RemoveContainer" containerID="2194314bd57a0d0a7f1493e557a6ebb9931f0f83fb3a3632d522d74b98d03d0f" Mar 13 11:48:43 crc kubenswrapper[4632]: E0313 11:48:43.044809 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2194314bd57a0d0a7f1493e557a6ebb9931f0f83fb3a3632d522d74b98d03d0f\": container with ID starting with 2194314bd57a0d0a7f1493e557a6ebb9931f0f83fb3a3632d522d74b98d03d0f not found: ID does not exist" containerID="2194314bd57a0d0a7f1493e557a6ebb9931f0f83fb3a3632d522d74b98d03d0f" Mar 13 11:48:43 crc kubenswrapper[4632]: I0313 11:48:43.044846 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2194314bd57a0d0a7f1493e557a6ebb9931f0f83fb3a3632d522d74b98d03d0f"} err="failed to get container status \"2194314bd57a0d0a7f1493e557a6ebb9931f0f83fb3a3632d522d74b98d03d0f\": rpc error: code = NotFound desc = could not 
find container \"2194314bd57a0d0a7f1493e557a6ebb9931f0f83fb3a3632d522d74b98d03d0f\": container with ID starting with 2194314bd57a0d0a7f1493e557a6ebb9931f0f83fb3a3632d522d74b98d03d0f not found: ID does not exist" Mar 13 11:48:43 crc kubenswrapper[4632]: I0313 11:48:43.072420 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d87f4ca1-c949-445e-86d5-ec3f446e07d7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:48:43 crc kubenswrapper[4632]: I0313 11:48:43.072457 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sqz2\" (UniqueName: \"kubernetes.io/projected/d87f4ca1-c949-445e-86d5-ec3f446e07d7-kube-api-access-7sqz2\") on node \"crc\" DevicePath \"\"" Mar 13 11:48:43 crc kubenswrapper[4632]: I0313 11:48:43.245880 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-72gcl"] Mar 13 11:48:43 crc kubenswrapper[4632]: I0313 11:48:43.254351 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-72gcl"] Mar 13 11:48:44 crc kubenswrapper[4632]: I0313 11:48:44.054737 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d87f4ca1-c949-445e-86d5-ec3f446e07d7" path="/var/lib/kubelet/pods/d87f4ca1-c949-445e-86d5-ec3f446e07d7/volumes" Mar 13 11:50:00 crc kubenswrapper[4632]: I0313 11:50:00.163401 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556710-t4gtk"] Mar 13 11:50:00 crc kubenswrapper[4632]: E0313 11:50:00.164240 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87f4ca1-c949-445e-86d5-ec3f446e07d7" containerName="extract-content" Mar 13 11:50:00 crc kubenswrapper[4632]: I0313 11:50:00.164253 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87f4ca1-c949-445e-86d5-ec3f446e07d7" containerName="extract-content" Mar 13 11:50:00 crc kubenswrapper[4632]: E0313 11:50:00.164270 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87f4ca1-c949-445e-86d5-ec3f446e07d7" containerName="extract-utilities" Mar 13 11:50:00 crc kubenswrapper[4632]: I0313 11:50:00.164355 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87f4ca1-c949-445e-86d5-ec3f446e07d7" containerName="extract-utilities" Mar 13 11:50:00 crc kubenswrapper[4632]: E0313 11:50:00.164371 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87f4ca1-c949-445e-86d5-ec3f446e07d7" containerName="registry-server" Mar 13 11:50:00 crc kubenswrapper[4632]: I0313 11:50:00.164380 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87f4ca1-c949-445e-86d5-ec3f446e07d7" containerName="registry-server" Mar 13 11:50:00 crc kubenswrapper[4632]: I0313 11:50:00.164639 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="d87f4ca1-c949-445e-86d5-ec3f446e07d7" containerName="registry-server" Mar 13 11:50:00 crc kubenswrapper[4632]: I0313 11:50:00.165755 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556710-t4gtk" Mar 13 11:50:00 crc kubenswrapper[4632]: I0313 11:50:00.179515 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556710-t4gtk"] Mar 13 11:50:00 crc kubenswrapper[4632]: I0313 11:50:00.179884 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:50:00 crc kubenswrapper[4632]: I0313 11:50:00.180196 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:50:00 crc kubenswrapper[4632]: I0313 11:50:00.180343 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:50:00 crc kubenswrapper[4632]: I0313 11:50:00.323032 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h429\" (UniqueName: \"kubernetes.io/projected/38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca-kube-api-access-4h429\") pod \"auto-csr-approver-29556710-t4gtk\" (UID: \"38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca\") " pod="openshift-infra/auto-csr-approver-29556710-t4gtk" Mar 13 11:50:00 crc kubenswrapper[4632]: I0313 11:50:00.425478 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h429\" (UniqueName: \"kubernetes.io/projected/38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca-kube-api-access-4h429\") pod \"auto-csr-approver-29556710-t4gtk\" (UID: \"38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca\") " pod="openshift-infra/auto-csr-approver-29556710-t4gtk" Mar 13 11:50:00 crc kubenswrapper[4632]: I0313 11:50:00.459763 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h429\" (UniqueName: \"kubernetes.io/projected/38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca-kube-api-access-4h429\") pod \"auto-csr-approver-29556710-t4gtk\" (UID: \"38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca\") " pod="openshift-infra/auto-csr-approver-29556710-t4gtk" Mar 13 11:50:00 crc kubenswrapper[4632]: I0313 11:50:00.494677 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556710-t4gtk" Mar 13 11:50:01 crc kubenswrapper[4632]: I0313 11:50:01.002392 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556710-t4gtk"] Mar 13 11:50:01 crc kubenswrapper[4632]: I0313 11:50:01.012300 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 11:50:01 crc kubenswrapper[4632]: W0313 11:50:01.012135 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38d73c58_f065_4efc_9fe2_b6c0ed9fa5ca.slice/crio-a0ffd34e6d6d3b29265d6aeaac0f5a22d37f711ef23a833e058ddb0a16cb2146 WatchSource:0}: Error finding container a0ffd34e6d6d3b29265d6aeaac0f5a22d37f711ef23a833e058ddb0a16cb2146: Status 404 returned error can't find the container with id a0ffd34e6d6d3b29265d6aeaac0f5a22d37f711ef23a833e058ddb0a16cb2146 Mar 13 11:50:01 crc kubenswrapper[4632]: I0313 11:50:01.659270 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556710-t4gtk" event={"ID":"38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca","Type":"ContainerStarted","Data":"a0ffd34e6d6d3b29265d6aeaac0f5a22d37f711ef23a833e058ddb0a16cb2146"} Mar 13 11:50:02 crc kubenswrapper[4632]: I0313 11:50:02.672038 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556710-t4gtk" event={"ID":"38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca","Type":"ContainerStarted","Data":"57e073e1e04617c49dfbb2c194d77f02cda77aac917eb626f73490bd0abacbcb"} Mar 13 11:50:02 crc kubenswrapper[4632]: I0313 11:50:02.702932 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556710-t4gtk" podStartSLOduration=1.5159475059999998 podStartE2EDuration="2.702910372s" podCreationTimestamp="2026-03-13 11:50:00 +0000 UTC" firstStartedPulling="2026-03-13 11:50:01.012078427 +0000 UTC m=+6375.034608560" lastFinishedPulling="2026-03-13 11:50:02.199041293 +0000 UTC m=+6376.221571426" observedRunningTime="2026-03-13 11:50:02.691547343 +0000 UTC m=+6376.714077486" watchObservedRunningTime="2026-03-13 11:50:02.702910372 +0000 UTC m=+6376.725440515" Mar 13 11:50:03 crc kubenswrapper[4632]: I0313 11:50:03.682633 4632 generic.go:334] "Generic (PLEG): container finished" podID="38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca" containerID="57e073e1e04617c49dfbb2c194d77f02cda77aac917eb626f73490bd0abacbcb" exitCode=0 Mar 13 11:50:03 crc kubenswrapper[4632]: I0313 11:50:03.682750 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556710-t4gtk" event={"ID":"38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca","Type":"ContainerDied","Data":"57e073e1e04617c49dfbb2c194d77f02cda77aac917eb626f73490bd0abacbcb"} Mar 13 11:50:05 crc kubenswrapper[4632]: I0313 11:50:05.056733 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556710-t4gtk" Mar 13 11:50:05 crc kubenswrapper[4632]: I0313 11:50:05.240101 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h429\" (UniqueName: \"kubernetes.io/projected/38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca-kube-api-access-4h429\") pod \"38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca\" (UID: \"38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca\") " Mar 13 11:50:05 crc kubenswrapper[4632]: I0313 11:50:05.252284 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca-kube-api-access-4h429" (OuterVolumeSpecName: "kube-api-access-4h429") pod "38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca" (UID: "38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca"). InnerVolumeSpecName "kube-api-access-4h429". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:50:05 crc kubenswrapper[4632]: I0313 11:50:05.343219 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h429\" (UniqueName: \"kubernetes.io/projected/38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca-kube-api-access-4h429\") on node \"crc\" DevicePath \"\"" Mar 13 11:50:05 crc kubenswrapper[4632]: I0313 11:50:05.703002 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556710-t4gtk" event={"ID":"38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca","Type":"ContainerDied","Data":"a0ffd34e6d6d3b29265d6aeaac0f5a22d37f711ef23a833e058ddb0a16cb2146"} Mar 13 11:50:05 crc kubenswrapper[4632]: I0313 11:50:05.703350 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0ffd34e6d6d3b29265d6aeaac0f5a22d37f711ef23a833e058ddb0a16cb2146" Mar 13 11:50:05 crc kubenswrapper[4632]: I0313 11:50:05.703045 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556710-t4gtk" Mar 13 11:50:05 crc kubenswrapper[4632]: I0313 11:50:05.763267 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556704-r82jw"] Mar 13 11:50:05 crc kubenswrapper[4632]: I0313 11:50:05.780758 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556704-r82jw"] Mar 13 11:50:06 crc kubenswrapper[4632]: I0313 11:50:06.066856 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6300cb33-fba3-4d08-948b-0c6584d2ef26" path="/var/lib/kubelet/pods/6300cb33-fba3-4d08-948b-0c6584d2ef26/volumes" Mar 13 11:50:22 crc kubenswrapper[4632]: I0313 11:50:22.805367 4632 scope.go:117] "RemoveContainer" containerID="be61603d8895006f002bb067cf8ee34fe273ab45dfec9b8aa73261bbdd0ea048" Mar 13 11:50:40 crc kubenswrapper[4632]: I0313 11:50:40.461622 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:50:40 crc kubenswrapper[4632]: I0313 11:50:40.462300 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:50:44 crc kubenswrapper[4632]: I0313 11:50:44.021917 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t9ff9"] Mar 13 11:50:44 crc kubenswrapper[4632]: E0313 11:50:44.023044 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca" containerName="oc" Mar 13 11:50:44 crc kubenswrapper[4632]: I0313 11:50:44.023059 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca" containerName="oc" Mar 13 11:50:44 crc kubenswrapper[4632]: I0313 11:50:44.023312 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca" containerName="oc" Mar 13 11:50:44 crc kubenswrapper[4632]: I0313 11:50:44.025564 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:50:44 crc kubenswrapper[4632]: I0313 11:50:44.068753 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t9ff9"] Mar 13 11:50:44 crc kubenswrapper[4632]: I0313 11:50:44.161672 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvw6j\" (UniqueName: \"kubernetes.io/projected/c4fa6838-8789-4c78-873b-26a25f0abdf1-kube-api-access-hvw6j\") pod \"certified-operators-t9ff9\" (UID: \"c4fa6838-8789-4c78-873b-26a25f0abdf1\") " pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:50:44 crc kubenswrapper[4632]: I0313 11:50:44.161737 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4fa6838-8789-4c78-873b-26a25f0abdf1-utilities\") pod \"certified-operators-t9ff9\" (UID: \"c4fa6838-8789-4c78-873b-26a25f0abdf1\") " pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:50:44 crc kubenswrapper[4632]: I0313 11:50:44.161850 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4fa6838-8789-4c78-873b-26a25f0abdf1-catalog-content\") pod \"certified-operators-t9ff9\" (UID: \"c4fa6838-8789-4c78-873b-26a25f0abdf1\") " pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:50:44 crc kubenswrapper[4632]: I0313 11:50:44.264117 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4fa6838-8789-4c78-873b-26a25f0abdf1-catalog-content\") pod \"certified-operators-t9ff9\" (UID: \"c4fa6838-8789-4c78-873b-26a25f0abdf1\") " pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:50:44 crc kubenswrapper[4632]: I0313 11:50:44.264208 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvw6j\" (UniqueName: \"kubernetes.io/projected/c4fa6838-8789-4c78-873b-26a25f0abdf1-kube-api-access-hvw6j\") pod \"certified-operators-t9ff9\" (UID: \"c4fa6838-8789-4c78-873b-26a25f0abdf1\") " pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:50:44 crc kubenswrapper[4632]: I0313 11:50:44.264241 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4fa6838-8789-4c78-873b-26a25f0abdf1-utilities\") pod \"certified-operators-t9ff9\" (UID: \"c4fa6838-8789-4c78-873b-26a25f0abdf1\") " pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:50:44 crc kubenswrapper[4632]: I0313 11:50:44.264682 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4fa6838-8789-4c78-873b-26a25f0abdf1-utilities\") pod \"certified-operators-t9ff9\" (UID: \"c4fa6838-8789-4c78-873b-26a25f0abdf1\") " pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:50:44 crc kubenswrapper[4632]: I0313 11:50:44.264859 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4fa6838-8789-4c78-873b-26a25f0abdf1-catalog-content\") pod \"certified-operators-t9ff9\" (UID: \"c4fa6838-8789-4c78-873b-26a25f0abdf1\") " pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:50:44 crc kubenswrapper[4632]: I0313 11:50:44.389650 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hvw6j\" (UniqueName: \"kubernetes.io/projected/c4fa6838-8789-4c78-873b-26a25f0abdf1-kube-api-access-hvw6j\") pod \"certified-operators-t9ff9\" (UID: \"c4fa6838-8789-4c78-873b-26a25f0abdf1\") " pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:50:44 crc kubenswrapper[4632]: I0313 11:50:44.664380 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:50:45 crc kubenswrapper[4632]: I0313 11:50:45.455285 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t9ff9"] Mar 13 11:50:45 crc kubenswrapper[4632]: I0313 11:50:45.536889 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9ff9" event={"ID":"c4fa6838-8789-4c78-873b-26a25f0abdf1","Type":"ContainerStarted","Data":"80d8cf95439ed179efd6dc42eb5f2fb0a7e3f615b39f025e0434d011b74ad3da"} Mar 13 11:50:46 crc kubenswrapper[4632]: I0313 11:50:46.547975 4632 generic.go:334] "Generic (PLEG): container finished" podID="c4fa6838-8789-4c78-873b-26a25f0abdf1" containerID="a5633eaa5f191129873caf3bceb3e511fa8394fc5d911663daf8420e23f54b2c" exitCode=0 Mar 13 11:50:46 crc kubenswrapper[4632]: I0313 11:50:46.548134 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9ff9" event={"ID":"c4fa6838-8789-4c78-873b-26a25f0abdf1","Type":"ContainerDied","Data":"a5633eaa5f191129873caf3bceb3e511fa8394fc5d911663daf8420e23f54b2c"} Mar 13 11:50:48 crc kubenswrapper[4632]: I0313 11:50:48.570454 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9ff9" event={"ID":"c4fa6838-8789-4c78-873b-26a25f0abdf1","Type":"ContainerStarted","Data":"80a53111cc96cd63ea8104fbe328156cf485cbea590d1fee2e9ca120cca06ac5"} Mar 13 11:50:51 crc kubenswrapper[4632]: I0313 11:50:51.598435 4632 generic.go:334] "Generic (PLEG): container finished" podID="c4fa6838-8789-4c78-873b-26a25f0abdf1" containerID="80a53111cc96cd63ea8104fbe328156cf485cbea590d1fee2e9ca120cca06ac5" exitCode=0 Mar 13 11:50:51 crc kubenswrapper[4632]: I0313 11:50:51.598507 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9ff9" event={"ID":"c4fa6838-8789-4c78-873b-26a25f0abdf1","Type":"ContainerDied","Data":"80a53111cc96cd63ea8104fbe328156cf485cbea590d1fee2e9ca120cca06ac5"} Mar 13 11:50:52 crc kubenswrapper[4632]: I0313 11:50:52.611720 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9ff9" event={"ID":"c4fa6838-8789-4c78-873b-26a25f0abdf1","Type":"ContainerStarted","Data":"0f58089f35227117cdd24ba80408858b56b232cdfd763e852159342974bc1d04"} Mar 13 11:50:52 crc kubenswrapper[4632]: I0313 11:50:52.665057 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t9ff9" podStartSLOduration=4.008162918 podStartE2EDuration="9.66503438s" podCreationTimestamp="2026-03-13 11:50:43 +0000 UTC" firstStartedPulling="2026-03-13 11:50:46.550250558 +0000 UTC m=+6420.572780701" lastFinishedPulling="2026-03-13 11:50:52.20712203 +0000 UTC m=+6426.229652163" observedRunningTime="2026-03-13 11:50:52.659522864 +0000 UTC m=+6426.682053007" watchObservedRunningTime="2026-03-13 11:50:52.66503438 +0000 UTC m=+6426.687564513" Mar 13 11:50:54 crc kubenswrapper[4632]: I0313 11:50:54.665133 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:50:54 crc kubenswrapper[4632]: I0313 11:50:54.665614 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:50:55 crc kubenswrapper[4632]: I0313 11:50:55.228670 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hh6sg"] Mar 13 11:50:55 crc kubenswrapper[4632]: I0313 11:50:55.230693 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:50:55 crc kubenswrapper[4632]: I0313 11:50:55.251196 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hh6sg"] Mar 13 11:50:55 crc kubenswrapper[4632]: I0313 11:50:55.290487 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-catalog-content\") pod \"redhat-operators-hh6sg\" (UID: \"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9\") " pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:50:55 crc kubenswrapper[4632]: I0313 11:50:55.290746 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-utilities\") pod \"redhat-operators-hh6sg\" (UID: \"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9\") " pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:50:55 crc kubenswrapper[4632]: I0313 11:50:55.290825 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2vhf\" (UniqueName: \"kubernetes.io/projected/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-kube-api-access-n2vhf\") pod \"redhat-operators-hh6sg\" (UID: \"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9\") " pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:50:55 crc kubenswrapper[4632]: I0313 11:50:55.392490 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2vhf\" (UniqueName: \"kubernetes.io/projected/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-kube-api-access-n2vhf\") pod \"redhat-operators-hh6sg\" (UID: \"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9\") " pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:50:55 crc kubenswrapper[4632]: I0313 11:50:55.392643 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-catalog-content\") pod \"redhat-operators-hh6sg\" (UID: \"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9\") " pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:50:55 crc kubenswrapper[4632]: I0313 11:50:55.392670 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-utilities\") pod \"redhat-operators-hh6sg\" (UID: \"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9\") " pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:50:55 crc kubenswrapper[4632]: I0313 11:50:55.397021 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-utilities\") pod \"redhat-operators-hh6sg\" (UID: \"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9\") " 
pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:50:55 crc kubenswrapper[4632]: I0313 11:50:55.397312 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-catalog-content\") pod \"redhat-operators-hh6sg\" (UID: \"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9\") " pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:50:55 crc kubenswrapper[4632]: I0313 11:50:55.422467 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2vhf\" (UniqueName: \"kubernetes.io/projected/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-kube-api-access-n2vhf\") pod \"redhat-operators-hh6sg\" (UID: \"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9\") " pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:50:55 crc kubenswrapper[4632]: I0313 11:50:55.555647 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:50:55 crc kubenswrapper[4632]: I0313 11:50:55.723051 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-t9ff9" podUID="c4fa6838-8789-4c78-873b-26a25f0abdf1" containerName="registry-server" probeResult="failure" output=< Mar 13 11:50:55 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:50:55 crc kubenswrapper[4632]: > Mar 13 11:50:56 crc kubenswrapper[4632]: I0313 11:50:56.741585 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hh6sg"] Mar 13 11:50:57 crc kubenswrapper[4632]: I0313 11:50:57.657663 4632 generic.go:334] "Generic (PLEG): container finished" podID="b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" containerID="9c6386e4fe91a3027f76952e4436deca9559a2bb03eb25f5a25ef1623adbd2a9" exitCode=0 Mar 13 11:50:57 crc kubenswrapper[4632]: I0313 11:50:57.657782 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hh6sg" event={"ID":"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9","Type":"ContainerDied","Data":"9c6386e4fe91a3027f76952e4436deca9559a2bb03eb25f5a25ef1623adbd2a9"} Mar 13 11:50:57 crc kubenswrapper[4632]: I0313 11:50:57.658050 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hh6sg" event={"ID":"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9","Type":"ContainerStarted","Data":"0d0fb98fbc9e26b81dbba0ff1bf940b997a137857387dead6a902c8814d568eb"} Mar 13 11:50:58 crc kubenswrapper[4632]: I0313 11:50:58.673394 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hh6sg" event={"ID":"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9","Type":"ContainerStarted","Data":"b2ce3710fc9f4762a6cd2dddd49956965e52724f5194ed7f857b989905d5d86e"} Mar 13 11:51:05 crc kubenswrapper[4632]: I0313 11:51:05.717295 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-t9ff9" podUID="c4fa6838-8789-4c78-873b-26a25f0abdf1" containerName="registry-server" probeResult="failure" output=< Mar 13 11:51:05 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:51:05 crc kubenswrapper[4632]: > Mar 13 11:51:05 crc kubenswrapper[4632]: I0313 11:51:05.746374 4632 generic.go:334] "Generic (PLEG): container finished" podID="b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" containerID="b2ce3710fc9f4762a6cd2dddd49956965e52724f5194ed7f857b989905d5d86e" exitCode=0 Mar 13 11:51:05 crc kubenswrapper[4632]: I0313 
11:51:05.746418 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hh6sg" event={"ID":"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9","Type":"ContainerDied","Data":"b2ce3710fc9f4762a6cd2dddd49956965e52724f5194ed7f857b989905d5d86e"} Mar 13 11:51:06 crc kubenswrapper[4632]: I0313 11:51:06.757278 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hh6sg" event={"ID":"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9","Type":"ContainerStarted","Data":"bfcfca447a7d94e7df3e9d778de055bafa913106d63c3b943bb83e244830618d"} Mar 13 11:51:06 crc kubenswrapper[4632]: I0313 11:51:06.786494 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hh6sg" podStartSLOduration=3.191258481 podStartE2EDuration="11.786465862s" podCreationTimestamp="2026-03-13 11:50:55 +0000 UTC" firstStartedPulling="2026-03-13 11:50:57.659862755 +0000 UTC m=+6431.682392888" lastFinishedPulling="2026-03-13 11:51:06.255070136 +0000 UTC m=+6440.277600269" observedRunningTime="2026-03-13 11:51:06.775023691 +0000 UTC m=+6440.797553824" watchObservedRunningTime="2026-03-13 11:51:06.786465862 +0000 UTC m=+6440.808995995" Mar 13 11:51:10 crc kubenswrapper[4632]: I0313 11:51:10.461616 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:51:10 crc kubenswrapper[4632]: I0313 11:51:10.462005 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:51:15 crc kubenswrapper[4632]: I0313 11:51:15.556879 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:51:15 crc kubenswrapper[4632]: I0313 11:51:15.557483 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:51:15 crc kubenswrapper[4632]: I0313 11:51:15.712524 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-t9ff9" podUID="c4fa6838-8789-4c78-873b-26a25f0abdf1" containerName="registry-server" probeResult="failure" output=< Mar 13 11:51:15 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:51:15 crc kubenswrapper[4632]: > Mar 13 11:51:16 crc kubenswrapper[4632]: I0313 11:51:16.612008 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hh6sg" podUID="b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" containerName="registry-server" probeResult="failure" output=< Mar 13 11:51:16 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:51:16 crc kubenswrapper[4632]: > Mar 13 11:51:24 crc kubenswrapper[4632]: I0313 11:51:24.743061 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:51:24 crc kubenswrapper[4632]: I0313 11:51:24.844173 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:51:24 crc kubenswrapper[4632]: I0313 11:51:24.992744 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t9ff9"] Mar 13 11:51:25 crc kubenswrapper[4632]: I0313 11:51:25.947980 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t9ff9" podUID="c4fa6838-8789-4c78-873b-26a25f0abdf1" containerName="registry-server" containerID="cri-o://0f58089f35227117cdd24ba80408858b56b232cdfd763e852159342974bc1d04" gracePeriod=2 Mar 13 11:51:26 crc kubenswrapper[4632]: I0313 11:51:26.604715 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hh6sg" podUID="b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" containerName="registry-server" probeResult="failure" output=< Mar 13 11:51:26 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:51:26 crc kubenswrapper[4632]: > Mar 13 11:51:26 crc kubenswrapper[4632]: I0313 11:51:26.746740 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:51:26 crc kubenswrapper[4632]: I0313 11:51:26.948461 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4fa6838-8789-4c78-873b-26a25f0abdf1-catalog-content\") pod \"c4fa6838-8789-4c78-873b-26a25f0abdf1\" (UID: \"c4fa6838-8789-4c78-873b-26a25f0abdf1\") " Mar 13 11:51:26 crc kubenswrapper[4632]: I0313 11:51:26.949702 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvw6j\" (UniqueName: \"kubernetes.io/projected/c4fa6838-8789-4c78-873b-26a25f0abdf1-kube-api-access-hvw6j\") pod \"c4fa6838-8789-4c78-873b-26a25f0abdf1\" (UID: \"c4fa6838-8789-4c78-873b-26a25f0abdf1\") " Mar 13 11:51:26 crc kubenswrapper[4632]: I0313 11:51:26.950534 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4fa6838-8789-4c78-873b-26a25f0abdf1-utilities\") pod \"c4fa6838-8789-4c78-873b-26a25f0abdf1\" (UID: \"c4fa6838-8789-4c78-873b-26a25f0abdf1\") " Mar 13 11:51:26 crc kubenswrapper[4632]: I0313 11:51:26.951009 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4fa6838-8789-4c78-873b-26a25f0abdf1-utilities" (OuterVolumeSpecName: "utilities") pod "c4fa6838-8789-4c78-873b-26a25f0abdf1" (UID: "c4fa6838-8789-4c78-873b-26a25f0abdf1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:51:26 crc kubenswrapper[4632]: I0313 11:51:26.951759 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4fa6838-8789-4c78-873b-26a25f0abdf1-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:51:26 crc kubenswrapper[4632]: I0313 11:51:26.958877 4632 generic.go:334] "Generic (PLEG): container finished" podID="c4fa6838-8789-4c78-873b-26a25f0abdf1" containerID="0f58089f35227117cdd24ba80408858b56b232cdfd763e852159342974bc1d04" exitCode=0 Mar 13 11:51:26 crc kubenswrapper[4632]: I0313 11:51:26.959284 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9ff9" event={"ID":"c4fa6838-8789-4c78-873b-26a25f0abdf1","Type":"ContainerDied","Data":"0f58089f35227117cdd24ba80408858b56b232cdfd763e852159342974bc1d04"} Mar 13 11:51:26 crc kubenswrapper[4632]: I0313 11:51:26.959358 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9ff9" event={"ID":"c4fa6838-8789-4c78-873b-26a25f0abdf1","Type":"ContainerDied","Data":"80d8cf95439ed179efd6dc42eb5f2fb0a7e3f615b39f025e0434d011b74ad3da"} Mar 13 11:51:26 crc kubenswrapper[4632]: I0313 11:51:26.959388 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9ff9" Mar 13 11:51:26 crc kubenswrapper[4632]: I0313 11:51:26.960699 4632 scope.go:117] "RemoveContainer" containerID="0f58089f35227117cdd24ba80408858b56b232cdfd763e852159342974bc1d04" Mar 13 11:51:26 crc kubenswrapper[4632]: I0313 11:51:26.969190 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4fa6838-8789-4c78-873b-26a25f0abdf1-kube-api-access-hvw6j" (OuterVolumeSpecName: "kube-api-access-hvw6j") pod "c4fa6838-8789-4c78-873b-26a25f0abdf1" (UID: "c4fa6838-8789-4c78-873b-26a25f0abdf1"). InnerVolumeSpecName "kube-api-access-hvw6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:51:27 crc kubenswrapper[4632]: I0313 11:51:27.044225 4632 scope.go:117] "RemoveContainer" containerID="80a53111cc96cd63ea8104fbe328156cf485cbea590d1fee2e9ca120cca06ac5" Mar 13 11:51:27 crc kubenswrapper[4632]: I0313 11:51:27.046685 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4fa6838-8789-4c78-873b-26a25f0abdf1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4fa6838-8789-4c78-873b-26a25f0abdf1" (UID: "c4fa6838-8789-4c78-873b-26a25f0abdf1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:51:27 crc kubenswrapper[4632]: I0313 11:51:27.053642 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4fa6838-8789-4c78-873b-26a25f0abdf1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:51:27 crc kubenswrapper[4632]: I0313 11:51:27.053875 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvw6j\" (UniqueName: \"kubernetes.io/projected/c4fa6838-8789-4c78-873b-26a25f0abdf1-kube-api-access-hvw6j\") on node \"crc\" DevicePath \"\"" Mar 13 11:51:27 crc kubenswrapper[4632]: I0313 11:51:27.080013 4632 scope.go:117] "RemoveContainer" containerID="a5633eaa5f191129873caf3bceb3e511fa8394fc5d911663daf8420e23f54b2c" Mar 13 11:51:27 crc kubenswrapper[4632]: I0313 11:51:27.110042 4632 scope.go:117] "RemoveContainer" containerID="0f58089f35227117cdd24ba80408858b56b232cdfd763e852159342974bc1d04" Mar 13 11:51:27 crc kubenswrapper[4632]: E0313 11:51:27.114974 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f58089f35227117cdd24ba80408858b56b232cdfd763e852159342974bc1d04\": container with ID starting with 0f58089f35227117cdd24ba80408858b56b232cdfd763e852159342974bc1d04 not found: ID does not exist" containerID="0f58089f35227117cdd24ba80408858b56b232cdfd763e852159342974bc1d04" Mar 13 11:51:27 crc kubenswrapper[4632]: I0313 11:51:27.116337 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f58089f35227117cdd24ba80408858b56b232cdfd763e852159342974bc1d04"} err="failed to get container status \"0f58089f35227117cdd24ba80408858b56b232cdfd763e852159342974bc1d04\": rpc error: code = NotFound desc = could not find container \"0f58089f35227117cdd24ba80408858b56b232cdfd763e852159342974bc1d04\": container with ID starting with 0f58089f35227117cdd24ba80408858b56b232cdfd763e852159342974bc1d04 not found: ID does not exist" Mar 13 11:51:27 crc kubenswrapper[4632]: I0313 11:51:27.116381 4632 scope.go:117] "RemoveContainer" containerID="80a53111cc96cd63ea8104fbe328156cf485cbea590d1fee2e9ca120cca06ac5" Mar 13 11:51:27 crc kubenswrapper[4632]: E0313 11:51:27.117110 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80a53111cc96cd63ea8104fbe328156cf485cbea590d1fee2e9ca120cca06ac5\": container with ID starting with 80a53111cc96cd63ea8104fbe328156cf485cbea590d1fee2e9ca120cca06ac5 not found: ID does not exist" containerID="80a53111cc96cd63ea8104fbe328156cf485cbea590d1fee2e9ca120cca06ac5" Mar 13 11:51:27 crc kubenswrapper[4632]: I0313 11:51:27.117163 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80a53111cc96cd63ea8104fbe328156cf485cbea590d1fee2e9ca120cca06ac5"} err="failed to get container status \"80a53111cc96cd63ea8104fbe328156cf485cbea590d1fee2e9ca120cca06ac5\": rpc error: code = NotFound desc = could not find container \"80a53111cc96cd63ea8104fbe328156cf485cbea590d1fee2e9ca120cca06ac5\": container with ID starting with 80a53111cc96cd63ea8104fbe328156cf485cbea590d1fee2e9ca120cca06ac5 not found: ID does not exist" Mar 13 11:51:27 crc kubenswrapper[4632]: I0313 11:51:27.117194 4632 scope.go:117] "RemoveContainer" containerID="a5633eaa5f191129873caf3bceb3e511fa8394fc5d911663daf8420e23f54b2c" Mar 13 11:51:27 crc kubenswrapper[4632]: E0313 11:51:27.117649 4632 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"a5633eaa5f191129873caf3bceb3e511fa8394fc5d911663daf8420e23f54b2c\": container with ID starting with a5633eaa5f191129873caf3bceb3e511fa8394fc5d911663daf8420e23f54b2c not found: ID does not exist" containerID="a5633eaa5f191129873caf3bceb3e511fa8394fc5d911663daf8420e23f54b2c" Mar 13 11:51:27 crc kubenswrapper[4632]: I0313 11:51:27.117684 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5633eaa5f191129873caf3bceb3e511fa8394fc5d911663daf8420e23f54b2c"} err="failed to get container status \"a5633eaa5f191129873caf3bceb3e511fa8394fc5d911663daf8420e23f54b2c\": rpc error: code = NotFound desc = could not find container \"a5633eaa5f191129873caf3bceb3e511fa8394fc5d911663daf8420e23f54b2c\": container with ID starting with a5633eaa5f191129873caf3bceb3e511fa8394fc5d911663daf8420e23f54b2c not found: ID does not exist" Mar 13 11:51:27 crc kubenswrapper[4632]: I0313 11:51:27.299151 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t9ff9"] Mar 13 11:51:27 crc kubenswrapper[4632]: I0313 11:51:27.334371 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t9ff9"] Mar 13 11:51:28 crc kubenswrapper[4632]: I0313 11:51:28.056117 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4fa6838-8789-4c78-873b-26a25f0abdf1" path="/var/lib/kubelet/pods/c4fa6838-8789-4c78-873b-26a25f0abdf1/volumes" Mar 13 11:51:36 crc kubenswrapper[4632]: I0313 11:51:36.613577 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hh6sg" podUID="b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" containerName="registry-server" probeResult="failure" output=< Mar 13 11:51:36 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:51:36 crc kubenswrapper[4632]: > Mar 13 11:51:40 crc kubenswrapper[4632]: I0313 11:51:40.460955 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:51:40 crc kubenswrapper[4632]: I0313 11:51:40.461460 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:51:40 crc kubenswrapper[4632]: I0313 11:51:40.461524 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 11:51:40 crc kubenswrapper[4632]: I0313 11:51:40.462302 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 11:51:40 crc kubenswrapper[4632]: I0313 11:51:40.462352 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" 
podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" gracePeriod=600 Mar 13 11:51:40 crc kubenswrapper[4632]: E0313 11:51:40.586564 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:51:41 crc kubenswrapper[4632]: I0313 11:51:41.095667 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" exitCode=0 Mar 13 11:51:41 crc kubenswrapper[4632]: I0313 11:51:41.095734 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6"} Mar 13 11:51:41 crc kubenswrapper[4632]: I0313 11:51:41.095769 4632 scope.go:117] "RemoveContainer" containerID="a91f451a2842f8b1b73b10a99ff94ea342a17276601161b96bf6802b9f5327a9" Mar 13 11:51:41 crc kubenswrapper[4632]: I0313 11:51:41.096520 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:51:41 crc kubenswrapper[4632]: E0313 11:51:41.096806 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:51:46 crc kubenswrapper[4632]: I0313 11:51:46.617396 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hh6sg" podUID="b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" containerName="registry-server" probeResult="failure" output=< Mar 13 11:51:46 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:51:46 crc kubenswrapper[4632]: > Mar 13 11:51:52 crc kubenswrapper[4632]: I0313 11:51:52.044808 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:51:52 crc kubenswrapper[4632]: E0313 11:51:52.045637 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:51:55 crc kubenswrapper[4632]: I0313 11:51:55.608057 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:51:55 crc kubenswrapper[4632]: I0313 11:51:55.666842 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:51:55 crc kubenswrapper[4632]: I0313 11:51:55.850537 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hh6sg"] Mar 13 11:51:57 crc kubenswrapper[4632]: I0313 11:51:57.258999 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hh6sg" podUID="b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" containerName="registry-server" containerID="cri-o://bfcfca447a7d94e7df3e9d778de055bafa913106d63c3b943bb83e244830618d" gracePeriod=2 Mar 13 11:51:57 crc kubenswrapper[4632]: I0313 11:51:57.898707 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:51:57 crc kubenswrapper[4632]: I0313 11:51:57.996444 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2vhf\" (UniqueName: \"kubernetes.io/projected/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-kube-api-access-n2vhf\") pod \"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9\" (UID: \"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9\") " Mar 13 11:51:57 crc kubenswrapper[4632]: I0313 11:51:57.996589 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-utilities\") pod \"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9\" (UID: \"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9\") " Mar 13 11:51:57 crc kubenswrapper[4632]: I0313 11:51:57.996717 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-catalog-content\") pod \"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9\" (UID: \"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9\") " Mar 13 11:51:57 crc kubenswrapper[4632]: I0313 11:51:57.997604 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-utilities" (OuterVolumeSpecName: "utilities") pod "b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" (UID: "b9077b2a-7fb9-405f-8fb4-b472d5ac00a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.005177 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-kube-api-access-n2vhf" (OuterVolumeSpecName: "kube-api-access-n2vhf") pod "b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" (UID: "b9077b2a-7fb9-405f-8fb4-b472d5ac00a9"). InnerVolumeSpecName "kube-api-access-n2vhf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.100480 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2vhf\" (UniqueName: \"kubernetes.io/projected/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-kube-api-access-n2vhf\") on node \"crc\" DevicePath \"\"" Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.100529 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.117783 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" (UID: "b9077b2a-7fb9-405f-8fb4-b472d5ac00a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.202617 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.272002 4632 generic.go:334] "Generic (PLEG): container finished" podID="b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" containerID="bfcfca447a7d94e7df3e9d778de055bafa913106d63c3b943bb83e244830618d" exitCode=0 Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.272081 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hh6sg" event={"ID":"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9","Type":"ContainerDied","Data":"bfcfca447a7d94e7df3e9d778de055bafa913106d63c3b943bb83e244830618d"} Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.272115 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hh6sg" event={"ID":"b9077b2a-7fb9-405f-8fb4-b472d5ac00a9","Type":"ContainerDied","Data":"0d0fb98fbc9e26b81dbba0ff1bf940b997a137857387dead6a902c8814d568eb"} Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.272135 4632 scope.go:117] "RemoveContainer" containerID="bfcfca447a7d94e7df3e9d778de055bafa913106d63c3b943bb83e244830618d" Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.272080 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hh6sg" Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.313726 4632 scope.go:117] "RemoveContainer" containerID="b2ce3710fc9f4762a6cd2dddd49956965e52724f5194ed7f857b989905d5d86e" Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.317666 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hh6sg"] Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.333181 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hh6sg"] Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.337993 4632 scope.go:117] "RemoveContainer" containerID="9c6386e4fe91a3027f76952e4436deca9559a2bb03eb25f5a25ef1623adbd2a9" Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.385723 4632 scope.go:117] "RemoveContainer" containerID="bfcfca447a7d94e7df3e9d778de055bafa913106d63c3b943bb83e244830618d" Mar 13 11:51:58 crc kubenswrapper[4632]: E0313 11:51:58.386545 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfcfca447a7d94e7df3e9d778de055bafa913106d63c3b943bb83e244830618d\": container with ID starting with bfcfca447a7d94e7df3e9d778de055bafa913106d63c3b943bb83e244830618d not found: ID does not exist" containerID="bfcfca447a7d94e7df3e9d778de055bafa913106d63c3b943bb83e244830618d" Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.386578 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfcfca447a7d94e7df3e9d778de055bafa913106d63c3b943bb83e244830618d"} err="failed to get container status \"bfcfca447a7d94e7df3e9d778de055bafa913106d63c3b943bb83e244830618d\": rpc error: code = NotFound desc = could not find container \"bfcfca447a7d94e7df3e9d778de055bafa913106d63c3b943bb83e244830618d\": container with ID starting with bfcfca447a7d94e7df3e9d778de055bafa913106d63c3b943bb83e244830618d not found: ID does not exist" Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.386599 4632 scope.go:117] "RemoveContainer" containerID="b2ce3710fc9f4762a6cd2dddd49956965e52724f5194ed7f857b989905d5d86e" Mar 13 11:51:58 crc kubenswrapper[4632]: E0313 11:51:58.387144 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2ce3710fc9f4762a6cd2dddd49956965e52724f5194ed7f857b989905d5d86e\": container with ID starting with b2ce3710fc9f4762a6cd2dddd49956965e52724f5194ed7f857b989905d5d86e not found: ID does not exist" containerID="b2ce3710fc9f4762a6cd2dddd49956965e52724f5194ed7f857b989905d5d86e" Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.387231 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2ce3710fc9f4762a6cd2dddd49956965e52724f5194ed7f857b989905d5d86e"} err="failed to get container status \"b2ce3710fc9f4762a6cd2dddd49956965e52724f5194ed7f857b989905d5d86e\": rpc error: code = NotFound desc = could not find container \"b2ce3710fc9f4762a6cd2dddd49956965e52724f5194ed7f857b989905d5d86e\": container with ID starting with b2ce3710fc9f4762a6cd2dddd49956965e52724f5194ed7f857b989905d5d86e not found: ID does not exist" Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.387269 4632 scope.go:117] "RemoveContainer" containerID="9c6386e4fe91a3027f76952e4436deca9559a2bb03eb25f5a25ef1623adbd2a9" Mar 13 11:51:58 crc kubenswrapper[4632]: E0313 11:51:58.387800 4632 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"9c6386e4fe91a3027f76952e4436deca9559a2bb03eb25f5a25ef1623adbd2a9\": container with ID starting with 9c6386e4fe91a3027f76952e4436deca9559a2bb03eb25f5a25ef1623adbd2a9 not found: ID does not exist" containerID="9c6386e4fe91a3027f76952e4436deca9559a2bb03eb25f5a25ef1623adbd2a9" Mar 13 11:51:58 crc kubenswrapper[4632]: I0313 11:51:58.387835 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c6386e4fe91a3027f76952e4436deca9559a2bb03eb25f5a25ef1623adbd2a9"} err="failed to get container status \"9c6386e4fe91a3027f76952e4436deca9559a2bb03eb25f5a25ef1623adbd2a9\": rpc error: code = NotFound desc = could not find container \"9c6386e4fe91a3027f76952e4436deca9559a2bb03eb25f5a25ef1623adbd2a9\": container with ID starting with 9c6386e4fe91a3027f76952e4436deca9559a2bb03eb25f5a25ef1623adbd2a9 not found: ID does not exist" Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.056104 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" path="/var/lib/kubelet/pods/b9077b2a-7fb9-405f-8fb4-b472d5ac00a9/volumes" Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.204372 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556712-jjq26"] Mar 13 11:52:00 crc kubenswrapper[4632]: E0313 11:52:00.204811 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fa6838-8789-4c78-873b-26a25f0abdf1" containerName="extract-content" Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.205743 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4fa6838-8789-4c78-873b-26a25f0abdf1" containerName="extract-content" Mar 13 11:52:00 crc kubenswrapper[4632]: E0313 11:52:00.205795 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" containerName="extract-utilities" Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.205805 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" containerName="extract-utilities" Mar 13 11:52:00 crc kubenswrapper[4632]: E0313 11:52:00.205828 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fa6838-8789-4c78-873b-26a25f0abdf1" containerName="extract-utilities" Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.205837 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4fa6838-8789-4c78-873b-26a25f0abdf1" containerName="extract-utilities" Mar 13 11:52:00 crc kubenswrapper[4632]: E0313 11:52:00.205852 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" containerName="extract-content" Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.205861 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" containerName="extract-content" Mar 13 11:52:00 crc kubenswrapper[4632]: E0313 11:52:00.205879 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" containerName="registry-server" Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.205887 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" containerName="registry-server" Mar 13 11:52:00 crc kubenswrapper[4632]: E0313 11:52:00.205902 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fa6838-8789-4c78-873b-26a25f0abdf1" containerName="registry-server" 
Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.205909 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4fa6838-8789-4c78-873b-26a25f0abdf1" containerName="registry-server"
Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.206153 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4fa6838-8789-4c78-873b-26a25f0abdf1" containerName="registry-server"
Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.206168 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9077b2a-7fb9-405f-8fb4-b472d5ac00a9" containerName="registry-server"
Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.206864 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556712-jjq26"
Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.222762 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.224929 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.226383 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.233654 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556712-jjq26"]
Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.344370 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmz6r\" (UniqueName: \"kubernetes.io/projected/02db6f2d-ef7c-4444-9776-603e9c44c55a-kube-api-access-vmz6r\") pod \"auto-csr-approver-29556712-jjq26\" (UID: \"02db6f2d-ef7c-4444-9776-603e9c44c55a\") " pod="openshift-infra/auto-csr-approver-29556712-jjq26"
Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.446525 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmz6r\" (UniqueName: \"kubernetes.io/projected/02db6f2d-ef7c-4444-9776-603e9c44c55a-kube-api-access-vmz6r\") pod \"auto-csr-approver-29556712-jjq26\" (UID: \"02db6f2d-ef7c-4444-9776-603e9c44c55a\") " pod="openshift-infra/auto-csr-approver-29556712-jjq26"
Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.474323 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmz6r\" (UniqueName: \"kubernetes.io/projected/02db6f2d-ef7c-4444-9776-603e9c44c55a-kube-api-access-vmz6r\") pod \"auto-csr-approver-29556712-jjq26\" (UID: \"02db6f2d-ef7c-4444-9776-603e9c44c55a\") " pod="openshift-infra/auto-csr-approver-29556712-jjq26"
Mar 13 11:52:00 crc kubenswrapper[4632]: I0313 11:52:00.530019 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556712-jjq26"
Mar 13 11:52:01 crc kubenswrapper[4632]: I0313 11:52:01.150440 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556712-jjq26"]
Mar 13 11:52:01 crc kubenswrapper[4632]: I0313 11:52:01.308279 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556712-jjq26" event={"ID":"02db6f2d-ef7c-4444-9776-603e9c44c55a","Type":"ContainerStarted","Data":"e06748d4089f667badd71fb0750ee1ffdc4180e878cbff38f612301d9128cb9a"}
Mar 13 11:52:03 crc kubenswrapper[4632]: I0313 11:52:03.327644 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556712-jjq26" event={"ID":"02db6f2d-ef7c-4444-9776-603e9c44c55a","Type":"ContainerStarted","Data":"1b3640b2bd3a5d0dabd874439006e72fb30cf0784909546f4e8957109f6ffcf0"}
Mar 13 11:52:03 crc kubenswrapper[4632]: I0313 11:52:03.345252 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556712-jjq26" podStartSLOduration=2.573870574 podStartE2EDuration="3.34523341s" podCreationTimestamp="2026-03-13 11:52:00 +0000 UTC" firstStartedPulling="2026-03-13 11:52:01.163430234 +0000 UTC m=+6495.185960367" lastFinishedPulling="2026-03-13 11:52:01.93479308 +0000 UTC m=+6495.957323203" observedRunningTime="2026-03-13 11:52:03.339284404 +0000 UTC m=+6497.361814537" watchObservedRunningTime="2026-03-13 11:52:03.34523341 +0000 UTC m=+6497.367763543"
Mar 13 11:52:04 crc kubenswrapper[4632]: I0313 11:52:04.340033 4632 generic.go:334] "Generic (PLEG): container finished" podID="02db6f2d-ef7c-4444-9776-603e9c44c55a" containerID="1b3640b2bd3a5d0dabd874439006e72fb30cf0784909546f4e8957109f6ffcf0" exitCode=0
Mar 13 11:52:04 crc kubenswrapper[4632]: I0313 11:52:04.340256 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556712-jjq26" event={"ID":"02db6f2d-ef7c-4444-9776-603e9c44c55a","Type":"ContainerDied","Data":"1b3640b2bd3a5d0dabd874439006e72fb30cf0784909546f4e8957109f6ffcf0"}
Mar 13 11:52:05 crc kubenswrapper[4632]: I0313 11:52:05.914503 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556712-jjq26"
Mar 13 11:52:06 crc kubenswrapper[4632]: I0313 11:52:06.063318 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmz6r\" (UniqueName: \"kubernetes.io/projected/02db6f2d-ef7c-4444-9776-603e9c44c55a-kube-api-access-vmz6r\") pod \"02db6f2d-ef7c-4444-9776-603e9c44c55a\" (UID: \"02db6f2d-ef7c-4444-9776-603e9c44c55a\") "
Mar 13 11:52:06 crc kubenswrapper[4632]: I0313 11:52:06.069874 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02db6f2d-ef7c-4444-9776-603e9c44c55a-kube-api-access-vmz6r" (OuterVolumeSpecName: "kube-api-access-vmz6r") pod "02db6f2d-ef7c-4444-9776-603e9c44c55a" (UID: "02db6f2d-ef7c-4444-9776-603e9c44c55a"). InnerVolumeSpecName "kube-api-access-vmz6r". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:52:06 crc kubenswrapper[4632]: I0313 11:52:06.167419 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmz6r\" (UniqueName: \"kubernetes.io/projected/02db6f2d-ef7c-4444-9776-603e9c44c55a-kube-api-access-vmz6r\") on node \"crc\" DevicePath \"\"" Mar 13 11:52:06 crc kubenswrapper[4632]: I0313 11:52:06.360544 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556712-jjq26" event={"ID":"02db6f2d-ef7c-4444-9776-603e9c44c55a","Type":"ContainerDied","Data":"e06748d4089f667badd71fb0750ee1ffdc4180e878cbff38f612301d9128cb9a"} Mar 13 11:52:06 crc kubenswrapper[4632]: I0313 11:52:06.360597 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e06748d4089f667badd71fb0750ee1ffdc4180e878cbff38f612301d9128cb9a" Mar 13 11:52:06 crc kubenswrapper[4632]: I0313 11:52:06.360669 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556712-jjq26" Mar 13 11:52:06 crc kubenswrapper[4632]: I0313 11:52:06.488487 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556706-psc7t"] Mar 13 11:52:06 crc kubenswrapper[4632]: I0313 11:52:06.496696 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556706-psc7t"] Mar 13 11:52:07 crc kubenswrapper[4632]: I0313 11:52:07.045416 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:52:07 crc kubenswrapper[4632]: E0313 11:52:07.045747 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:52:08 crc kubenswrapper[4632]: I0313 11:52:08.057120 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8980f067-488f-497f-8ba7-5ee2d3069d62" path="/var/lib/kubelet/pods/8980f067-488f-497f-8ba7-5ee2d3069d62/volumes" Mar 13 11:52:19 crc kubenswrapper[4632]: I0313 11:52:19.044154 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:52:19 crc kubenswrapper[4632]: E0313 11:52:19.044963 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:52:23 crc kubenswrapper[4632]: I0313 11:52:23.160077 4632 scope.go:117] "RemoveContainer" containerID="49f7bf435fba27e68a413e86a923b4ddacb7432c6b3ec46cefd0935c8e2aecc2" Mar 13 11:52:31 crc kubenswrapper[4632]: I0313 11:52:31.043881 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:52:31 crc kubenswrapper[4632]: E0313 11:52:31.044572 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:52:44 crc kubenswrapper[4632]: I0313 11:52:44.044766 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:52:44 crc kubenswrapper[4632]: E0313 11:52:44.045501 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:52:58 crc kubenswrapper[4632]: I0313 11:52:58.044260 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:52:58 crc kubenswrapper[4632]: E0313 11:52:58.045049 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:53:10 crc kubenswrapper[4632]: I0313 11:53:10.044315 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:53:10 crc kubenswrapper[4632]: E0313 11:53:10.045237 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:53:25 crc kubenswrapper[4632]: I0313 11:53:25.045275 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:53:25 crc kubenswrapper[4632]: E0313 11:53:25.046516 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:53:40 crc kubenswrapper[4632]: I0313 11:53:40.045319 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:53:40 crc kubenswrapper[4632]: E0313 11:53:40.047025 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Mar 13 11:53:52 crc kubenswrapper[4632]: I0313 11:53:52.047323 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6"
Mar 13 11:53:52 crc kubenswrapper[4632]: E0313 11:53:52.049092 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 11:54:00 crc kubenswrapper[4632]: I0313 11:54:00.159745 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556714-cnqc7"]
Mar 13 11:54:00 crc kubenswrapper[4632]: E0313 11:54:00.160845 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02db6f2d-ef7c-4444-9776-603e9c44c55a" containerName="oc"
Mar 13 11:54:00 crc kubenswrapper[4632]: I0313 11:54:00.160863 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="02db6f2d-ef7c-4444-9776-603e9c44c55a" containerName="oc"
Mar 13 11:54:00 crc kubenswrapper[4632]: I0313 11:54:00.161150 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="02db6f2d-ef7c-4444-9776-603e9c44c55a" containerName="oc"
Mar 13 11:54:00 crc kubenswrapper[4632]: I0313 11:54:00.162026 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556714-cnqc7"
Mar 13 11:54:00 crc kubenswrapper[4632]: I0313 11:54:00.164836 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 11:54:00 crc kubenswrapper[4632]: I0313 11:54:00.164921 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 11:54:00 crc kubenswrapper[4632]: I0313 11:54:00.164919 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 11:54:00 crc kubenswrapper[4632]: I0313 11:54:00.197266 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556714-cnqc7"]
Mar 13 11:54:00 crc kubenswrapper[4632]: I0313 11:54:00.284558 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmbkr\" (UniqueName: \"kubernetes.io/projected/88fb48fa-6650-4b22-b44f-d8c6f839489e-kube-api-access-rmbkr\") pod \"auto-csr-approver-29556714-cnqc7\" (UID: \"88fb48fa-6650-4b22-b44f-d8c6f839489e\") " pod="openshift-infra/auto-csr-approver-29556714-cnqc7"
Mar 13 11:54:00 crc kubenswrapper[4632]: I0313 11:54:00.387072 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmbkr\" (UniqueName: \"kubernetes.io/projected/88fb48fa-6650-4b22-b44f-d8c6f839489e-kube-api-access-rmbkr\") pod \"auto-csr-approver-29556714-cnqc7\" (UID: \"88fb48fa-6650-4b22-b44f-d8c6f839489e\") " pod="openshift-infra/auto-csr-approver-29556714-cnqc7"
Mar 13 11:54:00 crc kubenswrapper[4632]: I0313 11:54:00.408515 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmbkr\" (UniqueName: \"kubernetes.io/projected/88fb48fa-6650-4b22-b44f-d8c6f839489e-kube-api-access-rmbkr\") pod \"auto-csr-approver-29556714-cnqc7\" (UID: \"88fb48fa-6650-4b22-b44f-d8c6f839489e\") " pod="openshift-infra/auto-csr-approver-29556714-cnqc7"
Mar 13 11:54:00 crc kubenswrapper[4632]: I0313 11:54:00.482310 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556714-cnqc7"
Mar 13 11:54:01 crc kubenswrapper[4632]: I0313 11:54:01.038803 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556714-cnqc7"]
Mar 13 11:54:01 crc kubenswrapper[4632]: I0313 11:54:01.580662 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556714-cnqc7" event={"ID":"88fb48fa-6650-4b22-b44f-d8c6f839489e","Type":"ContainerStarted","Data":"a08cce79366424c102d56b429240137bb4b00fe174f555bd016d51d91a7ff38d"}
Mar 13 11:54:03 crc kubenswrapper[4632]: I0313 11:54:03.605527 4632 generic.go:334] "Generic (PLEG): container finished" podID="88fb48fa-6650-4b22-b44f-d8c6f839489e" containerID="a65145278e359710e5ff339e23940020997c56d82631e8e73d581b3ec62c80b2" exitCode=0
Mar 13 11:54:03 crc kubenswrapper[4632]: I0313 11:54:03.605933 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556714-cnqc7" event={"ID":"88fb48fa-6650-4b22-b44f-d8c6f839489e","Type":"ContainerDied","Data":"a65145278e359710e5ff339e23940020997c56d82631e8e73d581b3ec62c80b2"}
Mar 13 11:54:04 crc kubenswrapper[4632]: I0313 11:54:04.975697 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556714-cnqc7"
Mar 13 11:54:05 crc kubenswrapper[4632]: I0313 11:54:05.045053 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6"
Mar 13 11:54:05 crc kubenswrapper[4632]: E0313 11:54:05.045328 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 11:54:05 crc kubenswrapper[4632]: I0313 11:54:05.101767 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmbkr\" (UniqueName: \"kubernetes.io/projected/88fb48fa-6650-4b22-b44f-d8c6f839489e-kube-api-access-rmbkr\") pod \"88fb48fa-6650-4b22-b44f-d8c6f839489e\" (UID: \"88fb48fa-6650-4b22-b44f-d8c6f839489e\") "
Mar 13 11:54:05 crc kubenswrapper[4632]: I0313 11:54:05.112449 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88fb48fa-6650-4b22-b44f-d8c6f839489e-kube-api-access-rmbkr" (OuterVolumeSpecName: "kube-api-access-rmbkr") pod "88fb48fa-6650-4b22-b44f-d8c6f839489e" (UID: "88fb48fa-6650-4b22-b44f-d8c6f839489e"). InnerVolumeSpecName "kube-api-access-rmbkr". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:54:05 crc kubenswrapper[4632]: I0313 11:54:05.204049 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmbkr\" (UniqueName: \"kubernetes.io/projected/88fb48fa-6650-4b22-b44f-d8c6f839489e-kube-api-access-rmbkr\") on node \"crc\" DevicePath \"\"" Mar 13 11:54:05 crc kubenswrapper[4632]: I0313 11:54:05.628218 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556714-cnqc7" event={"ID":"88fb48fa-6650-4b22-b44f-d8c6f839489e","Type":"ContainerDied","Data":"a08cce79366424c102d56b429240137bb4b00fe174f555bd016d51d91a7ff38d"} Mar 13 11:54:05 crc kubenswrapper[4632]: I0313 11:54:05.628267 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a08cce79366424c102d56b429240137bb4b00fe174f555bd016d51d91a7ff38d" Mar 13 11:54:05 crc kubenswrapper[4632]: I0313 11:54:05.628329 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556714-cnqc7" Mar 13 11:54:06 crc kubenswrapper[4632]: I0313 11:54:06.080227 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556708-f6hsv"] Mar 13 11:54:06 crc kubenswrapper[4632]: I0313 11:54:06.091534 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556708-f6hsv"] Mar 13 11:54:08 crc kubenswrapper[4632]: I0313 11:54:08.068132 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5464d278-31e9-45aa-9e87-78ef3e96115e" path="/var/lib/kubelet/pods/5464d278-31e9-45aa-9e87-78ef3e96115e/volumes" Mar 13 11:54:18 crc kubenswrapper[4632]: I0313 11:54:18.050592 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:54:18 crc kubenswrapper[4632]: E0313 11:54:18.051390 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:54:23 crc kubenswrapper[4632]: I0313 11:54:23.297683 4632 scope.go:117] "RemoveContainer" containerID="6f4e30c3bf10310c255b10e3f6602511c866fd5af961f5f486fed69de586adb4" Mar 13 11:54:33 crc kubenswrapper[4632]: I0313 11:54:33.044833 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:54:33 crc kubenswrapper[4632]: E0313 11:54:33.045602 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:54:48 crc kubenswrapper[4632]: I0313 11:54:48.044338 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:54:48 crc kubenswrapper[4632]: E0313 11:54:48.045318 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:55:00 crc kubenswrapper[4632]: I0313 11:55:00.044883 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:55:00 crc kubenswrapper[4632]: E0313 11:55:00.046770 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:55:13 crc kubenswrapper[4632]: I0313 11:55:13.044617 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:55:13 crc kubenswrapper[4632]: E0313 11:55:13.045588 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:55:25 crc kubenswrapper[4632]: I0313 11:55:25.044104 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:55:25 crc kubenswrapper[4632]: E0313 11:55:25.044740 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:55:39 crc kubenswrapper[4632]: I0313 11:55:39.044043 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:55:39 crc kubenswrapper[4632]: E0313 11:55:39.044859 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:55:51 crc kubenswrapper[4632]: I0313 11:55:51.044336 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:55:51 crc kubenswrapper[4632]: E0313 11:55:51.045101 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Mar 13 11:56:00 crc kubenswrapper[4632]: I0313 11:56:00.145836 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556716-2gt9h"]
Mar 13 11:56:00 crc kubenswrapper[4632]: E0313 11:56:00.146761 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fb48fa-6650-4b22-b44f-d8c6f839489e" containerName="oc"
Mar 13 11:56:00 crc kubenswrapper[4632]: I0313 11:56:00.146774 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fb48fa-6650-4b22-b44f-d8c6f839489e" containerName="oc"
Mar 13 11:56:00 crc kubenswrapper[4632]: I0313 11:56:00.147011 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="88fb48fa-6650-4b22-b44f-d8c6f839489e" containerName="oc"
Mar 13 11:56:00 crc kubenswrapper[4632]: I0313 11:56:00.147644 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556716-2gt9h"
Mar 13 11:56:00 crc kubenswrapper[4632]: I0313 11:56:00.150526 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 11:56:00 crc kubenswrapper[4632]: I0313 11:56:00.150895 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 11:56:00 crc kubenswrapper[4632]: I0313 11:56:00.151448 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 11:56:00 crc kubenswrapper[4632]: I0313 11:56:00.167757 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556716-2gt9h"]
Mar 13 11:56:00 crc kubenswrapper[4632]: I0313 11:56:00.297354 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp4vw\" (UniqueName: \"kubernetes.io/projected/38cafee7-6e61-46de-b58b-48b8f7d41bf6-kube-api-access-cp4vw\") pod \"auto-csr-approver-29556716-2gt9h\" (UID: \"38cafee7-6e61-46de-b58b-48b8f7d41bf6\") " pod="openshift-infra/auto-csr-approver-29556716-2gt9h"
Mar 13 11:56:00 crc kubenswrapper[4632]: I0313 11:56:00.399373 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp4vw\" (UniqueName: \"kubernetes.io/projected/38cafee7-6e61-46de-b58b-48b8f7d41bf6-kube-api-access-cp4vw\") pod \"auto-csr-approver-29556716-2gt9h\" (UID: \"38cafee7-6e61-46de-b58b-48b8f7d41bf6\") " pod="openshift-infra/auto-csr-approver-29556716-2gt9h"
Mar 13 11:56:00 crc kubenswrapper[4632]: I0313 11:56:00.423599 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp4vw\" (UniqueName: \"kubernetes.io/projected/38cafee7-6e61-46de-b58b-48b8f7d41bf6-kube-api-access-cp4vw\") pod \"auto-csr-approver-29556716-2gt9h\" (UID: \"38cafee7-6e61-46de-b58b-48b8f7d41bf6\") " pod="openshift-infra/auto-csr-approver-29556716-2gt9h"
Mar 13 11:56:00 crc kubenswrapper[4632]: I0313 11:56:00.468705 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556716-2gt9h"
Mar 13 11:56:00 crc kubenswrapper[4632]: I0313 11:56:00.986980 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556716-2gt9h"]
Mar 13 11:56:00 crc kubenswrapper[4632]: I0313 11:56:00.987521 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 11:56:01 crc kubenswrapper[4632]: I0313 11:56:01.761303 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556716-2gt9h" event={"ID":"38cafee7-6e61-46de-b58b-48b8f7d41bf6","Type":"ContainerStarted","Data":"f3857abe0b359cb3a75944205d422467fa9c52bab1373b4c2e3b63d8fc99dfb7"}
Mar 13 11:56:02 crc kubenswrapper[4632]: I0313 11:56:02.773904 4632 generic.go:334] "Generic (PLEG): container finished" podID="38cafee7-6e61-46de-b58b-48b8f7d41bf6" containerID="334a9f675c9c77aba9558302bf96e3547c17123adf9873e85b3c3871bccb4465" exitCode=0
Mar 13 11:56:02 crc kubenswrapper[4632]: I0313 11:56:02.774191 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556716-2gt9h" event={"ID":"38cafee7-6e61-46de-b58b-48b8f7d41bf6","Type":"ContainerDied","Data":"334a9f675c9c77aba9558302bf96e3547c17123adf9873e85b3c3871bccb4465"}
Mar 13 11:56:03 crc kubenswrapper[4632]: I0313 11:56:03.044528 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6"
Mar 13 11:56:03 crc kubenswrapper[4632]: E0313 11:56:03.045029 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 11:56:04 crc kubenswrapper[4632]: I0313 11:56:04.190696 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556716-2gt9h"
Mar 13 11:56:04 crc kubenswrapper[4632]: I0313 11:56:04.218730 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp4vw\" (UniqueName: \"kubernetes.io/projected/38cafee7-6e61-46de-b58b-48b8f7d41bf6-kube-api-access-cp4vw\") pod \"38cafee7-6e61-46de-b58b-48b8f7d41bf6\" (UID: \"38cafee7-6e61-46de-b58b-48b8f7d41bf6\") "
Mar 13 11:56:04 crc kubenswrapper[4632]: I0313 11:56:04.233447 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38cafee7-6e61-46de-b58b-48b8f7d41bf6-kube-api-access-cp4vw" (OuterVolumeSpecName: "kube-api-access-cp4vw") pod "38cafee7-6e61-46de-b58b-48b8f7d41bf6" (UID: "38cafee7-6e61-46de-b58b-48b8f7d41bf6"). InnerVolumeSpecName "kube-api-access-cp4vw". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:56:04 crc kubenswrapper[4632]: I0313 11:56:04.320407 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp4vw\" (UniqueName: \"kubernetes.io/projected/38cafee7-6e61-46de-b58b-48b8f7d41bf6-kube-api-access-cp4vw\") on node \"crc\" DevicePath \"\"" Mar 13 11:56:04 crc kubenswrapper[4632]: I0313 11:56:04.798179 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556716-2gt9h" event={"ID":"38cafee7-6e61-46de-b58b-48b8f7d41bf6","Type":"ContainerDied","Data":"f3857abe0b359cb3a75944205d422467fa9c52bab1373b4c2e3b63d8fc99dfb7"} Mar 13 11:56:04 crc kubenswrapper[4632]: I0313 11:56:04.798221 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556716-2gt9h" Mar 13 11:56:04 crc kubenswrapper[4632]: I0313 11:56:04.798247 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3857abe0b359cb3a75944205d422467fa9c52bab1373b4c2e3b63d8fc99dfb7" Mar 13 11:56:05 crc kubenswrapper[4632]: I0313 11:56:05.273152 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556710-t4gtk"] Mar 13 11:56:05 crc kubenswrapper[4632]: I0313 11:56:05.282358 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556710-t4gtk"] Mar 13 11:56:06 crc kubenswrapper[4632]: I0313 11:56:06.055115 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca" path="/var/lib/kubelet/pods/38d73c58-f065-4efc-9fe2-b6c0ed9fa5ca/volumes" Mar 13 11:56:15 crc kubenswrapper[4632]: I0313 11:56:15.044338 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:56:15 crc kubenswrapper[4632]: E0313 11:56:15.045175 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:56:23 crc kubenswrapper[4632]: I0313 11:56:23.439967 4632 scope.go:117] "RemoveContainer" containerID="57e073e1e04617c49dfbb2c194d77f02cda77aac917eb626f73490bd0abacbcb" Mar 13 11:56:26 crc kubenswrapper[4632]: I0313 11:56:26.044705 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:56:26 crc kubenswrapper[4632]: E0313 11:56:26.045446 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 11:56:41 crc kubenswrapper[4632]: I0313 11:56:41.044273 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 11:56:42 crc kubenswrapper[4632]: I0313 11:56:42.133285 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"f1337fb64ab38c0f489a591d3b3f173d13428642427113f1891b2f17a626304e"} Mar 13 11:58:00 crc kubenswrapper[4632]: I0313 11:58:00.177079 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556718-xxtsr"] Mar 13 11:58:00 crc kubenswrapper[4632]: E0313 11:58:00.178183 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38cafee7-6e61-46de-b58b-48b8f7d41bf6" containerName="oc" Mar 13 11:58:00 crc kubenswrapper[4632]: I0313 11:58:00.178199 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="38cafee7-6e61-46de-b58b-48b8f7d41bf6" containerName="oc" Mar 13 11:58:00 crc kubenswrapper[4632]: I0313 11:58:00.178403 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="38cafee7-6e61-46de-b58b-48b8f7d41bf6" containerName="oc" Mar 13 11:58:00 crc kubenswrapper[4632]: I0313 11:58:00.180134 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556718-xxtsr" Mar 13 11:58:00 crc kubenswrapper[4632]: I0313 11:58:00.184853 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 11:58:00 crc kubenswrapper[4632]: I0313 11:58:00.184860 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 11:58:00 crc kubenswrapper[4632]: I0313 11:58:00.187338 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 11:58:00 crc kubenswrapper[4632]: I0313 11:58:00.187557 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556718-xxtsr"] Mar 13 11:58:00 crc kubenswrapper[4632]: I0313 11:58:00.325490 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8275x\" (UniqueName: \"kubernetes.io/projected/beb533c7-a735-47fa-b5fa-67b1bcba9787-kube-api-access-8275x\") pod \"auto-csr-approver-29556718-xxtsr\" (UID: \"beb533c7-a735-47fa-b5fa-67b1bcba9787\") " pod="openshift-infra/auto-csr-approver-29556718-xxtsr" Mar 13 11:58:00 crc kubenswrapper[4632]: I0313 11:58:00.427696 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8275x\" (UniqueName: \"kubernetes.io/projected/beb533c7-a735-47fa-b5fa-67b1bcba9787-kube-api-access-8275x\") pod \"auto-csr-approver-29556718-xxtsr\" (UID: \"beb533c7-a735-47fa-b5fa-67b1bcba9787\") " pod="openshift-infra/auto-csr-approver-29556718-xxtsr" Mar 13 11:58:00 crc kubenswrapper[4632]: I0313 11:58:00.496778 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8275x\" (UniqueName: \"kubernetes.io/projected/beb533c7-a735-47fa-b5fa-67b1bcba9787-kube-api-access-8275x\") pod \"auto-csr-approver-29556718-xxtsr\" (UID: \"beb533c7-a735-47fa-b5fa-67b1bcba9787\") " pod="openshift-infra/auto-csr-approver-29556718-xxtsr" Mar 13 11:58:00 crc kubenswrapper[4632]: I0313 11:58:00.508843 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556718-xxtsr" Mar 13 11:58:02 crc kubenswrapper[4632]: I0313 11:58:02.170004 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556718-xxtsr"] Mar 13 11:58:02 crc kubenswrapper[4632]: I0313 11:58:02.882999 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556718-xxtsr" event={"ID":"beb533c7-a735-47fa-b5fa-67b1bcba9787","Type":"ContainerStarted","Data":"0d527ee16e85671e3e98d42a2ffcefa3d907752f70215b9be0fc57b705f6902c"} Mar 13 11:58:05 crc kubenswrapper[4632]: I0313 11:58:05.911803 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556718-xxtsr" event={"ID":"beb533c7-a735-47fa-b5fa-67b1bcba9787","Type":"ContainerStarted","Data":"de5a0b9383a1bdabde0e1290cb2d2e2341dbc3e19f3a7e552782ac9f0501a7ce"} Mar 13 11:58:05 crc kubenswrapper[4632]: I0313 11:58:05.931981 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556718-xxtsr" podStartSLOduration=5.137630648 podStartE2EDuration="5.931958709s" podCreationTimestamp="2026-03-13 11:58:00 +0000 UTC" firstStartedPulling="2026-03-13 11:58:02.204507146 +0000 UTC m=+6856.227037279" lastFinishedPulling="2026-03-13 11:58:02.998835207 +0000 UTC m=+6857.021365340" observedRunningTime="2026-03-13 11:58:05.924899244 +0000 UTC m=+6859.947429377" watchObservedRunningTime="2026-03-13 11:58:05.931958709 +0000 UTC m=+6859.954488852" Mar 13 11:58:09 crc kubenswrapper[4632]: I0313 11:58:09.989054 4632 generic.go:334] "Generic (PLEG): container finished" podID="beb533c7-a735-47fa-b5fa-67b1bcba9787" containerID="de5a0b9383a1bdabde0e1290cb2d2e2341dbc3e19f3a7e552782ac9f0501a7ce" exitCode=0 Mar 13 11:58:09 crc kubenswrapper[4632]: I0313 11:58:09.989117 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556718-xxtsr" event={"ID":"beb533c7-a735-47fa-b5fa-67b1bcba9787","Type":"ContainerDied","Data":"de5a0b9383a1bdabde0e1290cb2d2e2341dbc3e19f3a7e552782ac9f0501a7ce"} Mar 13 11:58:11 crc kubenswrapper[4632]: I0313 11:58:11.410041 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556718-xxtsr" Mar 13 11:58:11 crc kubenswrapper[4632]: I0313 11:58:11.574173 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8275x\" (UniqueName: \"kubernetes.io/projected/beb533c7-a735-47fa-b5fa-67b1bcba9787-kube-api-access-8275x\") pod \"beb533c7-a735-47fa-b5fa-67b1bcba9787\" (UID: \"beb533c7-a735-47fa-b5fa-67b1bcba9787\") " Mar 13 11:58:11 crc kubenswrapper[4632]: I0313 11:58:11.586326 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb533c7-a735-47fa-b5fa-67b1bcba9787-kube-api-access-8275x" (OuterVolumeSpecName: "kube-api-access-8275x") pod "beb533c7-a735-47fa-b5fa-67b1bcba9787" (UID: "beb533c7-a735-47fa-b5fa-67b1bcba9787"). InnerVolumeSpecName "kube-api-access-8275x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:58:11 crc kubenswrapper[4632]: I0313 11:58:11.676198 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8275x\" (UniqueName: \"kubernetes.io/projected/beb533c7-a735-47fa-b5fa-67b1bcba9787-kube-api-access-8275x\") on node \"crc\" DevicePath \"\"" Mar 13 11:58:12 crc kubenswrapper[4632]: I0313 11:58:12.010296 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556718-xxtsr" event={"ID":"beb533c7-a735-47fa-b5fa-67b1bcba9787","Type":"ContainerDied","Data":"0d527ee16e85671e3e98d42a2ffcefa3d907752f70215b9be0fc57b705f6902c"} Mar 13 11:58:12 crc kubenswrapper[4632]: I0313 11:58:12.010336 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d527ee16e85671e3e98d42a2ffcefa3d907752f70215b9be0fc57b705f6902c" Mar 13 11:58:12 crc kubenswrapper[4632]: I0313 11:58:12.010407 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556718-xxtsr" Mar 13 11:58:12 crc kubenswrapper[4632]: I0313 11:58:12.097586 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556712-jjq26"] Mar 13 11:58:12 crc kubenswrapper[4632]: I0313 11:58:12.109011 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556712-jjq26"] Mar 13 11:58:14 crc kubenswrapper[4632]: I0313 11:58:14.059342 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02db6f2d-ef7c-4444-9776-603e9c44c55a" path="/var/lib/kubelet/pods/02db6f2d-ef7c-4444-9776-603e9c44c55a/volumes" Mar 13 11:58:23 crc kubenswrapper[4632]: I0313 11:58:23.843574 4632 scope.go:117] "RemoveContainer" containerID="1b3640b2bd3a5d0dabd874439006e72fb30cf0784909546f4e8957109f6ffcf0" Mar 13 11:58:37 crc kubenswrapper[4632]: I0313 11:58:37.532305 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-drgch"] Mar 13 11:58:37 crc kubenswrapper[4632]: E0313 11:58:37.534403 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb533c7-a735-47fa-b5fa-67b1bcba9787" containerName="oc" Mar 13 11:58:37 crc kubenswrapper[4632]: I0313 11:58:37.534497 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb533c7-a735-47fa-b5fa-67b1bcba9787" containerName="oc" Mar 13 11:58:37 crc kubenswrapper[4632]: I0313 11:58:37.534799 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="beb533c7-a735-47fa-b5fa-67b1bcba9787" containerName="oc" Mar 13 11:58:37 crc kubenswrapper[4632]: I0313 11:58:37.537686 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:58:37 crc kubenswrapper[4632]: I0313 11:58:37.553761 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-drgch"] Mar 13 11:58:37 crc kubenswrapper[4632]: I0313 11:58:37.691547 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nr58\" (UniqueName: \"kubernetes.io/projected/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-kube-api-access-7nr58\") pod \"redhat-marketplace-drgch\" (UID: \"6cd708bb-8ca4-4c04-95f1-dfb27bee6832\") " pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:58:37 crc kubenswrapper[4632]: I0313 11:58:37.691972 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-catalog-content\") pod \"redhat-marketplace-drgch\" (UID: \"6cd708bb-8ca4-4c04-95f1-dfb27bee6832\") " pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:58:37 crc kubenswrapper[4632]: I0313 11:58:37.692098 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-utilities\") pod \"redhat-marketplace-drgch\" (UID: \"6cd708bb-8ca4-4c04-95f1-dfb27bee6832\") " pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:58:37 crc kubenswrapper[4632]: I0313 11:58:37.794267 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nr58\" (UniqueName: \"kubernetes.io/projected/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-kube-api-access-7nr58\") pod \"redhat-marketplace-drgch\" (UID: \"6cd708bb-8ca4-4c04-95f1-dfb27bee6832\") " pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:58:37 crc kubenswrapper[4632]: I0313 11:58:37.794346 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-catalog-content\") pod \"redhat-marketplace-drgch\" (UID: \"6cd708bb-8ca4-4c04-95f1-dfb27bee6832\") " pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:58:37 crc kubenswrapper[4632]: I0313 11:58:37.794374 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-utilities\") pod \"redhat-marketplace-drgch\" (UID: \"6cd708bb-8ca4-4c04-95f1-dfb27bee6832\") " pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:58:37 crc kubenswrapper[4632]: I0313 11:58:37.795103 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-utilities\") pod \"redhat-marketplace-drgch\" (UID: \"6cd708bb-8ca4-4c04-95f1-dfb27bee6832\") " pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:58:37 crc kubenswrapper[4632]: I0313 11:58:37.795371 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-catalog-content\") pod \"redhat-marketplace-drgch\" (UID: \"6cd708bb-8ca4-4c04-95f1-dfb27bee6832\") " pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:58:37 crc kubenswrapper[4632]: I0313 11:58:37.829468 4632 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7nr58\" (UniqueName: \"kubernetes.io/projected/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-kube-api-access-7nr58\") pod \"redhat-marketplace-drgch\" (UID: \"6cd708bb-8ca4-4c04-95f1-dfb27bee6832\") " pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:58:37 crc kubenswrapper[4632]: I0313 11:58:37.857618 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:58:38 crc kubenswrapper[4632]: I0313 11:58:38.366816 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-drgch"] Mar 13 11:58:39 crc kubenswrapper[4632]: I0313 11:58:39.281215 4632 generic.go:334] "Generic (PLEG): container finished" podID="6cd708bb-8ca4-4c04-95f1-dfb27bee6832" containerID="0785a5b501b04e1d4e9bc948bd7a7407375731736568cf4ddf2b272fd73d83f6" exitCode=0 Mar 13 11:58:39 crc kubenswrapper[4632]: I0313 11:58:39.281263 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drgch" event={"ID":"6cd708bb-8ca4-4c04-95f1-dfb27bee6832","Type":"ContainerDied","Data":"0785a5b501b04e1d4e9bc948bd7a7407375731736568cf4ddf2b272fd73d83f6"} Mar 13 11:58:39 crc kubenswrapper[4632]: I0313 11:58:39.281442 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drgch" event={"ID":"6cd708bb-8ca4-4c04-95f1-dfb27bee6832","Type":"ContainerStarted","Data":"9679a196913903ccdb588bb8ba90e2da048eff47535c211d5b2101f6e140b063"} Mar 13 11:58:41 crc kubenswrapper[4632]: I0313 11:58:41.301304 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drgch" event={"ID":"6cd708bb-8ca4-4c04-95f1-dfb27bee6832","Type":"ContainerStarted","Data":"5efb7453218a8d0b7abc9c2463a8936543c07c4d4dc2ed95212fd8a5cd84d6d3"} Mar 13 11:58:42 crc kubenswrapper[4632]: I0313 11:58:42.317642 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drgch" event={"ID":"6cd708bb-8ca4-4c04-95f1-dfb27bee6832","Type":"ContainerDied","Data":"5efb7453218a8d0b7abc9c2463a8936543c07c4d4dc2ed95212fd8a5cd84d6d3"} Mar 13 11:58:42 crc kubenswrapper[4632]: I0313 11:58:42.317699 4632 generic.go:334] "Generic (PLEG): container finished" podID="6cd708bb-8ca4-4c04-95f1-dfb27bee6832" containerID="5efb7453218a8d0b7abc9c2463a8936543c07c4d4dc2ed95212fd8a5cd84d6d3" exitCode=0 Mar 13 11:58:43 crc kubenswrapper[4632]: I0313 11:58:43.329111 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drgch" event={"ID":"6cd708bb-8ca4-4c04-95f1-dfb27bee6832","Type":"ContainerStarted","Data":"8c3253a43857fd1fd4bdd5757007dffd53fbea2cdd3ce32c464af84fa8752306"} Mar 13 11:58:43 crc kubenswrapper[4632]: I0313 11:58:43.352695 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-drgch" podStartSLOduration=2.919448954 podStartE2EDuration="6.352674241s" podCreationTimestamp="2026-03-13 11:58:37 +0000 UTC" firstStartedPulling="2026-03-13 11:58:39.284482431 +0000 UTC m=+6893.307012564" lastFinishedPulling="2026-03-13 11:58:42.717707718 +0000 UTC m=+6896.740237851" observedRunningTime="2026-03-13 11:58:43.346016768 +0000 UTC m=+6897.368546921" watchObservedRunningTime="2026-03-13 11:58:43.352674241 +0000 UTC m=+6897.375204394" Mar 13 11:58:47 crc kubenswrapper[4632]: I0313 11:58:47.858594 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:58:47 crc kubenswrapper[4632]: I0313 11:58:47.859192 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:58:48 crc kubenswrapper[4632]: I0313 11:58:48.912501 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-drgch" podUID="6cd708bb-8ca4-4c04-95f1-dfb27bee6832" containerName="registry-server" probeResult="failure" output=< Mar 13 11:58:48 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:58:48 crc kubenswrapper[4632]: > Mar 13 11:58:57 crc kubenswrapper[4632]: I0313 11:58:57.909998 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:58:57 crc kubenswrapper[4632]: I0313 11:58:57.973266 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:58:58 crc kubenswrapper[4632]: I0313 11:58:58.157706 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-drgch"] Mar 13 11:58:59 crc kubenswrapper[4632]: I0313 11:58:59.534831 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-drgch" podUID="6cd708bb-8ca4-4c04-95f1-dfb27bee6832" containerName="registry-server" containerID="cri-o://8c3253a43857fd1fd4bdd5757007dffd53fbea2cdd3ce32c464af84fa8752306" gracePeriod=2 Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.068533 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.181074 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-utilities\") pod \"6cd708bb-8ca4-4c04-95f1-dfb27bee6832\" (UID: \"6cd708bb-8ca4-4c04-95f1-dfb27bee6832\") " Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.181167 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nr58\" (UniqueName: \"kubernetes.io/projected/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-kube-api-access-7nr58\") pod \"6cd708bb-8ca4-4c04-95f1-dfb27bee6832\" (UID: \"6cd708bb-8ca4-4c04-95f1-dfb27bee6832\") " Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.181277 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-catalog-content\") pod \"6cd708bb-8ca4-4c04-95f1-dfb27bee6832\" (UID: \"6cd708bb-8ca4-4c04-95f1-dfb27bee6832\") " Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.182358 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-utilities" (OuterVolumeSpecName: "utilities") pod "6cd708bb-8ca4-4c04-95f1-dfb27bee6832" (UID: "6cd708bb-8ca4-4c04-95f1-dfb27bee6832"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.183454 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.190660 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-kube-api-access-7nr58" (OuterVolumeSpecName: "kube-api-access-7nr58") pod "6cd708bb-8ca4-4c04-95f1-dfb27bee6832" (UID: "6cd708bb-8ca4-4c04-95f1-dfb27bee6832"). InnerVolumeSpecName "kube-api-access-7nr58". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.213086 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cd708bb-8ca4-4c04-95f1-dfb27bee6832" (UID: "6cd708bb-8ca4-4c04-95f1-dfb27bee6832"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.285331 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nr58\" (UniqueName: \"kubernetes.io/projected/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-kube-api-access-7nr58\") on node \"crc\" DevicePath \"\"" Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.285409 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd708bb-8ca4-4c04-95f1-dfb27bee6832-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.545617 4632 generic.go:334] "Generic (PLEG): container finished" podID="6cd708bb-8ca4-4c04-95f1-dfb27bee6832" containerID="8c3253a43857fd1fd4bdd5757007dffd53fbea2cdd3ce32c464af84fa8752306" exitCode=0 Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.545668 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drgch" event={"ID":"6cd708bb-8ca4-4c04-95f1-dfb27bee6832","Type":"ContainerDied","Data":"8c3253a43857fd1fd4bdd5757007dffd53fbea2cdd3ce32c464af84fa8752306"} Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.545698 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drgch" event={"ID":"6cd708bb-8ca4-4c04-95f1-dfb27bee6832","Type":"ContainerDied","Data":"9679a196913903ccdb588bb8ba90e2da048eff47535c211d5b2101f6e140b063"} Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.545720 4632 scope.go:117] "RemoveContainer" containerID="8c3253a43857fd1fd4bdd5757007dffd53fbea2cdd3ce32c464af84fa8752306" Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.545871 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-drgch" Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.588295 4632 scope.go:117] "RemoveContainer" containerID="5efb7453218a8d0b7abc9c2463a8936543c07c4d4dc2ed95212fd8a5cd84d6d3" Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.590744 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-drgch"] Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.608299 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-drgch"] Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.614514 4632 scope.go:117] "RemoveContainer" containerID="0785a5b501b04e1d4e9bc948bd7a7407375731736568cf4ddf2b272fd73d83f6" Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.668467 4632 scope.go:117] "RemoveContainer" containerID="8c3253a43857fd1fd4bdd5757007dffd53fbea2cdd3ce32c464af84fa8752306" Mar 13 11:59:00 crc kubenswrapper[4632]: E0313 11:59:00.670025 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c3253a43857fd1fd4bdd5757007dffd53fbea2cdd3ce32c464af84fa8752306\": container with ID starting with 8c3253a43857fd1fd4bdd5757007dffd53fbea2cdd3ce32c464af84fa8752306 not found: ID does not exist" containerID="8c3253a43857fd1fd4bdd5757007dffd53fbea2cdd3ce32c464af84fa8752306" Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.670062 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3253a43857fd1fd4bdd5757007dffd53fbea2cdd3ce32c464af84fa8752306"} err="failed to get container status \"8c3253a43857fd1fd4bdd5757007dffd53fbea2cdd3ce32c464af84fa8752306\": rpc error: code = NotFound desc = could not find container \"8c3253a43857fd1fd4bdd5757007dffd53fbea2cdd3ce32c464af84fa8752306\": container with ID starting with 8c3253a43857fd1fd4bdd5757007dffd53fbea2cdd3ce32c464af84fa8752306 not found: ID does not exist" Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.670087 4632 scope.go:117] "RemoveContainer" containerID="5efb7453218a8d0b7abc9c2463a8936543c07c4d4dc2ed95212fd8a5cd84d6d3" Mar 13 11:59:00 crc kubenswrapper[4632]: E0313 11:59:00.670576 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5efb7453218a8d0b7abc9c2463a8936543c07c4d4dc2ed95212fd8a5cd84d6d3\": container with ID starting with 5efb7453218a8d0b7abc9c2463a8936543c07c4d4dc2ed95212fd8a5cd84d6d3 not found: ID does not exist" containerID="5efb7453218a8d0b7abc9c2463a8936543c07c4d4dc2ed95212fd8a5cd84d6d3" Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.670601 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5efb7453218a8d0b7abc9c2463a8936543c07c4d4dc2ed95212fd8a5cd84d6d3"} err="failed to get container status \"5efb7453218a8d0b7abc9c2463a8936543c07c4d4dc2ed95212fd8a5cd84d6d3\": rpc error: code = NotFound desc = could not find container \"5efb7453218a8d0b7abc9c2463a8936543c07c4d4dc2ed95212fd8a5cd84d6d3\": container with ID starting with 5efb7453218a8d0b7abc9c2463a8936543c07c4d4dc2ed95212fd8a5cd84d6d3 not found: ID does not exist" Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.670616 4632 scope.go:117] "RemoveContainer" containerID="0785a5b501b04e1d4e9bc948bd7a7407375731736568cf4ddf2b272fd73d83f6" Mar 13 11:59:00 crc kubenswrapper[4632]: E0313 11:59:00.672187 4632 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0785a5b501b04e1d4e9bc948bd7a7407375731736568cf4ddf2b272fd73d83f6\": container with ID starting with 0785a5b501b04e1d4e9bc948bd7a7407375731736568cf4ddf2b272fd73d83f6 not found: ID does not exist" containerID="0785a5b501b04e1d4e9bc948bd7a7407375731736568cf4ddf2b272fd73d83f6" Mar 13 11:59:00 crc kubenswrapper[4632]: I0313 11:59:00.672216 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0785a5b501b04e1d4e9bc948bd7a7407375731736568cf4ddf2b272fd73d83f6"} err="failed to get container status \"0785a5b501b04e1d4e9bc948bd7a7407375731736568cf4ddf2b272fd73d83f6\": rpc error: code = NotFound desc = could not find container \"0785a5b501b04e1d4e9bc948bd7a7407375731736568cf4ddf2b272fd73d83f6\": container with ID starting with 0785a5b501b04e1d4e9bc948bd7a7407375731736568cf4ddf2b272fd73d83f6 not found: ID does not exist" Mar 13 11:59:02 crc kubenswrapper[4632]: I0313 11:59:02.067392 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cd708bb-8ca4-4c04-95f1-dfb27bee6832" path="/var/lib/kubelet/pods/6cd708bb-8ca4-4c04-95f1-dfb27bee6832/volumes" Mar 13 11:59:10 crc kubenswrapper[4632]: I0313 11:59:10.461425 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:59:10 crc kubenswrapper[4632]: I0313 11:59:10.461859 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 11:59:16.665099 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wsx2g"] Mar 13 11:59:16 crc kubenswrapper[4632]: E0313 11:59:16.666290 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd708bb-8ca4-4c04-95f1-dfb27bee6832" containerName="registry-server" Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 11:59:16.666314 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd708bb-8ca4-4c04-95f1-dfb27bee6832" containerName="registry-server" Mar 13 11:59:16 crc kubenswrapper[4632]: E0313 11:59:16.666357 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd708bb-8ca4-4c04-95f1-dfb27bee6832" containerName="extract-content" Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 11:59:16.666367 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd708bb-8ca4-4c04-95f1-dfb27bee6832" containerName="extract-content" Mar 13 11:59:16 crc kubenswrapper[4632]: E0313 11:59:16.666384 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd708bb-8ca4-4c04-95f1-dfb27bee6832" containerName="extract-utilities" Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 11:59:16.666395 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd708bb-8ca4-4c04-95f1-dfb27bee6832" containerName="extract-utilities" Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 11:59:16.666656 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd708bb-8ca4-4c04-95f1-dfb27bee6832" containerName="registry-server" Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 
Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 11:59:16.668569 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 11:59:16.707043 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wsx2g"] Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 11:59:16.832425 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shzdx\" (UniqueName: \"kubernetes.io/projected/91e35ce1-c54e-46e0-aa1f-01eed2268826-kube-api-access-shzdx\") pod \"community-operators-wsx2g\" (UID: \"91e35ce1-c54e-46e0-aa1f-01eed2268826\") " pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 11:59:16.832738 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91e35ce1-c54e-46e0-aa1f-01eed2268826-catalog-content\") pod \"community-operators-wsx2g\" (UID: \"91e35ce1-c54e-46e0-aa1f-01eed2268826\") " pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 11:59:16.833004 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91e35ce1-c54e-46e0-aa1f-01eed2268826-utilities\") pod \"community-operators-wsx2g\" (UID: \"91e35ce1-c54e-46e0-aa1f-01eed2268826\") " pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 11:59:16.935181 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91e35ce1-c54e-46e0-aa1f-01eed2268826-utilities\") pod \"community-operators-wsx2g\" (UID: \"91e35ce1-c54e-46e0-aa1f-01eed2268826\") " pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 11:59:16.935254 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shzdx\" (UniqueName: \"kubernetes.io/projected/91e35ce1-c54e-46e0-aa1f-01eed2268826-kube-api-access-shzdx\") pod \"community-operators-wsx2g\" (UID: \"91e35ce1-c54e-46e0-aa1f-01eed2268826\") " pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 11:59:16.935293 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91e35ce1-c54e-46e0-aa1f-01eed2268826-catalog-content\") pod \"community-operators-wsx2g\" (UID: \"91e35ce1-c54e-46e0-aa1f-01eed2268826\") " pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 11:59:16.935863 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91e35ce1-c54e-46e0-aa1f-01eed2268826-catalog-content\") pod \"community-operators-wsx2g\" (UID: \"91e35ce1-c54e-46e0-aa1f-01eed2268826\") " pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 11:59:16.936248 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91e35ce1-c54e-46e0-aa1f-01eed2268826-utilities\") pod \"community-operators-wsx2g\" (UID: \"91e35ce1-c54e-46e0-aa1f-01eed2268826\") " pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:16 crc
kubenswrapper[4632]: I0313 11:59:16.960831 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shzdx\" (UniqueName: \"kubernetes.io/projected/91e35ce1-c54e-46e0-aa1f-01eed2268826-kube-api-access-shzdx\") pod \"community-operators-wsx2g\" (UID: \"91e35ce1-c54e-46e0-aa1f-01eed2268826\") " pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:16 crc kubenswrapper[4632]: I0313 11:59:16.995646 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:17 crc kubenswrapper[4632]: W0313 11:59:17.562626 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91e35ce1_c54e_46e0_aa1f_01eed2268826.slice/crio-64bdcb0f99dfc86dc26ea146e91dc9717e2b551a594e5de0e4aa29b35e623e4d WatchSource:0}: Error finding container 64bdcb0f99dfc86dc26ea146e91dc9717e2b551a594e5de0e4aa29b35e623e4d: Status 404 returned error can't find the container with id 64bdcb0f99dfc86dc26ea146e91dc9717e2b551a594e5de0e4aa29b35e623e4d Mar 13 11:59:17 crc kubenswrapper[4632]: I0313 11:59:17.565933 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wsx2g"] Mar 13 11:59:17 crc kubenswrapper[4632]: I0313 11:59:17.705484 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsx2g" event={"ID":"91e35ce1-c54e-46e0-aa1f-01eed2268826","Type":"ContainerStarted","Data":"64bdcb0f99dfc86dc26ea146e91dc9717e2b551a594e5de0e4aa29b35e623e4d"} Mar 13 11:59:18 crc kubenswrapper[4632]: I0313 11:59:18.716311 4632 generic.go:334] "Generic (PLEG): container finished" podID="91e35ce1-c54e-46e0-aa1f-01eed2268826" containerID="fd80794757f43c71fc7435b850cacfdc1cc26db2bff61d8d3f5c45e9fc5f1169" exitCode=0 Mar 13 11:59:18 crc kubenswrapper[4632]: I0313 11:59:18.716421 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsx2g" event={"ID":"91e35ce1-c54e-46e0-aa1f-01eed2268826","Type":"ContainerDied","Data":"fd80794757f43c71fc7435b850cacfdc1cc26db2bff61d8d3f5c45e9fc5f1169"} Mar 13 11:59:20 crc kubenswrapper[4632]: I0313 11:59:20.734987 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsx2g" event={"ID":"91e35ce1-c54e-46e0-aa1f-01eed2268826","Type":"ContainerStarted","Data":"5e4654df138739a880c0cf4bc5f6fd259cb10ae8d73f8303664c6a10dbea639a"} Mar 13 11:59:23 crc kubenswrapper[4632]: I0313 11:59:23.767554 4632 generic.go:334] "Generic (PLEG): container finished" podID="91e35ce1-c54e-46e0-aa1f-01eed2268826" containerID="5e4654df138739a880c0cf4bc5f6fd259cb10ae8d73f8303664c6a10dbea639a" exitCode=0 Mar 13 11:59:23 crc kubenswrapper[4632]: I0313 11:59:23.767762 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsx2g" event={"ID":"91e35ce1-c54e-46e0-aa1f-01eed2268826","Type":"ContainerDied","Data":"5e4654df138739a880c0cf4bc5f6fd259cb10ae8d73f8303664c6a10dbea639a"} Mar 13 11:59:24 crc kubenswrapper[4632]: I0313 11:59:24.779337 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsx2g" event={"ID":"91e35ce1-c54e-46e0-aa1f-01eed2268826","Type":"ContainerStarted","Data":"ac32c8bb9b1ef33627f10d2b87bef22688c20d954eb247669d8659c5d11a8b81"} Mar 13 11:59:24 crc kubenswrapper[4632]: I0313 11:59:24.800622 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wsx2g" podStartSLOduration=3.11447268 podStartE2EDuration="8.800601172s" podCreationTimestamp="2026-03-13 11:59:16 +0000 UTC" firstStartedPulling="2026-03-13 11:59:18.718513864 +0000 UTC m=+6932.741043997" lastFinishedPulling="2026-03-13 11:59:24.404642356 +0000 UTC m=+6938.427172489" observedRunningTime="2026-03-13 11:59:24.796342968 +0000 UTC m=+6938.818873101" watchObservedRunningTime="2026-03-13 11:59:24.800601172 +0000 UTC m=+6938.823131305"
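The tracker entry above is internally consistent once image pulls are accounted for: taking podStartE2EDuration as watchObservedRunningTime minus podCreationTimestamp (8.800601172s) and subtracting the pull window, lastFinishedPulling minus firstStartedPulling (5.686128492s), yields exactly podStartSLOduration=3.11447268. In other words, the SLO figure excludes time spent pulling images. A small Go check of the arithmetic, reusing the timestamps from the entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // This layout matches the Go time.Time string format the kubelet logs
        // use (the monotonic "m=+..." suffix is dropped here).
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-03-13 11:59:16 +0000 UTC")
        firstPull := parse("2026-03-13 11:59:18.718513864 +0000 UTC")
        lastPull := parse("2026-03-13 11:59:24.404642356 +0000 UTC")
        running := parse("2026-03-13 11:59:24.800601172 +0000 UTC") // watchObservedRunningTime

        e2e := running.Sub(created)        // podStartE2EDuration = 8.800601172s
        pulling := lastPull.Sub(firstPull) // image-pull window = 5.686128492s
        fmt.Println(e2e, e2e-pulling)      // prints: 8.800601172s 3.11447268s
    }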
pod="openshift-marketplace/community-operators-wsx2g" podStartSLOduration=3.11447268 podStartE2EDuration="8.800601172s" podCreationTimestamp="2026-03-13 11:59:16 +0000 UTC" firstStartedPulling="2026-03-13 11:59:18.718513864 +0000 UTC m=+6932.741043997" lastFinishedPulling="2026-03-13 11:59:24.404642356 +0000 UTC m=+6938.427172489" observedRunningTime="2026-03-13 11:59:24.796342968 +0000 UTC m=+6938.818873101" watchObservedRunningTime="2026-03-13 11:59:24.800601172 +0000 UTC m=+6938.823131305" Mar 13 11:59:26 crc kubenswrapper[4632]: I0313 11:59:26.996380 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:26 crc kubenswrapper[4632]: I0313 11:59:26.996713 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:28 crc kubenswrapper[4632]: I0313 11:59:28.053229 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wsx2g" podUID="91e35ce1-c54e-46e0-aa1f-01eed2268826" containerName="registry-server" probeResult="failure" output=< Mar 13 11:59:28 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:59:28 crc kubenswrapper[4632]: > Mar 13 11:59:38 crc kubenswrapper[4632]: I0313 11:59:38.050664 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wsx2g" podUID="91e35ce1-c54e-46e0-aa1f-01eed2268826" containerName="registry-server" probeResult="failure" output=< Mar 13 11:59:38 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:59:38 crc kubenswrapper[4632]: > Mar 13 11:59:40 crc kubenswrapper[4632]: I0313 11:59:40.461390 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 11:59:40 crc kubenswrapper[4632]: I0313 11:59:40.461718 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 11:59:48 crc kubenswrapper[4632]: I0313 11:59:48.053458 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wsx2g" podUID="91e35ce1-c54e-46e0-aa1f-01eed2268826" containerName="registry-server" probeResult="failure" output=< Mar 13 11:59:48 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 11:59:48 crc kubenswrapper[4632]: > Mar 13 11:59:57 crc kubenswrapper[4632]: I0313 11:59:57.048509 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:57 crc kubenswrapper[4632]: I0313 11:59:57.117367 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:57 crc kubenswrapper[4632]: I0313 11:59:57.302127 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wsx2g"] Mar 13 11:59:58 crc kubenswrapper[4632]: I0313 11:59:58.095478 4632 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/community-operators-wsx2g" podUID="91e35ce1-c54e-46e0-aa1f-01eed2268826" containerName="registry-server" containerID="cri-o://ac32c8bb9b1ef33627f10d2b87bef22688c20d954eb247669d8659c5d11a8b81" gracePeriod=2 Mar 13 11:59:58 crc kubenswrapper[4632]: I0313 11:59:58.585488 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:58 crc kubenswrapper[4632]: I0313 11:59:58.738904 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shzdx\" (UniqueName: \"kubernetes.io/projected/91e35ce1-c54e-46e0-aa1f-01eed2268826-kube-api-access-shzdx\") pod \"91e35ce1-c54e-46e0-aa1f-01eed2268826\" (UID: \"91e35ce1-c54e-46e0-aa1f-01eed2268826\") " Mar 13 11:59:58 crc kubenswrapper[4632]: I0313 11:59:58.739029 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91e35ce1-c54e-46e0-aa1f-01eed2268826-catalog-content\") pod \"91e35ce1-c54e-46e0-aa1f-01eed2268826\" (UID: \"91e35ce1-c54e-46e0-aa1f-01eed2268826\") " Mar 13 11:59:58 crc kubenswrapper[4632]: I0313 11:59:58.739178 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91e35ce1-c54e-46e0-aa1f-01eed2268826-utilities\") pod \"91e35ce1-c54e-46e0-aa1f-01eed2268826\" (UID: \"91e35ce1-c54e-46e0-aa1f-01eed2268826\") " Mar 13 11:59:58 crc kubenswrapper[4632]: I0313 11:59:58.740221 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91e35ce1-c54e-46e0-aa1f-01eed2268826-utilities" (OuterVolumeSpecName: "utilities") pod "91e35ce1-c54e-46e0-aa1f-01eed2268826" (UID: "91e35ce1-c54e-46e0-aa1f-01eed2268826"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:59:58 crc kubenswrapper[4632]: I0313 11:59:58.748247 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e35ce1-c54e-46e0-aa1f-01eed2268826-kube-api-access-shzdx" (OuterVolumeSpecName: "kube-api-access-shzdx") pod "91e35ce1-c54e-46e0-aa1f-01eed2268826" (UID: "91e35ce1-c54e-46e0-aa1f-01eed2268826"). InnerVolumeSpecName "kube-api-access-shzdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:59:58 crc kubenswrapper[4632]: I0313 11:59:58.801934 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91e35ce1-c54e-46e0-aa1f-01eed2268826-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91e35ce1-c54e-46e0-aa1f-01eed2268826" (UID: "91e35ce1-c54e-46e0-aa1f-01eed2268826"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:59:58 crc kubenswrapper[4632]: I0313 11:59:58.843518 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91e35ce1-c54e-46e0-aa1f-01eed2268826-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 11:59:58 crc kubenswrapper[4632]: I0313 11:59:58.844121 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shzdx\" (UniqueName: \"kubernetes.io/projected/91e35ce1-c54e-46e0-aa1f-01eed2268826-kube-api-access-shzdx\") on node \"crc\" DevicePath \"\"" Mar 13 11:59:58 crc kubenswrapper[4632]: I0313 11:59:58.844142 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91e35ce1-c54e-46e0-aa1f-01eed2268826-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 11:59:59 crc kubenswrapper[4632]: I0313 11:59:59.106346 4632 generic.go:334] "Generic (PLEG): container finished" podID="91e35ce1-c54e-46e0-aa1f-01eed2268826" containerID="ac32c8bb9b1ef33627f10d2b87bef22688c20d954eb247669d8659c5d11a8b81" exitCode=0 Mar 13 11:59:59 crc kubenswrapper[4632]: I0313 11:59:59.106402 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsx2g" event={"ID":"91e35ce1-c54e-46e0-aa1f-01eed2268826","Type":"ContainerDied","Data":"ac32c8bb9b1ef33627f10d2b87bef22688c20d954eb247669d8659c5d11a8b81"} Mar 13 11:59:59 crc kubenswrapper[4632]: I0313 11:59:59.106436 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsx2g" event={"ID":"91e35ce1-c54e-46e0-aa1f-01eed2268826","Type":"ContainerDied","Data":"64bdcb0f99dfc86dc26ea146e91dc9717e2b551a594e5de0e4aa29b35e623e4d"} Mar 13 11:59:59 crc kubenswrapper[4632]: I0313 11:59:59.106459 4632 scope.go:117] "RemoveContainer" containerID="ac32c8bb9b1ef33627f10d2b87bef22688c20d954eb247669d8659c5d11a8b81" Mar 13 11:59:59 crc kubenswrapper[4632]: I0313 11:59:59.106634 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wsx2g" Mar 13 11:59:59 crc kubenswrapper[4632]: I0313 11:59:59.138542 4632 scope.go:117] "RemoveContainer" containerID="5e4654df138739a880c0cf4bc5f6fd259cb10ae8d73f8303664c6a10dbea639a" Mar 13 11:59:59 crc kubenswrapper[4632]: I0313 11:59:59.163300 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wsx2g"] Mar 13 11:59:59 crc kubenswrapper[4632]: I0313 11:59:59.167030 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wsx2g"] Mar 13 11:59:59 crc kubenswrapper[4632]: I0313 11:59:59.175569 4632 scope.go:117] "RemoveContainer" containerID="fd80794757f43c71fc7435b850cacfdc1cc26db2bff61d8d3f5c45e9fc5f1169" Mar 13 11:59:59 crc kubenswrapper[4632]: I0313 11:59:59.212752 4632 scope.go:117] "RemoveContainer" containerID="ac32c8bb9b1ef33627f10d2b87bef22688c20d954eb247669d8659c5d11a8b81" Mar 13 11:59:59 crc kubenswrapper[4632]: E0313 11:59:59.213349 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac32c8bb9b1ef33627f10d2b87bef22688c20d954eb247669d8659c5d11a8b81\": container with ID starting with ac32c8bb9b1ef33627f10d2b87bef22688c20d954eb247669d8659c5d11a8b81 not found: ID does not exist" containerID="ac32c8bb9b1ef33627f10d2b87bef22688c20d954eb247669d8659c5d11a8b81" Mar 13 11:59:59 crc kubenswrapper[4632]: I0313 11:59:59.213429 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac32c8bb9b1ef33627f10d2b87bef22688c20d954eb247669d8659c5d11a8b81"} err="failed to get container status \"ac32c8bb9b1ef33627f10d2b87bef22688c20d954eb247669d8659c5d11a8b81\": rpc error: code = NotFound desc = could not find container \"ac32c8bb9b1ef33627f10d2b87bef22688c20d954eb247669d8659c5d11a8b81\": container with ID starting with ac32c8bb9b1ef33627f10d2b87bef22688c20d954eb247669d8659c5d11a8b81 not found: ID does not exist" Mar 13 11:59:59 crc kubenswrapper[4632]: I0313 11:59:59.213462 4632 scope.go:117] "RemoveContainer" containerID="5e4654df138739a880c0cf4bc5f6fd259cb10ae8d73f8303664c6a10dbea639a" Mar 13 11:59:59 crc kubenswrapper[4632]: E0313 11:59:59.213921 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e4654df138739a880c0cf4bc5f6fd259cb10ae8d73f8303664c6a10dbea639a\": container with ID starting with 5e4654df138739a880c0cf4bc5f6fd259cb10ae8d73f8303664c6a10dbea639a not found: ID does not exist" containerID="5e4654df138739a880c0cf4bc5f6fd259cb10ae8d73f8303664c6a10dbea639a" Mar 13 11:59:59 crc kubenswrapper[4632]: I0313 11:59:59.213976 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e4654df138739a880c0cf4bc5f6fd259cb10ae8d73f8303664c6a10dbea639a"} err="failed to get container status \"5e4654df138739a880c0cf4bc5f6fd259cb10ae8d73f8303664c6a10dbea639a\": rpc error: code = NotFound desc = could not find container \"5e4654df138739a880c0cf4bc5f6fd259cb10ae8d73f8303664c6a10dbea639a\": container with ID starting with 5e4654df138739a880c0cf4bc5f6fd259cb10ae8d73f8303664c6a10dbea639a not found: ID does not exist" Mar 13 11:59:59 crc kubenswrapper[4632]: I0313 11:59:59.214004 4632 scope.go:117] "RemoveContainer" containerID="fd80794757f43c71fc7435b850cacfdc1cc26db2bff61d8d3f5c45e9fc5f1169" Mar 13 11:59:59 crc kubenswrapper[4632]: E0313 11:59:59.214352 4632 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fd80794757f43c71fc7435b850cacfdc1cc26db2bff61d8d3f5c45e9fc5f1169\": container with ID starting with fd80794757f43c71fc7435b850cacfdc1cc26db2bff61d8d3f5c45e9fc5f1169 not found: ID does not exist" containerID="fd80794757f43c71fc7435b850cacfdc1cc26db2bff61d8d3f5c45e9fc5f1169" Mar 13 11:59:59 crc kubenswrapper[4632]: I0313 11:59:59.214438 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd80794757f43c71fc7435b850cacfdc1cc26db2bff61d8d3f5c45e9fc5f1169"} err="failed to get container status \"fd80794757f43c71fc7435b850cacfdc1cc26db2bff61d8d3f5c45e9fc5f1169\": rpc error: code = NotFound desc = could not find container \"fd80794757f43c71fc7435b850cacfdc1cc26db2bff61d8d3f5c45e9fc5f1169\": container with ID starting with fd80794757f43c71fc7435b850cacfdc1cc26db2bff61d8d3f5c45e9fc5f1169 not found: ID does not exist" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.056525 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91e35ce1-c54e-46e0-aa1f-01eed2268826" path="/var/lib/kubelet/pods/91e35ce1-c54e-46e0-aa1f-01eed2268826/volumes" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.167380 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556720-2985g"] Mar 13 12:00:00 crc kubenswrapper[4632]: E0313 12:00:00.167932 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e35ce1-c54e-46e0-aa1f-01eed2268826" containerName="extract-utilities" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.167977 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e35ce1-c54e-46e0-aa1f-01eed2268826" containerName="extract-utilities" Mar 13 12:00:00 crc kubenswrapper[4632]: E0313 12:00:00.168010 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e35ce1-c54e-46e0-aa1f-01eed2268826" containerName="registry-server" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.168022 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e35ce1-c54e-46e0-aa1f-01eed2268826" containerName="registry-server" Mar 13 12:00:00 crc kubenswrapper[4632]: E0313 12:00:00.168059 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e35ce1-c54e-46e0-aa1f-01eed2268826" containerName="extract-content" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.168068 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e35ce1-c54e-46e0-aa1f-01eed2268826" containerName="extract-content" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.170144 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e35ce1-c54e-46e0-aa1f-01eed2268826" containerName="registry-server" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.171241 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556720-2985g" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.185406 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556720-2985g"] Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.186366 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.187999 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.188093 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.271502 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7"] Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.273569 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.276188 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.276387 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.279780 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5vxf\" (UniqueName: \"kubernetes.io/projected/614f8dd7-8a57-4b22-b741-63c3ed563216-kube-api-access-z5vxf\") pod \"auto-csr-approver-29556720-2985g\" (UID: \"614f8dd7-8a57-4b22-b741-63c3ed563216\") " pod="openshift-infra/auto-csr-approver-29556720-2985g" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.295111 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7"] Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.382688 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-secret-volume\") pod \"collect-profiles-29556720-9qfd7\" (UID: \"453f2bd4-a723-4b7f-9b06-05d75e8df7b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.382972 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5vxf\" (UniqueName: \"kubernetes.io/projected/614f8dd7-8a57-4b22-b741-63c3ed563216-kube-api-access-z5vxf\") pod \"auto-csr-approver-29556720-2985g\" (UID: \"614f8dd7-8a57-4b22-b741-63c3ed563216\") " pod="openshift-infra/auto-csr-approver-29556720-2985g" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.383072 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-config-volume\") pod \"collect-profiles-29556720-9qfd7\" (UID: \"453f2bd4-a723-4b7f-9b06-05d75e8df7b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" Mar 13 12:00:00 crc kubenswrapper[4632]: 
I0313 12:00:00.383219 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmzs7\" (UniqueName: \"kubernetes.io/projected/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-kube-api-access-jmzs7\") pod \"collect-profiles-29556720-9qfd7\" (UID: \"453f2bd4-a723-4b7f-9b06-05d75e8df7b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.437364 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5vxf\" (UniqueName: \"kubernetes.io/projected/614f8dd7-8a57-4b22-b741-63c3ed563216-kube-api-access-z5vxf\") pod \"auto-csr-approver-29556720-2985g\" (UID: \"614f8dd7-8a57-4b22-b741-63c3ed563216\") " pod="openshift-infra/auto-csr-approver-29556720-2985g" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.486816 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-secret-volume\") pod \"collect-profiles-29556720-9qfd7\" (UID: \"453f2bd4-a723-4b7f-9b06-05d75e8df7b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.487051 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-config-volume\") pod \"collect-profiles-29556720-9qfd7\" (UID: \"453f2bd4-a723-4b7f-9b06-05d75e8df7b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.487210 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmzs7\" (UniqueName: \"kubernetes.io/projected/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-kube-api-access-jmzs7\") pod \"collect-profiles-29556720-9qfd7\" (UID: \"453f2bd4-a723-4b7f-9b06-05d75e8df7b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.490133 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-config-volume\") pod \"collect-profiles-29556720-9qfd7\" (UID: \"453f2bd4-a723-4b7f-9b06-05d75e8df7b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.490196 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556720-2985g" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.502222 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-secret-volume\") pod \"collect-profiles-29556720-9qfd7\" (UID: \"453f2bd4-a723-4b7f-9b06-05d75e8df7b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.533046 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmzs7\" (UniqueName: \"kubernetes.io/projected/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-kube-api-access-jmzs7\") pod \"collect-profiles-29556720-9qfd7\" (UID: \"453f2bd4-a723-4b7f-9b06-05d75e8df7b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" Mar 13 12:00:00 crc kubenswrapper[4632]: I0313 12:00:00.592174 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" Mar 13 12:00:01 crc kubenswrapper[4632]: I0313 12:00:01.365872 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7"] Mar 13 12:00:01 crc kubenswrapper[4632]: W0313 12:00:01.369024 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod453f2bd4_a723_4b7f_9b06_05d75e8df7b8.slice/crio-e9f5d693baba17da6616fa665348293847d1c5fe75db701538d10a5b75834253 WatchSource:0}: Error finding container e9f5d693baba17da6616fa665348293847d1c5fe75db701538d10a5b75834253: Status 404 returned error can't find the container with id e9f5d693baba17da6616fa665348293847d1c5fe75db701538d10a5b75834253 Mar 13 12:00:01 crc kubenswrapper[4632]: W0313 12:00:01.506527 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod614f8dd7_8a57_4b22_b741_63c3ed563216.slice/crio-9e7bb3a0a8747b8c4cac419fde52b47e802a05c159ca271aa9d70f819a6c2958 WatchSource:0}: Error finding container 9e7bb3a0a8747b8c4cac419fde52b47e802a05c159ca271aa9d70f819a6c2958: Status 404 returned error can't find the container with id 9e7bb3a0a8747b8c4cac419fde52b47e802a05c159ca271aa9d70f819a6c2958 Mar 13 12:00:01 crc kubenswrapper[4632]: I0313 12:00:01.511159 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556720-2985g"] Mar 13 12:00:02 crc kubenswrapper[4632]: I0313 12:00:02.139036 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" event={"ID":"453f2bd4-a723-4b7f-9b06-05d75e8df7b8","Type":"ContainerStarted","Data":"513ba32a6f64209e9e7a4b86369065ec16320243702d6e9f6899a7182c651338"} Mar 13 12:00:02 crc kubenswrapper[4632]: I0313 12:00:02.139421 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" event={"ID":"453f2bd4-a723-4b7f-9b06-05d75e8df7b8","Type":"ContainerStarted","Data":"e9f5d693baba17da6616fa665348293847d1c5fe75db701538d10a5b75834253"} Mar 13 12:00:02 crc kubenswrapper[4632]: I0313 12:00:02.141185 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556720-2985g" 
event={"ID":"614f8dd7-8a57-4b22-b741-63c3ed563216","Type":"ContainerStarted","Data":"9e7bb3a0a8747b8c4cac419fde52b47e802a05c159ca271aa9d70f819a6c2958"} Mar 13 12:00:02 crc kubenswrapper[4632]: I0313 12:00:02.163087 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" podStartSLOduration=2.163068723 podStartE2EDuration="2.163068723s" podCreationTimestamp="2026-03-13 12:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:00:02.161445753 +0000 UTC m=+6976.183975906" watchObservedRunningTime="2026-03-13 12:00:02.163068723 +0000 UTC m=+6976.185598856" Mar 13 12:00:03 crc kubenswrapper[4632]: I0313 12:00:03.152870 4632 generic.go:334] "Generic (PLEG): container finished" podID="453f2bd4-a723-4b7f-9b06-05d75e8df7b8" containerID="513ba32a6f64209e9e7a4b86369065ec16320243702d6e9f6899a7182c651338" exitCode=0 Mar 13 12:00:03 crc kubenswrapper[4632]: I0313 12:00:03.153231 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" event={"ID":"453f2bd4-a723-4b7f-9b06-05d75e8df7b8","Type":"ContainerDied","Data":"513ba32a6f64209e9e7a4b86369065ec16320243702d6e9f6899a7182c651338"} Mar 13 12:00:04 crc kubenswrapper[4632]: I0313 12:00:04.545418 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" Mar 13 12:00:04 crc kubenswrapper[4632]: I0313 12:00:04.600488 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-secret-volume\") pod \"453f2bd4-a723-4b7f-9b06-05d75e8df7b8\" (UID: \"453f2bd4-a723-4b7f-9b06-05d75e8df7b8\") " Mar 13 12:00:04 crc kubenswrapper[4632]: I0313 12:00:04.600555 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-config-volume\") pod \"453f2bd4-a723-4b7f-9b06-05d75e8df7b8\" (UID: \"453f2bd4-a723-4b7f-9b06-05d75e8df7b8\") " Mar 13 12:00:04 crc kubenswrapper[4632]: I0313 12:00:04.600616 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmzs7\" (UniqueName: \"kubernetes.io/projected/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-kube-api-access-jmzs7\") pod \"453f2bd4-a723-4b7f-9b06-05d75e8df7b8\" (UID: \"453f2bd4-a723-4b7f-9b06-05d75e8df7b8\") " Mar 13 12:00:04 crc kubenswrapper[4632]: I0313 12:00:04.602773 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-config-volume" (OuterVolumeSpecName: "config-volume") pod "453f2bd4-a723-4b7f-9b06-05d75e8df7b8" (UID: "453f2bd4-a723-4b7f-9b06-05d75e8df7b8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:00:04 crc kubenswrapper[4632]: I0313 12:00:04.610117 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-kube-api-access-jmzs7" (OuterVolumeSpecName: "kube-api-access-jmzs7") pod "453f2bd4-a723-4b7f-9b06-05d75e8df7b8" (UID: "453f2bd4-a723-4b7f-9b06-05d75e8df7b8"). InnerVolumeSpecName "kube-api-access-jmzs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:00:04 crc kubenswrapper[4632]: I0313 12:00:04.617893 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "453f2bd4-a723-4b7f-9b06-05d75e8df7b8" (UID: "453f2bd4-a723-4b7f-9b06-05d75e8df7b8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:00:04 crc kubenswrapper[4632]: I0313 12:00:04.703203 4632 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 13 12:00:04 crc kubenswrapper[4632]: I0313 12:00:04.703244 4632 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-config-volume\") on node \"crc\" DevicePath \"\"" Mar 13 12:00:04 crc kubenswrapper[4632]: I0313 12:00:04.703254 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmzs7\" (UniqueName: \"kubernetes.io/projected/453f2bd4-a723-4b7f-9b06-05d75e8df7b8-kube-api-access-jmzs7\") on node \"crc\" DevicePath \"\"" Mar 13 12:00:05 crc kubenswrapper[4632]: I0313 12:00:05.174862 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" event={"ID":"453f2bd4-a723-4b7f-9b06-05d75e8df7b8","Type":"ContainerDied","Data":"e9f5d693baba17da6616fa665348293847d1c5fe75db701538d10a5b75834253"} Mar 13 12:00:05 crc kubenswrapper[4632]: I0313 12:00:05.175167 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9f5d693baba17da6616fa665348293847d1c5fe75db701538d10a5b75834253" Mar 13 12:00:05 crc kubenswrapper[4632]: I0313 12:00:05.174894 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7" Mar 13 12:00:05 crc kubenswrapper[4632]: I0313 12:00:05.177193 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556720-2985g" event={"ID":"614f8dd7-8a57-4b22-b741-63c3ed563216","Type":"ContainerStarted","Data":"d1793c511542a0a35aa5afc5e36e94033f5a38dca84400ac841ea4e47dc426f7"} Mar 13 12:00:05 crc kubenswrapper[4632]: I0313 12:00:05.202746 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556720-2985g" podStartSLOduration=1.870040328 podStartE2EDuration="5.202723103s" podCreationTimestamp="2026-03-13 12:00:00 +0000 UTC" firstStartedPulling="2026-03-13 12:00:01.508786245 +0000 UTC m=+6975.531316388" lastFinishedPulling="2026-03-13 12:00:04.84146903 +0000 UTC m=+6978.863999163" observedRunningTime="2026-03-13 12:00:05.192273336 +0000 UTC m=+6979.214803469" watchObservedRunningTime="2026-03-13 12:00:05.202723103 +0000 UTC m=+6979.225253236" Mar 13 12:00:05 crc kubenswrapper[4632]: I0313 12:00:05.620532 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"] Mar 13 12:00:05 crc kubenswrapper[4632]: I0313 12:00:05.631814 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556675-n64w8"] Mar 13 12:00:06 crc kubenswrapper[4632]: I0313 12:00:06.056293 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9481bb7b-d00a-4ee1-b711-7b90d97907c1" path="/var/lib/kubelet/pods/9481bb7b-d00a-4ee1-b711-7b90d97907c1/volumes" Mar 13 12:00:07 crc kubenswrapper[4632]: I0313 12:00:07.196559 4632 generic.go:334] "Generic (PLEG): container finished" podID="614f8dd7-8a57-4b22-b741-63c3ed563216" containerID="d1793c511542a0a35aa5afc5e36e94033f5a38dca84400ac841ea4e47dc426f7" exitCode=0 Mar 13 12:00:07 crc kubenswrapper[4632]: I0313 12:00:07.196710 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556720-2985g" event={"ID":"614f8dd7-8a57-4b22-b741-63c3ed563216","Type":"ContainerDied","Data":"d1793c511542a0a35aa5afc5e36e94033f5a38dca84400ac841ea4e47dc426f7"} Mar 13 12:00:08 crc kubenswrapper[4632]: I0313 12:00:08.663205 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556720-2985g" Mar 13 12:00:08 crc kubenswrapper[4632]: I0313 12:00:08.678549 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5vxf\" (UniqueName: \"kubernetes.io/projected/614f8dd7-8a57-4b22-b741-63c3ed563216-kube-api-access-z5vxf\") pod \"614f8dd7-8a57-4b22-b741-63c3ed563216\" (UID: \"614f8dd7-8a57-4b22-b741-63c3ed563216\") " Mar 13 12:00:08 crc kubenswrapper[4632]: I0313 12:00:08.687238 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/614f8dd7-8a57-4b22-b741-63c3ed563216-kube-api-access-z5vxf" (OuterVolumeSpecName: "kube-api-access-z5vxf") pod "614f8dd7-8a57-4b22-b741-63c3ed563216" (UID: "614f8dd7-8a57-4b22-b741-63c3ed563216"). InnerVolumeSpecName "kube-api-access-z5vxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:00:08 crc kubenswrapper[4632]: I0313 12:00:08.780976 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5vxf\" (UniqueName: \"kubernetes.io/projected/614f8dd7-8a57-4b22-b741-63c3ed563216-kube-api-access-z5vxf\") on node \"crc\" DevicePath \"\"" Mar 13 12:00:09 crc kubenswrapper[4632]: I0313 12:00:09.217867 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556720-2985g" event={"ID":"614f8dd7-8a57-4b22-b741-63c3ed563216","Type":"ContainerDied","Data":"9e7bb3a0a8747b8c4cac419fde52b47e802a05c159ca271aa9d70f819a6c2958"} Mar 13 12:00:09 crc kubenswrapper[4632]: I0313 12:00:09.217908 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e7bb3a0a8747b8c4cac419fde52b47e802a05c159ca271aa9d70f819a6c2958" Mar 13 12:00:09 crc kubenswrapper[4632]: I0313 12:00:09.217964 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556720-2985g" Mar 13 12:00:09 crc kubenswrapper[4632]: I0313 12:00:09.268541 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556714-cnqc7"] Mar 13 12:00:09 crc kubenswrapper[4632]: I0313 12:00:09.277854 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556714-cnqc7"] Mar 13 12:00:10 crc kubenswrapper[4632]: I0313 12:00:10.054620 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88fb48fa-6650-4b22-b44f-d8c6f839489e" path="/var/lib/kubelet/pods/88fb48fa-6650-4b22-b44f-d8c6f839489e/volumes" Mar 13 12:00:10 crc kubenswrapper[4632]: I0313 12:00:10.461073 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:00:10 crc kubenswrapper[4632]: I0313 12:00:10.461185 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:00:10 crc kubenswrapper[4632]: I0313 12:00:10.461256 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 12:00:10 crc kubenswrapper[4632]: I0313 12:00:10.462497 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1337fb64ab38c0f489a591d3b3f173d13428642427113f1891b2f17a626304e"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 12:00:10 crc kubenswrapper[4632]: I0313 12:00:10.462583 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://f1337fb64ab38c0f489a591d3b3f173d13428642427113f1891b2f17a626304e" gracePeriod=600 Mar 13 12:00:11 crc kubenswrapper[4632]: I0313 12:00:11.254213 4632 generic.go:334] "Generic 
(PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="f1337fb64ab38c0f489a591d3b3f173d13428642427113f1891b2f17a626304e" exitCode=0 Mar 13 12:00:11 crc kubenswrapper[4632]: I0313 12:00:11.254296 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"f1337fb64ab38c0f489a591d3b3f173d13428642427113f1891b2f17a626304e"} Mar 13 12:00:11 crc kubenswrapper[4632]: I0313 12:00:11.254537 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6"} Mar 13 12:00:11 crc kubenswrapper[4632]: I0313 12:00:11.254555 4632 scope.go:117] "RemoveContainer" containerID="09f62a713fe019f208bcff213bc55f14995ec3a8014d027c3bf7cfc3b5b612e6" Mar 13 12:00:23 crc kubenswrapper[4632]: I0313 12:00:23.998478 4632 scope.go:117] "RemoveContainer" containerID="95360e41112a84b3ea4b235c3e7fd03654d6110fccc446520298cff419091ae2" Mar 13 12:00:24 crc kubenswrapper[4632]: I0313 12:00:24.031549 4632 scope.go:117] "RemoveContainer" containerID="a65145278e359710e5ff339e23940020997c56d82631e8e73d581b3ec62c80b2" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.168752 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29556721-l54tk"] Mar 13 12:01:00 crc kubenswrapper[4632]: E0313 12:01:00.169728 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="453f2bd4-a723-4b7f-9b06-05d75e8df7b8" containerName="collect-profiles" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.169744 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="453f2bd4-a723-4b7f-9b06-05d75e8df7b8" containerName="collect-profiles" Mar 13 12:01:00 crc kubenswrapper[4632]: E0313 12:01:00.169766 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614f8dd7-8a57-4b22-b741-63c3ed563216" containerName="oc" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.169773 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="614f8dd7-8a57-4b22-b741-63c3ed563216" containerName="oc" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.169992 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="453f2bd4-a723-4b7f-9b06-05d75e8df7b8" containerName="collect-profiles" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.170008 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="614f8dd7-8a57-4b22-b741-63c3ed563216" containerName="oc" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.170679 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29556721-l54tk" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.185227 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29556721-l54tk"] Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.293645 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8xzd\" (UniqueName: \"kubernetes.io/projected/c8daf4c2-f012-4d18-b11a-e666e00d6a03-kube-api-access-d8xzd\") pod \"keystone-cron-29556721-l54tk\" (UID: \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\") " pod="openstack/keystone-cron-29556721-l54tk" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.293734 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-combined-ca-bundle\") pod \"keystone-cron-29556721-l54tk\" (UID: \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\") " pod="openstack/keystone-cron-29556721-l54tk" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.293759 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-config-data\") pod \"keystone-cron-29556721-l54tk\" (UID: \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\") " pod="openstack/keystone-cron-29556721-l54tk" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.293840 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-fernet-keys\") pod \"keystone-cron-29556721-l54tk\" (UID: \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\") " pod="openstack/keystone-cron-29556721-l54tk" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.395915 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-fernet-keys\") pod \"keystone-cron-29556721-l54tk\" (UID: \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\") " pod="openstack/keystone-cron-29556721-l54tk" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.396542 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8xzd\" (UniqueName: \"kubernetes.io/projected/c8daf4c2-f012-4d18-b11a-e666e00d6a03-kube-api-access-d8xzd\") pod \"keystone-cron-29556721-l54tk\" (UID: \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\") " pod="openstack/keystone-cron-29556721-l54tk" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.396684 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-combined-ca-bundle\") pod \"keystone-cron-29556721-l54tk\" (UID: \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\") " pod="openstack/keystone-cron-29556721-l54tk" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.396789 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-config-data\") pod \"keystone-cron-29556721-l54tk\" (UID: \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\") " pod="openstack/keystone-cron-29556721-l54tk" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.403385 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-combined-ca-bundle\") pod \"keystone-cron-29556721-l54tk\" (UID: \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\") " pod="openstack/keystone-cron-29556721-l54tk" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.408393 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-fernet-keys\") pod \"keystone-cron-29556721-l54tk\" (UID: \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\") " pod="openstack/keystone-cron-29556721-l54tk" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.410118 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-config-data\") pod \"keystone-cron-29556721-l54tk\" (UID: \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\") " pod="openstack/keystone-cron-29556721-l54tk" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.422194 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8xzd\" (UniqueName: \"kubernetes.io/projected/c8daf4c2-f012-4d18-b11a-e666e00d6a03-kube-api-access-d8xzd\") pod \"keystone-cron-29556721-l54tk\" (UID: \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\") " pod="openstack/keystone-cron-29556721-l54tk" Mar 13 12:01:00 crc kubenswrapper[4632]: I0313 12:01:00.490734 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29556721-l54tk" Mar 13 12:01:01 crc kubenswrapper[4632]: I0313 12:01:01.070641 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29556721-l54tk"] Mar 13 12:01:01 crc kubenswrapper[4632]: I0313 12:01:01.541618 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29556721-l54tk" event={"ID":"c8daf4c2-f012-4d18-b11a-e666e00d6a03","Type":"ContainerStarted","Data":"0a8c37300ecb47a610879d64ee89a84f73a8a208385e27ede8cead0b63ad2bee"} Mar 13 12:01:01 crc kubenswrapper[4632]: I0313 12:01:01.542019 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29556721-l54tk" event={"ID":"c8daf4c2-f012-4d18-b11a-e666e00d6a03","Type":"ContainerStarted","Data":"0391d4900a2de8d700528f304be2206114d264ce6de429a1a1cb7769b6adf632"} Mar 13 12:01:01 crc kubenswrapper[4632]: I0313 12:01:01.563052 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29556721-l54tk" podStartSLOduration=1.563030205 podStartE2EDuration="1.563030205s" podCreationTimestamp="2026-03-13 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:01:01.557785129 +0000 UTC m=+7035.580315282" watchObservedRunningTime="2026-03-13 12:01:01.563030205 +0000 UTC m=+7035.585560338" Mar 13 12:01:08 crc kubenswrapper[4632]: I0313 12:01:08.602619 4632 generic.go:334] "Generic (PLEG): container finished" podID="c8daf4c2-f012-4d18-b11a-e666e00d6a03" containerID="0a8c37300ecb47a610879d64ee89a84f73a8a208385e27ede8cead0b63ad2bee" exitCode=0 Mar 13 12:01:08 crc kubenswrapper[4632]: I0313 12:01:08.602698 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29556721-l54tk" event={"ID":"c8daf4c2-f012-4d18-b11a-e666e00d6a03","Type":"ContainerDied","Data":"0a8c37300ecb47a610879d64ee89a84f73a8a208385e27ede8cead0b63ad2bee"} Mar 13 12:01:10 crc kubenswrapper[4632]: 
I0313 12:01:10.060825 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29556721-l54tk" Mar 13 12:01:10 crc kubenswrapper[4632]: I0313 12:01:10.193132 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-fernet-keys\") pod \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\" (UID: \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\") " Mar 13 12:01:10 crc kubenswrapper[4632]: I0313 12:01:10.193219 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8xzd\" (UniqueName: \"kubernetes.io/projected/c8daf4c2-f012-4d18-b11a-e666e00d6a03-kube-api-access-d8xzd\") pod \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\" (UID: \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\") " Mar 13 12:01:10 crc kubenswrapper[4632]: I0313 12:01:10.193356 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-combined-ca-bundle\") pod \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\" (UID: \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\") " Mar 13 12:01:10 crc kubenswrapper[4632]: I0313 12:01:10.193400 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-config-data\") pod \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\" (UID: \"c8daf4c2-f012-4d18-b11a-e666e00d6a03\") " Mar 13 12:01:10 crc kubenswrapper[4632]: I0313 12:01:10.210561 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c8daf4c2-f012-4d18-b11a-e666e00d6a03" (UID: "c8daf4c2-f012-4d18-b11a-e666e00d6a03"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:01:10 crc kubenswrapper[4632]: I0313 12:01:10.210641 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8daf4c2-f012-4d18-b11a-e666e00d6a03-kube-api-access-d8xzd" (OuterVolumeSpecName: "kube-api-access-d8xzd") pod "c8daf4c2-f012-4d18-b11a-e666e00d6a03" (UID: "c8daf4c2-f012-4d18-b11a-e666e00d6a03"). InnerVolumeSpecName "kube-api-access-d8xzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:01:10 crc kubenswrapper[4632]: I0313 12:01:10.229100 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8daf4c2-f012-4d18-b11a-e666e00d6a03" (UID: "c8daf4c2-f012-4d18-b11a-e666e00d6a03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:01:10 crc kubenswrapper[4632]: I0313 12:01:10.269181 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-config-data" (OuterVolumeSpecName: "config-data") pod "c8daf4c2-f012-4d18-b11a-e666e00d6a03" (UID: "c8daf4c2-f012-4d18-b11a-e666e00d6a03"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:01:10 crc kubenswrapper[4632]: I0313 12:01:10.299892 4632 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 13 12:01:10 crc kubenswrapper[4632]: I0313 12:01:10.300177 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8xzd\" (UniqueName: \"kubernetes.io/projected/c8daf4c2-f012-4d18-b11a-e666e00d6a03-kube-api-access-d8xzd\") on node \"crc\" DevicePath \"\"" Mar 13 12:01:10 crc kubenswrapper[4632]: I0313 12:01:10.300283 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 12:01:10 crc kubenswrapper[4632]: I0313 12:01:10.300359 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8daf4c2-f012-4d18-b11a-e666e00d6a03-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 12:01:10 crc kubenswrapper[4632]: I0313 12:01:10.628754 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29556721-l54tk" event={"ID":"c8daf4c2-f012-4d18-b11a-e666e00d6a03","Type":"ContainerDied","Data":"0391d4900a2de8d700528f304be2206114d264ce6de429a1a1cb7769b6adf632"} Mar 13 12:01:10 crc kubenswrapper[4632]: I0313 12:01:10.629042 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0391d4900a2de8d700528f304be2206114d264ce6de429a1a1cb7769b6adf632" Mar 13 12:01:10 crc kubenswrapper[4632]: I0313 12:01:10.628848 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29556721-l54tk" Mar 13 12:01:35 crc kubenswrapper[4632]: I0313 12:01:35.999169 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dgt68"] Mar 13 12:01:36 crc kubenswrapper[4632]: E0313 12:01:36.000387 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8daf4c2-f012-4d18-b11a-e666e00d6a03" containerName="keystone-cron" Mar 13 12:01:36 crc kubenswrapper[4632]: I0313 12:01:36.000411 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8daf4c2-f012-4d18-b11a-e666e00d6a03" containerName="keystone-cron" Mar 13 12:01:36 crc kubenswrapper[4632]: I0313 12:01:36.000671 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8daf4c2-f012-4d18-b11a-e666e00d6a03" containerName="keystone-cron" Mar 13 12:01:36 crc kubenswrapper[4632]: I0313 12:01:36.002647 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:01:36 crc kubenswrapper[4632]: I0313 12:01:36.009670 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dgt68"] Mar 13 12:01:36 crc kubenswrapper[4632]: I0313 12:01:36.042395 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc5cad2-2d89-458e-826f-12b47e70afd6-catalog-content\") pod \"certified-operators-dgt68\" (UID: \"ccc5cad2-2d89-458e-826f-12b47e70afd6\") " pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:01:36 crc kubenswrapper[4632]: I0313 12:01:36.042700 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc5cad2-2d89-458e-826f-12b47e70afd6-utilities\") pod \"certified-operators-dgt68\" (UID: \"ccc5cad2-2d89-458e-826f-12b47e70afd6\") " pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:01:36 crc kubenswrapper[4632]: I0313 12:01:36.043001 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6df8w\" (UniqueName: \"kubernetes.io/projected/ccc5cad2-2d89-458e-826f-12b47e70afd6-kube-api-access-6df8w\") pod \"certified-operators-dgt68\" (UID: \"ccc5cad2-2d89-458e-826f-12b47e70afd6\") " pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:01:36 crc kubenswrapper[4632]: I0313 12:01:36.145284 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc5cad2-2d89-458e-826f-12b47e70afd6-utilities\") pod \"certified-operators-dgt68\" (UID: \"ccc5cad2-2d89-458e-826f-12b47e70afd6\") " pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:01:36 crc kubenswrapper[4632]: I0313 12:01:36.145465 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6df8w\" (UniqueName: \"kubernetes.io/projected/ccc5cad2-2d89-458e-826f-12b47e70afd6-kube-api-access-6df8w\") pod \"certified-operators-dgt68\" (UID: \"ccc5cad2-2d89-458e-826f-12b47e70afd6\") " pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:01:36 crc kubenswrapper[4632]: I0313 12:01:36.145707 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc5cad2-2d89-458e-826f-12b47e70afd6-catalog-content\") pod \"certified-operators-dgt68\" (UID: \"ccc5cad2-2d89-458e-826f-12b47e70afd6\") " pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:01:36 crc kubenswrapper[4632]: I0313 12:01:36.146788 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc5cad2-2d89-458e-826f-12b47e70afd6-utilities\") pod \"certified-operators-dgt68\" (UID: \"ccc5cad2-2d89-458e-826f-12b47e70afd6\") " pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:01:36 crc kubenswrapper[4632]: I0313 12:01:36.147313 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc5cad2-2d89-458e-826f-12b47e70afd6-catalog-content\") pod \"certified-operators-dgt68\" (UID: \"ccc5cad2-2d89-458e-826f-12b47e70afd6\") " pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:01:36 crc kubenswrapper[4632]: I0313 12:01:36.167172 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6df8w\" (UniqueName: \"kubernetes.io/projected/ccc5cad2-2d89-458e-826f-12b47e70afd6-kube-api-access-6df8w\") pod \"certified-operators-dgt68\" (UID: \"ccc5cad2-2d89-458e-826f-12b47e70afd6\") " pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:01:36 crc kubenswrapper[4632]: I0313 12:01:36.325200 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:01:36 crc kubenswrapper[4632]: I0313 12:01:36.835089 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dgt68"] Mar 13 12:01:36 crc kubenswrapper[4632]: I0313 12:01:36.883659 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgt68" event={"ID":"ccc5cad2-2d89-458e-826f-12b47e70afd6","Type":"ContainerStarted","Data":"eb7530bf3d8d12731d0a9602228b54fda149b37c977f13fb24f5acdf816e2ca5"} Mar 13 12:01:37 crc kubenswrapper[4632]: I0313 12:01:37.897784 4632 generic.go:334] "Generic (PLEG): container finished" podID="ccc5cad2-2d89-458e-826f-12b47e70afd6" containerID="fe41022c2ad0a86e947222937b1ec880188ea79ad756a089f29f058200df6736" exitCode=0 Mar 13 12:01:37 crc kubenswrapper[4632]: I0313 12:01:37.897846 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgt68" event={"ID":"ccc5cad2-2d89-458e-826f-12b47e70afd6","Type":"ContainerDied","Data":"fe41022c2ad0a86e947222937b1ec880188ea79ad756a089f29f058200df6736"} Mar 13 12:01:37 crc kubenswrapper[4632]: I0313 12:01:37.901996 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 12:01:39 crc kubenswrapper[4632]: I0313 12:01:39.917705 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgt68" event={"ID":"ccc5cad2-2d89-458e-826f-12b47e70afd6","Type":"ContainerStarted","Data":"21ced36348c43768801a7eb8ff5c40cfd7107b222ad4e8d17045732a44d8e9a7"} Mar 13 12:01:42 crc kubenswrapper[4632]: I0313 12:01:42.951532 4632 generic.go:334] "Generic (PLEG): container finished" podID="ccc5cad2-2d89-458e-826f-12b47e70afd6" containerID="21ced36348c43768801a7eb8ff5c40cfd7107b222ad4e8d17045732a44d8e9a7" exitCode=0 Mar 13 12:01:42 crc kubenswrapper[4632]: I0313 12:01:42.951608 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgt68" event={"ID":"ccc5cad2-2d89-458e-826f-12b47e70afd6","Type":"ContainerDied","Data":"21ced36348c43768801a7eb8ff5c40cfd7107b222ad4e8d17045732a44d8e9a7"} Mar 13 12:01:43 crc kubenswrapper[4632]: I0313 12:01:43.964811 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgt68" event={"ID":"ccc5cad2-2d89-458e-826f-12b47e70afd6","Type":"ContainerStarted","Data":"e227e3bd9f407b1dd5a86c9decd8ab735f6ca6d90d2a001533af3524de0eecdf"} Mar 13 12:01:44 crc kubenswrapper[4632]: I0313 12:01:44.009717 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dgt68" podStartSLOduration=3.5380606930000003 podStartE2EDuration="9.009694821s" podCreationTimestamp="2026-03-13 12:01:35 +0000 UTC" firstStartedPulling="2026-03-13 12:01:37.90084969 +0000 UTC m=+7071.923379823" lastFinishedPulling="2026-03-13 12:01:43.372483818 +0000 UTC m=+7077.395013951" observedRunningTime="2026-03-13 12:01:44.003468161 +0000 UTC m=+7078.025998314" watchObservedRunningTime="2026-03-13 
12:01:44.009694821 +0000 UTC m=+7078.032224954" Mar 13 12:01:46 crc kubenswrapper[4632]: I0313 12:01:46.325841 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:01:46 crc kubenswrapper[4632]: I0313 12:01:46.325908 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:01:47 crc kubenswrapper[4632]: I0313 12:01:47.384618 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-dgt68" podUID="ccc5cad2-2d89-458e-826f-12b47e70afd6" containerName="registry-server" probeResult="failure" output=< Mar 13 12:01:47 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:01:47 crc kubenswrapper[4632]: > Mar 13 12:01:57 crc kubenswrapper[4632]: I0313 12:01:57.383391 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-dgt68" podUID="ccc5cad2-2d89-458e-826f-12b47e70afd6" containerName="registry-server" probeResult="failure" output=< Mar 13 12:01:57 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:01:57 crc kubenswrapper[4632]: > Mar 13 12:02:00 crc kubenswrapper[4632]: I0313 12:02:00.151446 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556722-r4pp4"] Mar 13 12:02:00 crc kubenswrapper[4632]: I0313 12:02:00.153466 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556722-r4pp4" Mar 13 12:02:00 crc kubenswrapper[4632]: I0313 12:02:00.155546 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:02:00 crc kubenswrapper[4632]: I0313 12:02:00.155952 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:02:00 crc kubenswrapper[4632]: I0313 12:02:00.162581 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556722-r4pp4"] Mar 13 12:02:00 crc kubenswrapper[4632]: I0313 12:02:00.163243 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:02:00 crc kubenswrapper[4632]: I0313 12:02:00.300734 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zgf5\" (UniqueName: \"kubernetes.io/projected/5ae03001-344b-4e5e-baf2-c8171109eb1a-kube-api-access-8zgf5\") pod \"auto-csr-approver-29556722-r4pp4\" (UID: \"5ae03001-344b-4e5e-baf2-c8171109eb1a\") " pod="openshift-infra/auto-csr-approver-29556722-r4pp4" Mar 13 12:02:00 crc kubenswrapper[4632]: I0313 12:02:00.402553 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zgf5\" (UniqueName: \"kubernetes.io/projected/5ae03001-344b-4e5e-baf2-c8171109eb1a-kube-api-access-8zgf5\") pod \"auto-csr-approver-29556722-r4pp4\" (UID: \"5ae03001-344b-4e5e-baf2-c8171109eb1a\") " pod="openshift-infra/auto-csr-approver-29556722-r4pp4" Mar 13 12:02:00 crc kubenswrapper[4632]: I0313 12:02:00.428408 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zgf5\" (UniqueName: \"kubernetes.io/projected/5ae03001-344b-4e5e-baf2-c8171109eb1a-kube-api-access-8zgf5\") pod \"auto-csr-approver-29556722-r4pp4\" (UID: \"5ae03001-344b-4e5e-baf2-c8171109eb1a\") " 
pod="openshift-infra/auto-csr-approver-29556722-r4pp4" Mar 13 12:02:00 crc kubenswrapper[4632]: I0313 12:02:00.477202 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556722-r4pp4" Mar 13 12:02:01 crc kubenswrapper[4632]: I0313 12:02:01.088686 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556722-r4pp4"] Mar 13 12:02:01 crc kubenswrapper[4632]: I0313 12:02:01.128192 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556722-r4pp4" event={"ID":"5ae03001-344b-4e5e-baf2-c8171109eb1a","Type":"ContainerStarted","Data":"dd78aa588ce6352a368cfc59900ce80669e0738f4b75bcdd822f3fb6256d9f3a"} Mar 13 12:02:03 crc kubenswrapper[4632]: I0313 12:02:03.151684 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556722-r4pp4" event={"ID":"5ae03001-344b-4e5e-baf2-c8171109eb1a","Type":"ContainerStarted","Data":"45cc8231b2d3d1ca8cec2f7a7da9147a3a370632b65df6ead7d138b8c0f615b3"} Mar 13 12:02:03 crc kubenswrapper[4632]: I0313 12:02:03.173233 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556722-r4pp4" podStartSLOduration=2.122412012 podStartE2EDuration="3.173210619s" podCreationTimestamp="2026-03-13 12:02:00 +0000 UTC" firstStartedPulling="2026-03-13 12:02:01.092317683 +0000 UTC m=+7095.114847816" lastFinishedPulling="2026-03-13 12:02:02.1431163 +0000 UTC m=+7096.165646423" observedRunningTime="2026-03-13 12:02:03.166893257 +0000 UTC m=+7097.189423380" watchObservedRunningTime="2026-03-13 12:02:03.173210619 +0000 UTC m=+7097.195740752" Mar 13 12:02:05 crc kubenswrapper[4632]: I0313 12:02:05.172419 4632 generic.go:334] "Generic (PLEG): container finished" podID="5ae03001-344b-4e5e-baf2-c8171109eb1a" containerID="45cc8231b2d3d1ca8cec2f7a7da9147a3a370632b65df6ead7d138b8c0f615b3" exitCode=0 Mar 13 12:02:05 crc kubenswrapper[4632]: I0313 12:02:05.172519 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556722-r4pp4" event={"ID":"5ae03001-344b-4e5e-baf2-c8171109eb1a","Type":"ContainerDied","Data":"45cc8231b2d3d1ca8cec2f7a7da9147a3a370632b65df6ead7d138b8c0f615b3"} Mar 13 12:02:06 crc kubenswrapper[4632]: I0313 12:02:06.380842 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:02:06 crc kubenswrapper[4632]: I0313 12:02:06.439302 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:02:06 crc kubenswrapper[4632]: I0313 12:02:06.970160 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556722-r4pp4" Mar 13 12:02:07 crc kubenswrapper[4632]: I0313 12:02:07.055267 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zgf5\" (UniqueName: \"kubernetes.io/projected/5ae03001-344b-4e5e-baf2-c8171109eb1a-kube-api-access-8zgf5\") pod \"5ae03001-344b-4e5e-baf2-c8171109eb1a\" (UID: \"5ae03001-344b-4e5e-baf2-c8171109eb1a\") " Mar 13 12:02:07 crc kubenswrapper[4632]: I0313 12:02:07.067319 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ae03001-344b-4e5e-baf2-c8171109eb1a-kube-api-access-8zgf5" (OuterVolumeSpecName: "kube-api-access-8zgf5") pod "5ae03001-344b-4e5e-baf2-c8171109eb1a" (UID: "5ae03001-344b-4e5e-baf2-c8171109eb1a"). InnerVolumeSpecName "kube-api-access-8zgf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:02:07 crc kubenswrapper[4632]: I0313 12:02:07.159401 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zgf5\" (UniqueName: \"kubernetes.io/projected/5ae03001-344b-4e5e-baf2-c8171109eb1a-kube-api-access-8zgf5\") on node \"crc\" DevicePath \"\"" Mar 13 12:02:07 crc kubenswrapper[4632]: I0313 12:02:07.196337 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556722-r4pp4" event={"ID":"5ae03001-344b-4e5e-baf2-c8171109eb1a","Type":"ContainerDied","Data":"dd78aa588ce6352a368cfc59900ce80669e0738f4b75bcdd822f3fb6256d9f3a"} Mar 13 12:02:07 crc kubenswrapper[4632]: I0313 12:02:07.196414 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd78aa588ce6352a368cfc59900ce80669e0738f4b75bcdd822f3fb6256d9f3a" Mar 13 12:02:07 crc kubenswrapper[4632]: I0313 12:02:07.196379 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556722-r4pp4" Mar 13 12:02:07 crc kubenswrapper[4632]: I0313 12:02:07.207264 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dgt68"] Mar 13 12:02:07 crc kubenswrapper[4632]: I0313 12:02:07.283253 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556716-2gt9h"] Mar 13 12:02:07 crc kubenswrapper[4632]: I0313 12:02:07.301414 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556716-2gt9h"] Mar 13 12:02:08 crc kubenswrapper[4632]: I0313 12:02:08.058213 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38cafee7-6e61-46de-b58b-48b8f7d41bf6" path="/var/lib/kubelet/pods/38cafee7-6e61-46de-b58b-48b8f7d41bf6/volumes" Mar 13 12:02:08 crc kubenswrapper[4632]: I0313 12:02:08.204670 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dgt68" podUID="ccc5cad2-2d89-458e-826f-12b47e70afd6" containerName="registry-server" containerID="cri-o://e227e3bd9f407b1dd5a86c9decd8ab735f6ca6d90d2a001533af3524de0eecdf" gracePeriod=2 Mar 13 12:02:08 crc kubenswrapper[4632]: I0313 12:02:08.715617 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:02:08 crc kubenswrapper[4632]: I0313 12:02:08.791174 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc5cad2-2d89-458e-826f-12b47e70afd6-utilities\") pod \"ccc5cad2-2d89-458e-826f-12b47e70afd6\" (UID: \"ccc5cad2-2d89-458e-826f-12b47e70afd6\") " Mar 13 12:02:08 crc kubenswrapper[4632]: I0313 12:02:08.791427 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6df8w\" (UniqueName: \"kubernetes.io/projected/ccc5cad2-2d89-458e-826f-12b47e70afd6-kube-api-access-6df8w\") pod \"ccc5cad2-2d89-458e-826f-12b47e70afd6\" (UID: \"ccc5cad2-2d89-458e-826f-12b47e70afd6\") " Mar 13 12:02:08 crc kubenswrapper[4632]: I0313 12:02:08.791532 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc5cad2-2d89-458e-826f-12b47e70afd6-catalog-content\") pod \"ccc5cad2-2d89-458e-826f-12b47e70afd6\" (UID: \"ccc5cad2-2d89-458e-826f-12b47e70afd6\") " Mar 13 12:02:08 crc kubenswrapper[4632]: I0313 12:02:08.801544 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccc5cad2-2d89-458e-826f-12b47e70afd6-utilities" (OuterVolumeSpecName: "utilities") pod "ccc5cad2-2d89-458e-826f-12b47e70afd6" (UID: "ccc5cad2-2d89-458e-826f-12b47e70afd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:02:08 crc kubenswrapper[4632]: I0313 12:02:08.808702 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc5cad2-2d89-458e-826f-12b47e70afd6-kube-api-access-6df8w" (OuterVolumeSpecName: "kube-api-access-6df8w") pod "ccc5cad2-2d89-458e-826f-12b47e70afd6" (UID: "ccc5cad2-2d89-458e-826f-12b47e70afd6"). InnerVolumeSpecName "kube-api-access-6df8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:02:08 crc kubenswrapper[4632]: I0313 12:02:08.898139 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc5cad2-2d89-458e-826f-12b47e70afd6-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 12:02:08 crc kubenswrapper[4632]: I0313 12:02:08.898174 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6df8w\" (UniqueName: \"kubernetes.io/projected/ccc5cad2-2d89-458e-826f-12b47e70afd6-kube-api-access-6df8w\") on node \"crc\" DevicePath \"\"" Mar 13 12:02:08 crc kubenswrapper[4632]: I0313 12:02:08.920169 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccc5cad2-2d89-458e-826f-12b47e70afd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccc5cad2-2d89-458e-826f-12b47e70afd6" (UID: "ccc5cad2-2d89-458e-826f-12b47e70afd6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:02:08 crc kubenswrapper[4632]: I0313 12:02:08.999711 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc5cad2-2d89-458e-826f-12b47e70afd6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 12:02:09 crc kubenswrapper[4632]: I0313 12:02:09.219319 4632 generic.go:334] "Generic (PLEG): container finished" podID="ccc5cad2-2d89-458e-826f-12b47e70afd6" containerID="e227e3bd9f407b1dd5a86c9decd8ab735f6ca6d90d2a001533af3524de0eecdf" exitCode=0 Mar 13 12:02:09 crc kubenswrapper[4632]: I0313 12:02:09.219390 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgt68" Mar 13 12:02:09 crc kubenswrapper[4632]: I0313 12:02:09.219423 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgt68" event={"ID":"ccc5cad2-2d89-458e-826f-12b47e70afd6","Type":"ContainerDied","Data":"e227e3bd9f407b1dd5a86c9decd8ab735f6ca6d90d2a001533af3524de0eecdf"} Mar 13 12:02:09 crc kubenswrapper[4632]: I0313 12:02:09.219881 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgt68" event={"ID":"ccc5cad2-2d89-458e-826f-12b47e70afd6","Type":"ContainerDied","Data":"eb7530bf3d8d12731d0a9602228b54fda149b37c977f13fb24f5acdf816e2ca5"} Mar 13 12:02:09 crc kubenswrapper[4632]: I0313 12:02:09.219921 4632 scope.go:117] "RemoveContainer" containerID="e227e3bd9f407b1dd5a86c9decd8ab735f6ca6d90d2a001533af3524de0eecdf" Mar 13 12:02:09 crc kubenswrapper[4632]: I0313 12:02:09.251801 4632 scope.go:117] "RemoveContainer" containerID="21ced36348c43768801a7eb8ff5c40cfd7107b222ad4e8d17045732a44d8e9a7" Mar 13 12:02:09 crc kubenswrapper[4632]: I0313 12:02:09.266038 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dgt68"] Mar 13 12:02:09 crc kubenswrapper[4632]: I0313 12:02:09.278441 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dgt68"] Mar 13 12:02:09 crc kubenswrapper[4632]: I0313 12:02:09.322130 4632 scope.go:117] "RemoveContainer" containerID="fe41022c2ad0a86e947222937b1ec880188ea79ad756a089f29f058200df6736" Mar 13 12:02:09 crc kubenswrapper[4632]: I0313 12:02:09.365375 4632 scope.go:117] "RemoveContainer" containerID="e227e3bd9f407b1dd5a86c9decd8ab735f6ca6d90d2a001533af3524de0eecdf" Mar 13 12:02:09 crc kubenswrapper[4632]: E0313 12:02:09.366117 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e227e3bd9f407b1dd5a86c9decd8ab735f6ca6d90d2a001533af3524de0eecdf\": container with ID starting with e227e3bd9f407b1dd5a86c9decd8ab735f6ca6d90d2a001533af3524de0eecdf not found: ID does not exist" containerID="e227e3bd9f407b1dd5a86c9decd8ab735f6ca6d90d2a001533af3524de0eecdf" Mar 13 12:02:09 crc kubenswrapper[4632]: I0313 12:02:09.366160 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e227e3bd9f407b1dd5a86c9decd8ab735f6ca6d90d2a001533af3524de0eecdf"} err="failed to get container status \"e227e3bd9f407b1dd5a86c9decd8ab735f6ca6d90d2a001533af3524de0eecdf\": rpc error: code = NotFound desc = could not find container \"e227e3bd9f407b1dd5a86c9decd8ab735f6ca6d90d2a001533af3524de0eecdf\": container with ID starting with e227e3bd9f407b1dd5a86c9decd8ab735f6ca6d90d2a001533af3524de0eecdf not found: ID does not exist" Mar 13 
12:02:09 crc kubenswrapper[4632]: I0313 12:02:09.366189 4632 scope.go:117] "RemoveContainer" containerID="21ced36348c43768801a7eb8ff5c40cfd7107b222ad4e8d17045732a44d8e9a7" Mar 13 12:02:09 crc kubenswrapper[4632]: E0313 12:02:09.366602 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21ced36348c43768801a7eb8ff5c40cfd7107b222ad4e8d17045732a44d8e9a7\": container with ID starting with 21ced36348c43768801a7eb8ff5c40cfd7107b222ad4e8d17045732a44d8e9a7 not found: ID does not exist" containerID="21ced36348c43768801a7eb8ff5c40cfd7107b222ad4e8d17045732a44d8e9a7" Mar 13 12:02:09 crc kubenswrapper[4632]: I0313 12:02:09.366688 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ced36348c43768801a7eb8ff5c40cfd7107b222ad4e8d17045732a44d8e9a7"} err="failed to get container status \"21ced36348c43768801a7eb8ff5c40cfd7107b222ad4e8d17045732a44d8e9a7\": rpc error: code = NotFound desc = could not find container \"21ced36348c43768801a7eb8ff5c40cfd7107b222ad4e8d17045732a44d8e9a7\": container with ID starting with 21ced36348c43768801a7eb8ff5c40cfd7107b222ad4e8d17045732a44d8e9a7 not found: ID does not exist" Mar 13 12:02:09 crc kubenswrapper[4632]: I0313 12:02:09.366730 4632 scope.go:117] "RemoveContainer" containerID="fe41022c2ad0a86e947222937b1ec880188ea79ad756a089f29f058200df6736" Mar 13 12:02:09 crc kubenswrapper[4632]: E0313 12:02:09.367156 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe41022c2ad0a86e947222937b1ec880188ea79ad756a089f29f058200df6736\": container with ID starting with fe41022c2ad0a86e947222937b1ec880188ea79ad756a089f29f058200df6736 not found: ID does not exist" containerID="fe41022c2ad0a86e947222937b1ec880188ea79ad756a089f29f058200df6736" Mar 13 12:02:09 crc kubenswrapper[4632]: I0313 12:02:09.367194 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe41022c2ad0a86e947222937b1ec880188ea79ad756a089f29f058200df6736"} err="failed to get container status \"fe41022c2ad0a86e947222937b1ec880188ea79ad756a089f29f058200df6736\": rpc error: code = NotFound desc = could not find container \"fe41022c2ad0a86e947222937b1ec880188ea79ad756a089f29f058200df6736\": container with ID starting with fe41022c2ad0a86e947222937b1ec880188ea79ad756a089f29f058200df6736 not found: ID does not exist" Mar 13 12:02:10 crc kubenswrapper[4632]: I0313 12:02:10.057550 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccc5cad2-2d89-458e-826f-12b47e70afd6" path="/var/lib/kubelet/pods/ccc5cad2-2d89-458e-826f-12b47e70afd6/volumes" Mar 13 12:02:10 crc kubenswrapper[4632]: I0313 12:02:10.461173 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:02:10 crc kubenswrapper[4632]: I0313 12:02:10.461234 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:02:24 crc kubenswrapper[4632]: I0313 12:02:24.189168 4632 scope.go:117] 
"RemoveContainer" containerID="334a9f675c9c77aba9558302bf96e3547c17123adf9873e85b3c3871bccb4465" Mar 13 12:02:40 crc kubenswrapper[4632]: I0313 12:02:40.461449 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:02:40 crc kubenswrapper[4632]: I0313 12:02:40.462183 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.193830 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qttkd"] Mar 13 12:02:55 crc kubenswrapper[4632]: E0313 12:02:55.195204 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc5cad2-2d89-458e-826f-12b47e70afd6" containerName="registry-server" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.195243 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc5cad2-2d89-458e-826f-12b47e70afd6" containerName="registry-server" Mar 13 12:02:55 crc kubenswrapper[4632]: E0313 12:02:55.195271 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc5cad2-2d89-458e-826f-12b47e70afd6" containerName="extract-utilities" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.195281 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc5cad2-2d89-458e-826f-12b47e70afd6" containerName="extract-utilities" Mar 13 12:02:55 crc kubenswrapper[4632]: E0313 12:02:55.195298 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae03001-344b-4e5e-baf2-c8171109eb1a" containerName="oc" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.195307 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae03001-344b-4e5e-baf2-c8171109eb1a" containerName="oc" Mar 13 12:02:55 crc kubenswrapper[4632]: E0313 12:02:55.195334 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc5cad2-2d89-458e-826f-12b47e70afd6" containerName="extract-content" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.195342 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc5cad2-2d89-458e-826f-12b47e70afd6" containerName="extract-content" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.195586 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ae03001-344b-4e5e-baf2-c8171109eb1a" containerName="oc" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.195613 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc5cad2-2d89-458e-826f-12b47e70afd6" containerName="registry-server" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.199720 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.217034 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qttkd"] Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.271848 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d15a7682-8687-4495-9ea5-bab97097930e-utilities\") pod \"redhat-operators-qttkd\" (UID: \"d15a7682-8687-4495-9ea5-bab97097930e\") " pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.271928 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d15a7682-8687-4495-9ea5-bab97097930e-catalog-content\") pod \"redhat-operators-qttkd\" (UID: \"d15a7682-8687-4495-9ea5-bab97097930e\") " pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.272152 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px4g4\" (UniqueName: \"kubernetes.io/projected/d15a7682-8687-4495-9ea5-bab97097930e-kube-api-access-px4g4\") pod \"redhat-operators-qttkd\" (UID: \"d15a7682-8687-4495-9ea5-bab97097930e\") " pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.373678 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px4g4\" (UniqueName: \"kubernetes.io/projected/d15a7682-8687-4495-9ea5-bab97097930e-kube-api-access-px4g4\") pod \"redhat-operators-qttkd\" (UID: \"d15a7682-8687-4495-9ea5-bab97097930e\") " pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.373840 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d15a7682-8687-4495-9ea5-bab97097930e-utilities\") pod \"redhat-operators-qttkd\" (UID: \"d15a7682-8687-4495-9ea5-bab97097930e\") " pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.373887 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d15a7682-8687-4495-9ea5-bab97097930e-catalog-content\") pod \"redhat-operators-qttkd\" (UID: \"d15a7682-8687-4495-9ea5-bab97097930e\") " pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.374422 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d15a7682-8687-4495-9ea5-bab97097930e-utilities\") pod \"redhat-operators-qttkd\" (UID: \"d15a7682-8687-4495-9ea5-bab97097930e\") " pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.374471 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d15a7682-8687-4495-9ea5-bab97097930e-catalog-content\") pod \"redhat-operators-qttkd\" (UID: \"d15a7682-8687-4495-9ea5-bab97097930e\") " pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.402584 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-px4g4\" (UniqueName: \"kubernetes.io/projected/d15a7682-8687-4495-9ea5-bab97097930e-kube-api-access-px4g4\") pod \"redhat-operators-qttkd\" (UID: \"d15a7682-8687-4495-9ea5-bab97097930e\") " pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:02:55 crc kubenswrapper[4632]: I0313 12:02:55.528442 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:02:56 crc kubenswrapper[4632]: I0313 12:02:56.151101 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qttkd"] Mar 13 12:02:56 crc kubenswrapper[4632]: I0313 12:02:56.674957 4632 generic.go:334] "Generic (PLEG): container finished" podID="d15a7682-8687-4495-9ea5-bab97097930e" containerID="28be9f7463878e855f3ddab509b5ad179e18babc64a9ca78c7b8ffe29d0be6e0" exitCode=0 Mar 13 12:02:56 crc kubenswrapper[4632]: I0313 12:02:56.675459 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qttkd" event={"ID":"d15a7682-8687-4495-9ea5-bab97097930e","Type":"ContainerDied","Data":"28be9f7463878e855f3ddab509b5ad179e18babc64a9ca78c7b8ffe29d0be6e0"} Mar 13 12:02:56 crc kubenswrapper[4632]: I0313 12:02:56.675519 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qttkd" event={"ID":"d15a7682-8687-4495-9ea5-bab97097930e","Type":"ContainerStarted","Data":"ccb1372aabc48631479f228688c95abab44ec403b778ae2c4d4db0abf3ee9d7e"} Mar 13 12:02:58 crc kubenswrapper[4632]: I0313 12:02:58.699466 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qttkd" event={"ID":"d15a7682-8687-4495-9ea5-bab97097930e","Type":"ContainerStarted","Data":"0f02737dbf81fc5b27a98d7116776423f246a75c7aa8570b9234458aedbdf876"} Mar 13 12:03:05 crc kubenswrapper[4632]: I0313 12:03:05.769010 4632 generic.go:334] "Generic (PLEG): container finished" podID="d15a7682-8687-4495-9ea5-bab97097930e" containerID="0f02737dbf81fc5b27a98d7116776423f246a75c7aa8570b9234458aedbdf876" exitCode=0 Mar 13 12:03:05 crc kubenswrapper[4632]: I0313 12:03:05.769061 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qttkd" event={"ID":"d15a7682-8687-4495-9ea5-bab97097930e","Type":"ContainerDied","Data":"0f02737dbf81fc5b27a98d7116776423f246a75c7aa8570b9234458aedbdf876"} Mar 13 12:03:06 crc kubenswrapper[4632]: I0313 12:03:06.783505 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qttkd" event={"ID":"d15a7682-8687-4495-9ea5-bab97097930e","Type":"ContainerStarted","Data":"4dfa9a954b7ca6e925f87876683c3628cf9338c717e6ef370787714a55baee80"} Mar 13 12:03:06 crc kubenswrapper[4632]: I0313 12:03:06.811784 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qttkd" podStartSLOduration=2.316522534 podStartE2EDuration="11.811758497s" podCreationTimestamp="2026-03-13 12:02:55 +0000 UTC" firstStartedPulling="2026-03-13 12:02:56.681884153 +0000 UTC m=+7150.704414286" lastFinishedPulling="2026-03-13 12:03:06.177120116 +0000 UTC m=+7160.199650249" observedRunningTime="2026-03-13 12:03:06.803395366 +0000 UTC m=+7160.825925509" watchObservedRunningTime="2026-03-13 12:03:06.811758497 +0000 UTC m=+7160.834288630" Mar 13 12:03:10 crc kubenswrapper[4632]: I0313 12:03:10.461682 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:03:10 crc kubenswrapper[4632]: I0313 12:03:10.462142 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:03:10 crc kubenswrapper[4632]: I0313 12:03:10.462191 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 12:03:10 crc kubenswrapper[4632]: I0313 12:03:10.463004 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 12:03:10 crc kubenswrapper[4632]: I0313 12:03:10.463063 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" gracePeriod=600 Mar 13 12:03:10 crc kubenswrapper[4632]: E0313 12:03:10.610295 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:03:10 crc kubenswrapper[4632]: I0313 12:03:10.822637 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" exitCode=0 Mar 13 12:03:10 crc kubenswrapper[4632]: I0313 12:03:10.822684 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6"} Mar 13 12:03:10 crc kubenswrapper[4632]: I0313 12:03:10.822723 4632 scope.go:117] "RemoveContainer" containerID="f1337fb64ab38c0f489a591d3b3f173d13428642427113f1891b2f17a626304e" Mar 13 12:03:10 crc kubenswrapper[4632]: I0313 12:03:10.823420 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:03:10 crc kubenswrapper[4632]: E0313 12:03:10.823772 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" 
Mar 13 12:03:15 crc kubenswrapper[4632]: I0313 12:03:15.533286 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:03:15 crc kubenswrapper[4632]: I0313 12:03:15.533613 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:03:16 crc kubenswrapper[4632]: I0313 12:03:16.585201 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qttkd" podUID="d15a7682-8687-4495-9ea5-bab97097930e" containerName="registry-server" probeResult="failure" output=< Mar 13 12:03:16 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:03:16 crc kubenswrapper[4632]: > Mar 13 12:03:25 crc kubenswrapper[4632]: I0313 12:03:25.044871 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:03:25 crc kubenswrapper[4632]: E0313 12:03:25.045635 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:03:26 crc kubenswrapper[4632]: I0313 12:03:26.582756 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qttkd" podUID="d15a7682-8687-4495-9ea5-bab97097930e" containerName="registry-server" probeResult="failure" output=< Mar 13 12:03:26 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:03:26 crc kubenswrapper[4632]: > Mar 13 12:03:36 crc kubenswrapper[4632]: I0313 12:03:36.578770 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qttkd" podUID="d15a7682-8687-4495-9ea5-bab97097930e" containerName="registry-server" probeResult="failure" output=< Mar 13 12:03:36 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:03:36 crc kubenswrapper[4632]: > Mar 13 12:03:40 crc kubenswrapper[4632]: I0313 12:03:40.044784 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:03:40 crc kubenswrapper[4632]: E0313 12:03:40.045723 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:03:46 crc kubenswrapper[4632]: I0313 12:03:46.580823 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qttkd" podUID="d15a7682-8687-4495-9ea5-bab97097930e" containerName="registry-server" probeResult="failure" output=< Mar 13 12:03:46 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:03:46 crc kubenswrapper[4632]: > Mar 13 12:03:53 crc kubenswrapper[4632]: I0313 12:03:53.045520 4632 scope.go:117] "RemoveContainer" 
containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:03:53 crc kubenswrapper[4632]: E0313 12:03:53.046639 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:03:55 crc kubenswrapper[4632]: I0313 12:03:55.599751 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:03:55 crc kubenswrapper[4632]: I0313 12:03:55.659220 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:03:56 crc kubenswrapper[4632]: I0313 12:03:56.376560 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qttkd"] Mar 13 12:03:57 crc kubenswrapper[4632]: I0313 12:03:57.270265 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qttkd" podUID="d15a7682-8687-4495-9ea5-bab97097930e" containerName="registry-server" containerID="cri-o://4dfa9a954b7ca6e925f87876683c3628cf9338c717e6ef370787714a55baee80" gracePeriod=2 Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.246531 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.300457 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qttkd" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.300459 4632 generic.go:334] "Generic (PLEG): container finished" podID="d15a7682-8687-4495-9ea5-bab97097930e" containerID="4dfa9a954b7ca6e925f87876683c3628cf9338c717e6ef370787714a55baee80" exitCode=0 Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.310070 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qttkd" event={"ID":"d15a7682-8687-4495-9ea5-bab97097930e","Type":"ContainerDied","Data":"4dfa9a954b7ca6e925f87876683c3628cf9338c717e6ef370787714a55baee80"} Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.310170 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qttkd" event={"ID":"d15a7682-8687-4495-9ea5-bab97097930e","Type":"ContainerDied","Data":"ccb1372aabc48631479f228688c95abab44ec403b778ae2c4d4db0abf3ee9d7e"} Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.310425 4632 scope.go:117] "RemoveContainer" containerID="4dfa9a954b7ca6e925f87876683c3628cf9338c717e6ef370787714a55baee80" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.334419 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d15a7682-8687-4495-9ea5-bab97097930e-utilities\") pod \"d15a7682-8687-4495-9ea5-bab97097930e\" (UID: \"d15a7682-8687-4495-9ea5-bab97097930e\") " Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.334576 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px4g4\" (UniqueName: \"kubernetes.io/projected/d15a7682-8687-4495-9ea5-bab97097930e-kube-api-access-px4g4\") pod \"d15a7682-8687-4495-9ea5-bab97097930e\" (UID: \"d15a7682-8687-4495-9ea5-bab97097930e\") " Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.334732 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d15a7682-8687-4495-9ea5-bab97097930e-catalog-content\") pod \"d15a7682-8687-4495-9ea5-bab97097930e\" (UID: \"d15a7682-8687-4495-9ea5-bab97097930e\") " Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.336383 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d15a7682-8687-4495-9ea5-bab97097930e-utilities" (OuterVolumeSpecName: "utilities") pod "d15a7682-8687-4495-9ea5-bab97097930e" (UID: "d15a7682-8687-4495-9ea5-bab97097930e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.341494 4632 scope.go:117] "RemoveContainer" containerID="0f02737dbf81fc5b27a98d7116776423f246a75c7aa8570b9234458aedbdf876" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.378866 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d15a7682-8687-4495-9ea5-bab97097930e-kube-api-access-px4g4" (OuterVolumeSpecName: "kube-api-access-px4g4") pod "d15a7682-8687-4495-9ea5-bab97097930e" (UID: "d15a7682-8687-4495-9ea5-bab97097930e"). InnerVolumeSpecName "kube-api-access-px4g4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.384244 4632 scope.go:117] "RemoveContainer" containerID="28be9f7463878e855f3ddab509b5ad179e18babc64a9ca78c7b8ffe29d0be6e0" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.438105 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d15a7682-8687-4495-9ea5-bab97097930e-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.438146 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px4g4\" (UniqueName: \"kubernetes.io/projected/d15a7682-8687-4495-9ea5-bab97097930e-kube-api-access-px4g4\") on node \"crc\" DevicePath \"\"" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.471062 4632 scope.go:117] "RemoveContainer" containerID="4dfa9a954b7ca6e925f87876683c3628cf9338c717e6ef370787714a55baee80" Mar 13 12:03:58 crc kubenswrapper[4632]: E0313 12:03:58.472347 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dfa9a954b7ca6e925f87876683c3628cf9338c717e6ef370787714a55baee80\": container with ID starting with 4dfa9a954b7ca6e925f87876683c3628cf9338c717e6ef370787714a55baee80 not found: ID does not exist" containerID="4dfa9a954b7ca6e925f87876683c3628cf9338c717e6ef370787714a55baee80" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.472492 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dfa9a954b7ca6e925f87876683c3628cf9338c717e6ef370787714a55baee80"} err="failed to get container status \"4dfa9a954b7ca6e925f87876683c3628cf9338c717e6ef370787714a55baee80\": rpc error: code = NotFound desc = could not find container \"4dfa9a954b7ca6e925f87876683c3628cf9338c717e6ef370787714a55baee80\": container with ID starting with 4dfa9a954b7ca6e925f87876683c3628cf9338c717e6ef370787714a55baee80 not found: ID does not exist" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.472636 4632 scope.go:117] "RemoveContainer" containerID="0f02737dbf81fc5b27a98d7116776423f246a75c7aa8570b9234458aedbdf876" Mar 13 12:03:58 crc kubenswrapper[4632]: E0313 12:03:58.473318 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f02737dbf81fc5b27a98d7116776423f246a75c7aa8570b9234458aedbdf876\": container with ID starting with 0f02737dbf81fc5b27a98d7116776423f246a75c7aa8570b9234458aedbdf876 not found: ID does not exist" containerID="0f02737dbf81fc5b27a98d7116776423f246a75c7aa8570b9234458aedbdf876" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.473366 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f02737dbf81fc5b27a98d7116776423f246a75c7aa8570b9234458aedbdf876"} err="failed to get container status \"0f02737dbf81fc5b27a98d7116776423f246a75c7aa8570b9234458aedbdf876\": rpc error: code = NotFound desc = could not find container \"0f02737dbf81fc5b27a98d7116776423f246a75c7aa8570b9234458aedbdf876\": container with ID starting with 0f02737dbf81fc5b27a98d7116776423f246a75c7aa8570b9234458aedbdf876 not found: ID does not exist" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.473412 4632 scope.go:117] "RemoveContainer" containerID="28be9f7463878e855f3ddab509b5ad179e18babc64a9ca78c7b8ffe29d0be6e0" Mar 13 12:03:58 crc kubenswrapper[4632]: E0313 12:03:58.475431 4632 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"28be9f7463878e855f3ddab509b5ad179e18babc64a9ca78c7b8ffe29d0be6e0\": container with ID starting with 28be9f7463878e855f3ddab509b5ad179e18babc64a9ca78c7b8ffe29d0be6e0 not found: ID does not exist" containerID="28be9f7463878e855f3ddab509b5ad179e18babc64a9ca78c7b8ffe29d0be6e0" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.475603 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28be9f7463878e855f3ddab509b5ad179e18babc64a9ca78c7b8ffe29d0be6e0"} err="failed to get container status \"28be9f7463878e855f3ddab509b5ad179e18babc64a9ca78c7b8ffe29d0be6e0\": rpc error: code = NotFound desc = could not find container \"28be9f7463878e855f3ddab509b5ad179e18babc64a9ca78c7b8ffe29d0be6e0\": container with ID starting with 28be9f7463878e855f3ddab509b5ad179e18babc64a9ca78c7b8ffe29d0be6e0 not found: ID does not exist" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.506159 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d15a7682-8687-4495-9ea5-bab97097930e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d15a7682-8687-4495-9ea5-bab97097930e" (UID: "d15a7682-8687-4495-9ea5-bab97097930e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.539958 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d15a7682-8687-4495-9ea5-bab97097930e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.643746 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qttkd"] Mar 13 12:03:58 crc kubenswrapper[4632]: I0313 12:03:58.652574 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qttkd"] Mar 13 12:04:00 crc kubenswrapper[4632]: I0313 12:04:00.059807 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d15a7682-8687-4495-9ea5-bab97097930e" path="/var/lib/kubelet/pods/d15a7682-8687-4495-9ea5-bab97097930e/volumes" Mar 13 12:04:00 crc kubenswrapper[4632]: I0313 12:04:00.155015 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556724-thbvk"] Mar 13 12:04:00 crc kubenswrapper[4632]: E0313 12:04:00.155522 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15a7682-8687-4495-9ea5-bab97097930e" containerName="extract-content" Mar 13 12:04:00 crc kubenswrapper[4632]: I0313 12:04:00.155547 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15a7682-8687-4495-9ea5-bab97097930e" containerName="extract-content" Mar 13 12:04:00 crc kubenswrapper[4632]: E0313 12:04:00.155583 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15a7682-8687-4495-9ea5-bab97097930e" containerName="extract-utilities" Mar 13 12:04:00 crc kubenswrapper[4632]: I0313 12:04:00.155596 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15a7682-8687-4495-9ea5-bab97097930e" containerName="extract-utilities" Mar 13 12:04:00 crc kubenswrapper[4632]: E0313 12:04:00.155613 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15a7682-8687-4495-9ea5-bab97097930e" containerName="registry-server" Mar 13 12:04:00 crc kubenswrapper[4632]: I0313 12:04:00.155621 4632 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d15a7682-8687-4495-9ea5-bab97097930e" containerName="registry-server" Mar 13 12:04:00 crc kubenswrapper[4632]: I0313 12:04:00.155882 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="d15a7682-8687-4495-9ea5-bab97097930e" containerName="registry-server" Mar 13 12:04:00 crc kubenswrapper[4632]: I0313 12:04:00.158455 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556724-thbvk" Mar 13 12:04:00 crc kubenswrapper[4632]: I0313 12:04:00.167638 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556724-thbvk"] Mar 13 12:04:00 crc kubenswrapper[4632]: I0313 12:04:00.174066 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:04:00 crc kubenswrapper[4632]: I0313 12:04:00.174549 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:04:00 crc kubenswrapper[4632]: I0313 12:04:00.176176 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:04:00 crc kubenswrapper[4632]: I0313 12:04:00.303504 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbbw5\" (UniqueName: \"kubernetes.io/projected/af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe-kube-api-access-rbbw5\") pod \"auto-csr-approver-29556724-thbvk\" (UID: \"af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe\") " pod="openshift-infra/auto-csr-approver-29556724-thbvk" Mar 13 12:04:00 crc kubenswrapper[4632]: I0313 12:04:00.405273 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbbw5\" (UniqueName: \"kubernetes.io/projected/af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe-kube-api-access-rbbw5\") pod \"auto-csr-approver-29556724-thbvk\" (UID: \"af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe\") " pod="openshift-infra/auto-csr-approver-29556724-thbvk" Mar 13 12:04:00 crc kubenswrapper[4632]: I0313 12:04:00.434717 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbbw5\" (UniqueName: \"kubernetes.io/projected/af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe-kube-api-access-rbbw5\") pod \"auto-csr-approver-29556724-thbvk\" (UID: \"af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe\") " pod="openshift-infra/auto-csr-approver-29556724-thbvk" Mar 13 12:04:00 crc kubenswrapper[4632]: I0313 12:04:00.480852 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556724-thbvk" Mar 13 12:04:01 crc kubenswrapper[4632]: I0313 12:04:01.033773 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556724-thbvk"] Mar 13 12:04:01 crc kubenswrapper[4632]: I0313 12:04:01.329767 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556724-thbvk" event={"ID":"af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe","Type":"ContainerStarted","Data":"2369d2de44ac16f350468ba3eab4c9d9e96546c32f4677ea6d5ace02630e3153"} Mar 13 12:04:03 crc kubenswrapper[4632]: I0313 12:04:03.349041 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556724-thbvk" event={"ID":"af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe","Type":"ContainerStarted","Data":"b2b91ac6b566e0c21758b3baa48d2497ca87c0016cb01ff16589e8a5fd981c2d"} Mar 13 12:04:03 crc kubenswrapper[4632]: I0313 12:04:03.365670 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556724-thbvk" podStartSLOduration=2.211404459 podStartE2EDuration="3.365649729s" podCreationTimestamp="2026-03-13 12:04:00 +0000 UTC" firstStartedPulling="2026-03-13 12:04:01.042835674 +0000 UTC m=+7215.065365807" lastFinishedPulling="2026-03-13 12:04:02.197080944 +0000 UTC m=+7216.219611077" observedRunningTime="2026-03-13 12:04:03.362891293 +0000 UTC m=+7217.385421426" watchObservedRunningTime="2026-03-13 12:04:03.365649729 +0000 UTC m=+7217.388179872" Mar 13 12:04:04 crc kubenswrapper[4632]: I0313 12:04:04.360521 4632 generic.go:334] "Generic (PLEG): container finished" podID="af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe" containerID="b2b91ac6b566e0c21758b3baa48d2497ca87c0016cb01ff16589e8a5fd981c2d" exitCode=0 Mar 13 12:04:04 crc kubenswrapper[4632]: I0313 12:04:04.360578 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556724-thbvk" event={"ID":"af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe","Type":"ContainerDied","Data":"b2b91ac6b566e0c21758b3baa48d2497ca87c0016cb01ff16589e8a5fd981c2d"} Mar 13 12:04:05 crc kubenswrapper[4632]: I0313 12:04:05.875808 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556724-thbvk" Mar 13 12:04:06 crc kubenswrapper[4632]: I0313 12:04:06.013769 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbbw5\" (UniqueName: \"kubernetes.io/projected/af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe-kube-api-access-rbbw5\") pod \"af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe\" (UID: \"af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe\") " Mar 13 12:04:06 crc kubenswrapper[4632]: I0313 12:04:06.027153 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe-kube-api-access-rbbw5" (OuterVolumeSpecName: "kube-api-access-rbbw5") pod "af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe" (UID: "af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe"). InnerVolumeSpecName "kube-api-access-rbbw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:04:06 crc kubenswrapper[4632]: I0313 12:04:06.115988 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbbw5\" (UniqueName: \"kubernetes.io/projected/af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe-kube-api-access-rbbw5\") on node \"crc\" DevicePath \"\"" Mar 13 12:04:06 crc kubenswrapper[4632]: I0313 12:04:06.380766 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556724-thbvk" event={"ID":"af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe","Type":"ContainerDied","Data":"2369d2de44ac16f350468ba3eab4c9d9e96546c32f4677ea6d5ace02630e3153"} Mar 13 12:04:06 crc kubenswrapper[4632]: I0313 12:04:06.380808 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2369d2de44ac16f350468ba3eab4c9d9e96546c32f4677ea6d5ace02630e3153" Mar 13 12:04:06 crc kubenswrapper[4632]: I0313 12:04:06.380805 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556724-thbvk" Mar 13 12:04:06 crc kubenswrapper[4632]: I0313 12:04:06.472705 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556718-xxtsr"] Mar 13 12:04:06 crc kubenswrapper[4632]: I0313 12:04:06.481290 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556718-xxtsr"] Mar 13 12:04:08 crc kubenswrapper[4632]: I0313 12:04:08.049820 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:04:08 crc kubenswrapper[4632]: E0313 12:04:08.050519 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:04:08 crc kubenswrapper[4632]: I0313 12:04:08.060368 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beb533c7-a735-47fa-b5fa-67b1bcba9787" path="/var/lib/kubelet/pods/beb533c7-a735-47fa-b5fa-67b1bcba9787/volumes" Mar 13 12:04:22 crc kubenswrapper[4632]: I0313 12:04:22.044921 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:04:22 crc kubenswrapper[4632]: E0313 12:04:22.045845 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:04:24 crc kubenswrapper[4632]: I0313 12:04:24.415912 4632 scope.go:117] "RemoveContainer" containerID="de5a0b9383a1bdabde0e1290cb2d2e2341dbc3e19f3a7e552782ac9f0501a7ce" Mar 13 12:04:37 crc kubenswrapper[4632]: I0313 12:04:37.045173 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:04:37 crc kubenswrapper[4632]: E0313 12:04:37.046359 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:04:51 crc kubenswrapper[4632]: I0313 12:04:51.044397 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:04:51 crc kubenswrapper[4632]: E0313 12:04:51.045149 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:05:06 crc kubenswrapper[4632]: I0313 12:05:06.044096 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:05:06 crc kubenswrapper[4632]: E0313 12:05:06.046173 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:05:20 crc kubenswrapper[4632]: I0313 12:05:20.043880 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:05:20 crc kubenswrapper[4632]: E0313 12:05:20.044596 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:05:35 crc kubenswrapper[4632]: I0313 12:05:35.044465 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:05:35 crc kubenswrapper[4632]: E0313 12:05:35.045660 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:05:48 crc kubenswrapper[4632]: I0313 12:05:48.050753 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:05:48 crc kubenswrapper[4632]: E0313 12:05:48.051634 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:06:00 crc kubenswrapper[4632]: I0313 12:06:00.044825 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:06:00 crc kubenswrapper[4632]: E0313 12:06:00.045561 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:06:00 crc kubenswrapper[4632]: I0313 12:06:00.163489 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556726-pjf8m"] Mar 13 12:06:00 crc kubenswrapper[4632]: E0313 12:06:00.164183 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe" containerName="oc" Mar 13 12:06:00 crc kubenswrapper[4632]: I0313 12:06:00.164212 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe" containerName="oc" Mar 13 12:06:00 crc kubenswrapper[4632]: I0313 12:06:00.164485 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe" containerName="oc" Mar 13 12:06:00 crc kubenswrapper[4632]: I0313 12:06:00.166237 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556726-pjf8m" Mar 13 12:06:00 crc kubenswrapper[4632]: I0313 12:06:00.168516 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:06:00 crc kubenswrapper[4632]: I0313 12:06:00.169161 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:06:00 crc kubenswrapper[4632]: I0313 12:06:00.174270 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:06:00 crc kubenswrapper[4632]: I0313 12:06:00.178105 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556726-pjf8m"] Mar 13 12:06:00 crc kubenswrapper[4632]: I0313 12:06:00.301740 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whhpb\" (UniqueName: \"kubernetes.io/projected/2bb028cb-1d8f-4f09-8c73-135a6ce2bb46-kube-api-access-whhpb\") pod \"auto-csr-approver-29556726-pjf8m\" (UID: \"2bb028cb-1d8f-4f09-8c73-135a6ce2bb46\") " pod="openshift-infra/auto-csr-approver-29556726-pjf8m" Mar 13 12:06:00 crc kubenswrapper[4632]: I0313 12:06:00.404479 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whhpb\" (UniqueName: \"kubernetes.io/projected/2bb028cb-1d8f-4f09-8c73-135a6ce2bb46-kube-api-access-whhpb\") pod \"auto-csr-approver-29556726-pjf8m\" (UID: \"2bb028cb-1d8f-4f09-8c73-135a6ce2bb46\") " pod="openshift-infra/auto-csr-approver-29556726-pjf8m" Mar 13 12:06:00 crc kubenswrapper[4632]: I0313 12:06:00.429122 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whhpb\" (UniqueName: 
\"kubernetes.io/projected/2bb028cb-1d8f-4f09-8c73-135a6ce2bb46-kube-api-access-whhpb\") pod \"auto-csr-approver-29556726-pjf8m\" (UID: \"2bb028cb-1d8f-4f09-8c73-135a6ce2bb46\") " pod="openshift-infra/auto-csr-approver-29556726-pjf8m" Mar 13 12:06:00 crc kubenswrapper[4632]: I0313 12:06:00.499325 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556726-pjf8m" Mar 13 12:06:01 crc kubenswrapper[4632]: I0313 12:06:01.092129 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556726-pjf8m"] Mar 13 12:06:02 crc kubenswrapper[4632]: I0313 12:06:02.013616 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556726-pjf8m" event={"ID":"2bb028cb-1d8f-4f09-8c73-135a6ce2bb46","Type":"ContainerStarted","Data":"92f020c422f37715af0272fc7b122cdd5f09f8d4eb1ed9ce0772749e38c83f9d"} Mar 13 12:06:03 crc kubenswrapper[4632]: I0313 12:06:03.026058 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556726-pjf8m" event={"ID":"2bb028cb-1d8f-4f09-8c73-135a6ce2bb46","Type":"ContainerStarted","Data":"d0bcc7d380da406a96a84a48eb87ce7d69302220aa1982823e25de22c01f77e0"} Mar 13 12:06:03 crc kubenswrapper[4632]: I0313 12:06:03.046285 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556726-pjf8m" podStartSLOduration=2.209526589 podStartE2EDuration="3.046262825s" podCreationTimestamp="2026-03-13 12:06:00 +0000 UTC" firstStartedPulling="2026-03-13 12:06:01.102623216 +0000 UTC m=+7335.125153349" lastFinishedPulling="2026-03-13 12:06:01.939359452 +0000 UTC m=+7335.961889585" observedRunningTime="2026-03-13 12:06:03.040577439 +0000 UTC m=+7337.063107592" watchObservedRunningTime="2026-03-13 12:06:03.046262825 +0000 UTC m=+7337.068792958" Mar 13 12:06:04 crc kubenswrapper[4632]: I0313 12:06:04.038342 4632 generic.go:334] "Generic (PLEG): container finished" podID="2bb028cb-1d8f-4f09-8c73-135a6ce2bb46" containerID="d0bcc7d380da406a96a84a48eb87ce7d69302220aa1982823e25de22c01f77e0" exitCode=0 Mar 13 12:06:04 crc kubenswrapper[4632]: I0313 12:06:04.038460 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556726-pjf8m" event={"ID":"2bb028cb-1d8f-4f09-8c73-135a6ce2bb46","Type":"ContainerDied","Data":"d0bcc7d380da406a96a84a48eb87ce7d69302220aa1982823e25de22c01f77e0"} Mar 13 12:06:05 crc kubenswrapper[4632]: I0313 12:06:05.564510 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556726-pjf8m" Mar 13 12:06:05 crc kubenswrapper[4632]: I0313 12:06:05.704702 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whhpb\" (UniqueName: \"kubernetes.io/projected/2bb028cb-1d8f-4f09-8c73-135a6ce2bb46-kube-api-access-whhpb\") pod \"2bb028cb-1d8f-4f09-8c73-135a6ce2bb46\" (UID: \"2bb028cb-1d8f-4f09-8c73-135a6ce2bb46\") " Mar 13 12:06:05 crc kubenswrapper[4632]: I0313 12:06:05.723330 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bb028cb-1d8f-4f09-8c73-135a6ce2bb46-kube-api-access-whhpb" (OuterVolumeSpecName: "kube-api-access-whhpb") pod "2bb028cb-1d8f-4f09-8c73-135a6ce2bb46" (UID: "2bb028cb-1d8f-4f09-8c73-135a6ce2bb46"). InnerVolumeSpecName "kube-api-access-whhpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:06:05 crc kubenswrapper[4632]: I0313 12:06:05.807211 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whhpb\" (UniqueName: \"kubernetes.io/projected/2bb028cb-1d8f-4f09-8c73-135a6ce2bb46-kube-api-access-whhpb\") on node \"crc\" DevicePath \"\"" Mar 13 12:06:06 crc kubenswrapper[4632]: I0313 12:06:06.068537 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556726-pjf8m" event={"ID":"2bb028cb-1d8f-4f09-8c73-135a6ce2bb46","Type":"ContainerDied","Data":"92f020c422f37715af0272fc7b122cdd5f09f8d4eb1ed9ce0772749e38c83f9d"} Mar 13 12:06:06 crc kubenswrapper[4632]: I0313 12:06:06.068584 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f020c422f37715af0272fc7b122cdd5f09f8d4eb1ed9ce0772749e38c83f9d" Mar 13 12:06:06 crc kubenswrapper[4632]: I0313 12:06:06.068654 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556726-pjf8m" Mar 13 12:06:06 crc kubenswrapper[4632]: I0313 12:06:06.128667 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556720-2985g"] Mar 13 12:06:06 crc kubenswrapper[4632]: I0313 12:06:06.140723 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556720-2985g"] Mar 13 12:06:08 crc kubenswrapper[4632]: I0313 12:06:08.062510 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="614f8dd7-8a57-4b22-b741-63c3ed563216" path="/var/lib/kubelet/pods/614f8dd7-8a57-4b22-b741-63c3ed563216/volumes" Mar 13 12:06:13 crc kubenswrapper[4632]: I0313 12:06:13.044250 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:06:13 crc kubenswrapper[4632]: E0313 12:06:13.045086 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:06:24 crc kubenswrapper[4632]: I0313 12:06:24.563604 4632 scope.go:117] "RemoveContainer" containerID="d1793c511542a0a35aa5afc5e36e94033f5a38dca84400ac841ea4e47dc426f7" Mar 13 12:06:25 crc kubenswrapper[4632]: I0313 12:06:25.044328 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:06:25 crc kubenswrapper[4632]: E0313 12:06:25.044640 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:06:37 crc kubenswrapper[4632]: I0313 12:06:37.044245 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:06:37 crc kubenswrapper[4632]: E0313 12:06:37.045671 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:06:52 crc kubenswrapper[4632]: I0313 12:06:52.045123 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:06:52 crc kubenswrapper[4632]: E0313 12:06:52.046211 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:07:04 crc kubenswrapper[4632]: I0313 12:07:04.045027 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:07:04 crc kubenswrapper[4632]: E0313 12:07:04.045924 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:07:15 crc kubenswrapper[4632]: I0313 12:07:15.045459 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:07:15 crc kubenswrapper[4632]: E0313 12:07:15.046433 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:07:28 crc kubenswrapper[4632]: I0313 12:07:28.055012 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:07:28 crc kubenswrapper[4632]: E0313 12:07:28.055773 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:07:40 crc kubenswrapper[4632]: I0313 12:07:40.045150 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:07:40 crc kubenswrapper[4632]: E0313 12:07:40.046055 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:07:51 crc kubenswrapper[4632]: I0313 12:07:51.044441 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:07:51 crc kubenswrapper[4632]: E0313 12:07:51.045389 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:08:00 crc kubenswrapper[4632]: I0313 12:08:00.146305 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556728-lnbz4"] Mar 13 12:08:00 crc kubenswrapper[4632]: E0313 12:08:00.147115 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bb028cb-1d8f-4f09-8c73-135a6ce2bb46" containerName="oc" Mar 13 12:08:00 crc kubenswrapper[4632]: I0313 12:08:00.147131 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bb028cb-1d8f-4f09-8c73-135a6ce2bb46" containerName="oc" Mar 13 12:08:00 crc kubenswrapper[4632]: I0313 12:08:00.147679 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bb028cb-1d8f-4f09-8c73-135a6ce2bb46" containerName="oc" Mar 13 12:08:00 crc kubenswrapper[4632]: I0313 12:08:00.148347 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556728-lnbz4" Mar 13 12:08:00 crc kubenswrapper[4632]: I0313 12:08:00.150872 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:08:00 crc kubenswrapper[4632]: I0313 12:08:00.151074 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:08:00 crc kubenswrapper[4632]: I0313 12:08:00.151153 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:08:00 crc kubenswrapper[4632]: I0313 12:08:00.167562 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556728-lnbz4"] Mar 13 12:08:00 crc kubenswrapper[4632]: I0313 12:08:00.242102 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpwzp\" (UniqueName: \"kubernetes.io/projected/4980ff16-68a2-4b11-83d2-9d8ad1fa105c-kube-api-access-wpwzp\") pod \"auto-csr-approver-29556728-lnbz4\" (UID: \"4980ff16-68a2-4b11-83d2-9d8ad1fa105c\") " pod="openshift-infra/auto-csr-approver-29556728-lnbz4" Mar 13 12:08:00 crc kubenswrapper[4632]: I0313 12:08:00.343677 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpwzp\" (UniqueName: \"kubernetes.io/projected/4980ff16-68a2-4b11-83d2-9d8ad1fa105c-kube-api-access-wpwzp\") pod \"auto-csr-approver-29556728-lnbz4\" (UID: \"4980ff16-68a2-4b11-83d2-9d8ad1fa105c\") " pod="openshift-infra/auto-csr-approver-29556728-lnbz4" Mar 13 12:08:00 crc kubenswrapper[4632]: I0313 12:08:00.366437 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpwzp\" (UniqueName: 
\"kubernetes.io/projected/4980ff16-68a2-4b11-83d2-9d8ad1fa105c-kube-api-access-wpwzp\") pod \"auto-csr-approver-29556728-lnbz4\" (UID: \"4980ff16-68a2-4b11-83d2-9d8ad1fa105c\") " pod="openshift-infra/auto-csr-approver-29556728-lnbz4" Mar 13 12:08:00 crc kubenswrapper[4632]: I0313 12:08:00.468427 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556728-lnbz4" Mar 13 12:08:01 crc kubenswrapper[4632]: I0313 12:08:01.044938 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556728-lnbz4"] Mar 13 12:08:01 crc kubenswrapper[4632]: I0313 12:08:01.055257 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 12:08:01 crc kubenswrapper[4632]: I0313 12:08:01.370230 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556728-lnbz4" event={"ID":"4980ff16-68a2-4b11-83d2-9d8ad1fa105c","Type":"ContainerStarted","Data":"28c418bff6764d978c7a7a37a355de25591b1f316732c8050beecc8e6c5b3e72"} Mar 13 12:08:02 crc kubenswrapper[4632]: I0313 12:08:02.382885 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556728-lnbz4" event={"ID":"4980ff16-68a2-4b11-83d2-9d8ad1fa105c","Type":"ContainerStarted","Data":"f67ad12914918a2b5742053c25b75fbf60ba190fadc812d9c25ad10140a8556c"} Mar 13 12:08:02 crc kubenswrapper[4632]: I0313 12:08:02.403859 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556728-lnbz4" podStartSLOduration=1.462827393 podStartE2EDuration="2.403840713s" podCreationTimestamp="2026-03-13 12:08:00 +0000 UTC" firstStartedPulling="2026-03-13 12:08:01.053303039 +0000 UTC m=+7455.075833172" lastFinishedPulling="2026-03-13 12:08:01.994316359 +0000 UTC m=+7456.016846492" observedRunningTime="2026-03-13 12:08:02.397774708 +0000 UTC m=+7456.420304841" watchObservedRunningTime="2026-03-13 12:08:02.403840713 +0000 UTC m=+7456.426370846" Mar 13 12:08:03 crc kubenswrapper[4632]: I0313 12:08:03.044612 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:08:03 crc kubenswrapper[4632]: E0313 12:08:03.044968 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:08:04 crc kubenswrapper[4632]: I0313 12:08:04.403844 4632 generic.go:334] "Generic (PLEG): container finished" podID="4980ff16-68a2-4b11-83d2-9d8ad1fa105c" containerID="f67ad12914918a2b5742053c25b75fbf60ba190fadc812d9c25ad10140a8556c" exitCode=0 Mar 13 12:08:04 crc kubenswrapper[4632]: I0313 12:08:04.404060 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556728-lnbz4" event={"ID":"4980ff16-68a2-4b11-83d2-9d8ad1fa105c","Type":"ContainerDied","Data":"f67ad12914918a2b5742053c25b75fbf60ba190fadc812d9c25ad10140a8556c"} Mar 13 12:08:05 crc kubenswrapper[4632]: I0313 12:08:05.954105 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556728-lnbz4" Mar 13 12:08:06 crc kubenswrapper[4632]: I0313 12:08:06.116870 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpwzp\" (UniqueName: \"kubernetes.io/projected/4980ff16-68a2-4b11-83d2-9d8ad1fa105c-kube-api-access-wpwzp\") pod \"4980ff16-68a2-4b11-83d2-9d8ad1fa105c\" (UID: \"4980ff16-68a2-4b11-83d2-9d8ad1fa105c\") " Mar 13 12:08:06 crc kubenswrapper[4632]: I0313 12:08:06.127131 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4980ff16-68a2-4b11-83d2-9d8ad1fa105c-kube-api-access-wpwzp" (OuterVolumeSpecName: "kube-api-access-wpwzp") pod "4980ff16-68a2-4b11-83d2-9d8ad1fa105c" (UID: "4980ff16-68a2-4b11-83d2-9d8ad1fa105c"). InnerVolumeSpecName "kube-api-access-wpwzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:08:06 crc kubenswrapper[4632]: I0313 12:08:06.219277 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpwzp\" (UniqueName: \"kubernetes.io/projected/4980ff16-68a2-4b11-83d2-9d8ad1fa105c-kube-api-access-wpwzp\") on node \"crc\" DevicePath \"\"" Mar 13 12:08:06 crc kubenswrapper[4632]: I0313 12:08:06.426340 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556728-lnbz4" event={"ID":"4980ff16-68a2-4b11-83d2-9d8ad1fa105c","Type":"ContainerDied","Data":"28c418bff6764d978c7a7a37a355de25591b1f316732c8050beecc8e6c5b3e72"} Mar 13 12:08:06 crc kubenswrapper[4632]: I0313 12:08:06.426391 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556728-lnbz4" Mar 13 12:08:06 crc kubenswrapper[4632]: I0313 12:08:06.426400 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28c418bff6764d978c7a7a37a355de25591b1f316732c8050beecc8e6c5b3e72" Mar 13 12:08:06 crc kubenswrapper[4632]: I0313 12:08:06.509714 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556722-r4pp4"] Mar 13 12:08:06 crc kubenswrapper[4632]: I0313 12:08:06.521377 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556722-r4pp4"] Mar 13 12:08:08 crc kubenswrapper[4632]: I0313 12:08:08.061700 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ae03001-344b-4e5e-baf2-c8171109eb1a" path="/var/lib/kubelet/pods/5ae03001-344b-4e5e-baf2-c8171109eb1a/volumes" Mar 13 12:08:16 crc kubenswrapper[4632]: I0313 12:08:16.047537 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:08:16 crc kubenswrapper[4632]: I0313 12:08:16.525957 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"ef6a755da94d8b26aaa61b1a356ec9030e87ec1440f6bdf1f6abec8411efbdd9"} Mar 13 12:08:24 crc kubenswrapper[4632]: I0313 12:08:24.705326 4632 scope.go:117] "RemoveContainer" containerID="45cc8231b2d3d1ca8cec2f7a7da9147a3a370632b65df6ead7d138b8c0f615b3" Mar 13 12:09:08 crc kubenswrapper[4632]: I0313 12:09:08.321089 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9c4xw"] Mar 13 12:09:08 crc kubenswrapper[4632]: E0313 12:09:08.323646 4632 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4980ff16-68a2-4b11-83d2-9d8ad1fa105c" containerName="oc" Mar 13 12:09:08 crc kubenswrapper[4632]: I0313 12:09:08.323678 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="4980ff16-68a2-4b11-83d2-9d8ad1fa105c" containerName="oc" Mar 13 12:09:08 crc kubenswrapper[4632]: I0313 12:09:08.324811 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="4980ff16-68a2-4b11-83d2-9d8ad1fa105c" containerName="oc" Mar 13 12:09:08 crc kubenswrapper[4632]: I0313 12:09:08.330258 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:08 crc kubenswrapper[4632]: I0313 12:09:08.373824 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c4xw"] Mar 13 12:09:08 crc kubenswrapper[4632]: I0313 12:09:08.431151 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96a9f61e-825e-4999-a5e5-931111334a3c-utilities\") pod \"redhat-marketplace-9c4xw\" (UID: \"96a9f61e-825e-4999-a5e5-931111334a3c\") " pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:08 crc kubenswrapper[4632]: I0313 12:09:08.431302 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96a9f61e-825e-4999-a5e5-931111334a3c-catalog-content\") pod \"redhat-marketplace-9c4xw\" (UID: \"96a9f61e-825e-4999-a5e5-931111334a3c\") " pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:08 crc kubenswrapper[4632]: I0313 12:09:08.431342 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5mc2\" (UniqueName: \"kubernetes.io/projected/96a9f61e-825e-4999-a5e5-931111334a3c-kube-api-access-x5mc2\") pod \"redhat-marketplace-9c4xw\" (UID: \"96a9f61e-825e-4999-a5e5-931111334a3c\") " pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:08 crc kubenswrapper[4632]: I0313 12:09:08.532972 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5mc2\" (UniqueName: \"kubernetes.io/projected/96a9f61e-825e-4999-a5e5-931111334a3c-kube-api-access-x5mc2\") pod \"redhat-marketplace-9c4xw\" (UID: \"96a9f61e-825e-4999-a5e5-931111334a3c\") " pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:08 crc kubenswrapper[4632]: I0313 12:09:08.533196 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96a9f61e-825e-4999-a5e5-931111334a3c-utilities\") pod \"redhat-marketplace-9c4xw\" (UID: \"96a9f61e-825e-4999-a5e5-931111334a3c\") " pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:08 crc kubenswrapper[4632]: I0313 12:09:08.533292 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96a9f61e-825e-4999-a5e5-931111334a3c-catalog-content\") pod \"redhat-marketplace-9c4xw\" (UID: \"96a9f61e-825e-4999-a5e5-931111334a3c\") " pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:08 crc kubenswrapper[4632]: I0313 12:09:08.533812 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96a9f61e-825e-4999-a5e5-931111334a3c-utilities\") pod \"redhat-marketplace-9c4xw\" (UID: \"96a9f61e-825e-4999-a5e5-931111334a3c\") " 
pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:08 crc kubenswrapper[4632]: I0313 12:09:08.533859 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96a9f61e-825e-4999-a5e5-931111334a3c-catalog-content\") pod \"redhat-marketplace-9c4xw\" (UID: \"96a9f61e-825e-4999-a5e5-931111334a3c\") " pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:08 crc kubenswrapper[4632]: I0313 12:09:08.561829 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5mc2\" (UniqueName: \"kubernetes.io/projected/96a9f61e-825e-4999-a5e5-931111334a3c-kube-api-access-x5mc2\") pod \"redhat-marketplace-9c4xw\" (UID: \"96a9f61e-825e-4999-a5e5-931111334a3c\") " pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:08 crc kubenswrapper[4632]: I0313 12:09:08.674571 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:09 crc kubenswrapper[4632]: I0313 12:09:09.241015 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c4xw"] Mar 13 12:09:10 crc kubenswrapper[4632]: I0313 12:09:10.079404 4632 generic.go:334] "Generic (PLEG): container finished" podID="96a9f61e-825e-4999-a5e5-931111334a3c" containerID="fef21df0ca08e5753cfe1eb56acc3f885fe3e8067984bbd0d17ccfaaae953502" exitCode=0 Mar 13 12:09:10 crc kubenswrapper[4632]: I0313 12:09:10.079525 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c4xw" event={"ID":"96a9f61e-825e-4999-a5e5-931111334a3c","Type":"ContainerDied","Data":"fef21df0ca08e5753cfe1eb56acc3f885fe3e8067984bbd0d17ccfaaae953502"} Mar 13 12:09:10 crc kubenswrapper[4632]: I0313 12:09:10.079934 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c4xw" event={"ID":"96a9f61e-825e-4999-a5e5-931111334a3c","Type":"ContainerStarted","Data":"1eef9aa64ac81546604be8de930dc45934789d3cb4d0dc2fb31c3f6608753b84"} Mar 13 12:09:11 crc kubenswrapper[4632]: I0313 12:09:11.092676 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c4xw" event={"ID":"96a9f61e-825e-4999-a5e5-931111334a3c","Type":"ContainerStarted","Data":"e0fecd98bfa39d9c2c53d83d664bc034530c6ba45b3a9a1361aa2d2fc97a532a"} Mar 13 12:09:13 crc kubenswrapper[4632]: I0313 12:09:13.110243 4632 generic.go:334] "Generic (PLEG): container finished" podID="96a9f61e-825e-4999-a5e5-931111334a3c" containerID="e0fecd98bfa39d9c2c53d83d664bc034530c6ba45b3a9a1361aa2d2fc97a532a" exitCode=0 Mar 13 12:09:13 crc kubenswrapper[4632]: I0313 12:09:13.110439 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c4xw" event={"ID":"96a9f61e-825e-4999-a5e5-931111334a3c","Type":"ContainerDied","Data":"e0fecd98bfa39d9c2c53d83d664bc034530c6ba45b3a9a1361aa2d2fc97a532a"} Mar 13 12:09:14 crc kubenswrapper[4632]: I0313 12:09:14.122449 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c4xw" event={"ID":"96a9f61e-825e-4999-a5e5-931111334a3c","Type":"ContainerStarted","Data":"a31594fca63f55ea4e7094a820be7740427ea837bc2d5d54f827d6587be7f8ef"} Mar 13 12:09:14 crc kubenswrapper[4632]: I0313 12:09:14.149447 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9c4xw" podStartSLOduration=2.612267403 
podStartE2EDuration="6.149415943s" podCreationTimestamp="2026-03-13 12:09:08 +0000 UTC" firstStartedPulling="2026-03-13 12:09:10.081603659 +0000 UTC m=+7524.104133792" lastFinishedPulling="2026-03-13 12:09:13.618752199 +0000 UTC m=+7527.641282332" observedRunningTime="2026-03-13 12:09:14.14388219 +0000 UTC m=+7528.166412373" watchObservedRunningTime="2026-03-13 12:09:14.149415943 +0000 UTC m=+7528.171946076" Mar 13 12:09:18 crc kubenswrapper[4632]: I0313 12:09:18.675865 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:18 crc kubenswrapper[4632]: I0313 12:09:18.676671 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:19 crc kubenswrapper[4632]: I0313 12:09:19.727931 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-9c4xw" podUID="96a9f61e-825e-4999-a5e5-931111334a3c" containerName="registry-server" probeResult="failure" output=< Mar 13 12:09:19 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:09:19 crc kubenswrapper[4632]: > Mar 13 12:09:28 crc kubenswrapper[4632]: I0313 12:09:28.730312 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:28 crc kubenswrapper[4632]: I0313 12:09:28.789011 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:28 crc kubenswrapper[4632]: I0313 12:09:28.972755 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c4xw"] Mar 13 12:09:30 crc kubenswrapper[4632]: I0313 12:09:30.258832 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9c4xw" podUID="96a9f61e-825e-4999-a5e5-931111334a3c" containerName="registry-server" containerID="cri-o://a31594fca63f55ea4e7094a820be7740427ea837bc2d5d54f827d6587be7f8ef" gracePeriod=2 Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.148833 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.181347 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96a9f61e-825e-4999-a5e5-931111334a3c-utilities\") pod \"96a9f61e-825e-4999-a5e5-931111334a3c\" (UID: \"96a9f61e-825e-4999-a5e5-931111334a3c\") " Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.181440 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5mc2\" (UniqueName: \"kubernetes.io/projected/96a9f61e-825e-4999-a5e5-931111334a3c-kube-api-access-x5mc2\") pod \"96a9f61e-825e-4999-a5e5-931111334a3c\" (UID: \"96a9f61e-825e-4999-a5e5-931111334a3c\") " Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.181462 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96a9f61e-825e-4999-a5e5-931111334a3c-catalog-content\") pod \"96a9f61e-825e-4999-a5e5-931111334a3c\" (UID: \"96a9f61e-825e-4999-a5e5-931111334a3c\") " Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.182650 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96a9f61e-825e-4999-a5e5-931111334a3c-utilities" (OuterVolumeSpecName: "utilities") pod "96a9f61e-825e-4999-a5e5-931111334a3c" (UID: "96a9f61e-825e-4999-a5e5-931111334a3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.216740 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96a9f61e-825e-4999-a5e5-931111334a3c-kube-api-access-x5mc2" (OuterVolumeSpecName: "kube-api-access-x5mc2") pod "96a9f61e-825e-4999-a5e5-931111334a3c" (UID: "96a9f61e-825e-4999-a5e5-931111334a3c"). InnerVolumeSpecName "kube-api-access-x5mc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.229136 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96a9f61e-825e-4999-a5e5-931111334a3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96a9f61e-825e-4999-a5e5-931111334a3c" (UID: "96a9f61e-825e-4999-a5e5-931111334a3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.268329 4632 generic.go:334] "Generic (PLEG): container finished" podID="96a9f61e-825e-4999-a5e5-931111334a3c" containerID="a31594fca63f55ea4e7094a820be7740427ea837bc2d5d54f827d6587be7f8ef" exitCode=0 Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.268408 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c4xw" Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.269306 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c4xw" event={"ID":"96a9f61e-825e-4999-a5e5-931111334a3c","Type":"ContainerDied","Data":"a31594fca63f55ea4e7094a820be7740427ea837bc2d5d54f827d6587be7f8ef"} Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.269403 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c4xw" event={"ID":"96a9f61e-825e-4999-a5e5-931111334a3c","Type":"ContainerDied","Data":"1eef9aa64ac81546604be8de930dc45934789d3cb4d0dc2fb31c3f6608753b84"} Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.269431 4632 scope.go:117] "RemoveContainer" containerID="a31594fca63f55ea4e7094a820be7740427ea837bc2d5d54f827d6587be7f8ef" Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.284591 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96a9f61e-825e-4999-a5e5-931111334a3c-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.284628 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5mc2\" (UniqueName: \"kubernetes.io/projected/96a9f61e-825e-4999-a5e5-931111334a3c-kube-api-access-x5mc2\") on node \"crc\" DevicePath \"\"" Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.284637 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96a9f61e-825e-4999-a5e5-931111334a3c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.325090 4632 scope.go:117] "RemoveContainer" containerID="e0fecd98bfa39d9c2c53d83d664bc034530c6ba45b3a9a1361aa2d2fc97a532a" Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.383558 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c4xw"] Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.404622 4632 scope.go:117] "RemoveContainer" containerID="fef21df0ca08e5753cfe1eb56acc3f885fe3e8067984bbd0d17ccfaaae953502" Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.420301 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c4xw"] Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.434190 4632 scope.go:117] "RemoveContainer" containerID="a31594fca63f55ea4e7094a820be7740427ea837bc2d5d54f827d6587be7f8ef" Mar 13 12:09:31 crc kubenswrapper[4632]: E0313 12:09:31.437597 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a31594fca63f55ea4e7094a820be7740427ea837bc2d5d54f827d6587be7f8ef\": container with ID starting with a31594fca63f55ea4e7094a820be7740427ea837bc2d5d54f827d6587be7f8ef not found: ID does not exist" containerID="a31594fca63f55ea4e7094a820be7740427ea837bc2d5d54f827d6587be7f8ef" Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.437638 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a31594fca63f55ea4e7094a820be7740427ea837bc2d5d54f827d6587be7f8ef"} err="failed to get container status \"a31594fca63f55ea4e7094a820be7740427ea837bc2d5d54f827d6587be7f8ef\": rpc error: code = NotFound desc = could not find container \"a31594fca63f55ea4e7094a820be7740427ea837bc2d5d54f827d6587be7f8ef\": container with ID starting with 
a31594fca63f55ea4e7094a820be7740427ea837bc2d5d54f827d6587be7f8ef not found: ID does not exist" Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.437659 4632 scope.go:117] "RemoveContainer" containerID="e0fecd98bfa39d9c2c53d83d664bc034530c6ba45b3a9a1361aa2d2fc97a532a" Mar 13 12:09:31 crc kubenswrapper[4632]: E0313 12:09:31.437876 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0fecd98bfa39d9c2c53d83d664bc034530c6ba45b3a9a1361aa2d2fc97a532a\": container with ID starting with e0fecd98bfa39d9c2c53d83d664bc034530c6ba45b3a9a1361aa2d2fc97a532a not found: ID does not exist" containerID="e0fecd98bfa39d9c2c53d83d664bc034530c6ba45b3a9a1361aa2d2fc97a532a" Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.437963 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0fecd98bfa39d9c2c53d83d664bc034530c6ba45b3a9a1361aa2d2fc97a532a"} err="failed to get container status \"e0fecd98bfa39d9c2c53d83d664bc034530c6ba45b3a9a1361aa2d2fc97a532a\": rpc error: code = NotFound desc = could not find container \"e0fecd98bfa39d9c2c53d83d664bc034530c6ba45b3a9a1361aa2d2fc97a532a\": container with ID starting with e0fecd98bfa39d9c2c53d83d664bc034530c6ba45b3a9a1361aa2d2fc97a532a not found: ID does not exist" Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.437979 4632 scope.go:117] "RemoveContainer" containerID="fef21df0ca08e5753cfe1eb56acc3f885fe3e8067984bbd0d17ccfaaae953502" Mar 13 12:09:31 crc kubenswrapper[4632]: E0313 12:09:31.438235 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fef21df0ca08e5753cfe1eb56acc3f885fe3e8067984bbd0d17ccfaaae953502\": container with ID starting with fef21df0ca08e5753cfe1eb56acc3f885fe3e8067984bbd0d17ccfaaae953502 not found: ID does not exist" containerID="fef21df0ca08e5753cfe1eb56acc3f885fe3e8067984bbd0d17ccfaaae953502" Mar 13 12:09:31 crc kubenswrapper[4632]: I0313 12:09:31.438260 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fef21df0ca08e5753cfe1eb56acc3f885fe3e8067984bbd0d17ccfaaae953502"} err="failed to get container status \"fef21df0ca08e5753cfe1eb56acc3f885fe3e8067984bbd0d17ccfaaae953502\": rpc error: code = NotFound desc = could not find container \"fef21df0ca08e5753cfe1eb56acc3f885fe3e8067984bbd0d17ccfaaae953502\": container with ID starting with fef21df0ca08e5753cfe1eb56acc3f885fe3e8067984bbd0d17ccfaaae953502 not found: ID does not exist" Mar 13 12:09:32 crc kubenswrapper[4632]: I0313 12:09:32.055091 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96a9f61e-825e-4999-a5e5-931111334a3c" path="/var/lib/kubelet/pods/96a9f61e-825e-4999-a5e5-931111334a3c/volumes" Mar 13 12:10:00 crc kubenswrapper[4632]: I0313 12:10:00.143576 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556730-txd8w"] Mar 13 12:10:00 crc kubenswrapper[4632]: E0313 12:10:00.144423 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96a9f61e-825e-4999-a5e5-931111334a3c" containerName="extract-utilities" Mar 13 12:10:00 crc kubenswrapper[4632]: I0313 12:10:00.144438 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a9f61e-825e-4999-a5e5-931111334a3c" containerName="extract-utilities" Mar 13 12:10:00 crc kubenswrapper[4632]: E0313 12:10:00.144472 4632 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="96a9f61e-825e-4999-a5e5-931111334a3c" containerName="registry-server" Mar 13 12:10:00 crc kubenswrapper[4632]: I0313 12:10:00.144479 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a9f61e-825e-4999-a5e5-931111334a3c" containerName="registry-server" Mar 13 12:10:00 crc kubenswrapper[4632]: E0313 12:10:00.144504 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96a9f61e-825e-4999-a5e5-931111334a3c" containerName="extract-content" Mar 13 12:10:00 crc kubenswrapper[4632]: I0313 12:10:00.144511 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a9f61e-825e-4999-a5e5-931111334a3c" containerName="extract-content" Mar 13 12:10:00 crc kubenswrapper[4632]: I0313 12:10:00.144679 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="96a9f61e-825e-4999-a5e5-931111334a3c" containerName="registry-server" Mar 13 12:10:00 crc kubenswrapper[4632]: I0313 12:10:00.145384 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556730-txd8w" Mar 13 12:10:00 crc kubenswrapper[4632]: I0313 12:10:00.148583 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:10:00 crc kubenswrapper[4632]: I0313 12:10:00.149200 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:10:00 crc kubenswrapper[4632]: I0313 12:10:00.149590 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:10:00 crc kubenswrapper[4632]: I0313 12:10:00.156317 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556730-txd8w"] Mar 13 12:10:00 crc kubenswrapper[4632]: I0313 12:10:00.194723 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr924\" (UniqueName: \"kubernetes.io/projected/c4d4ab40-a5ab-4b39-b19e-043766174116-kube-api-access-dr924\") pod \"auto-csr-approver-29556730-txd8w\" (UID: \"c4d4ab40-a5ab-4b39-b19e-043766174116\") " pod="openshift-infra/auto-csr-approver-29556730-txd8w" Mar 13 12:10:00 crc kubenswrapper[4632]: I0313 12:10:00.297290 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr924\" (UniqueName: \"kubernetes.io/projected/c4d4ab40-a5ab-4b39-b19e-043766174116-kube-api-access-dr924\") pod \"auto-csr-approver-29556730-txd8w\" (UID: \"c4d4ab40-a5ab-4b39-b19e-043766174116\") " pod="openshift-infra/auto-csr-approver-29556730-txd8w" Mar 13 12:10:00 crc kubenswrapper[4632]: I0313 12:10:00.317838 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr924\" (UniqueName: \"kubernetes.io/projected/c4d4ab40-a5ab-4b39-b19e-043766174116-kube-api-access-dr924\") pod \"auto-csr-approver-29556730-txd8w\" (UID: \"c4d4ab40-a5ab-4b39-b19e-043766174116\") " pod="openshift-infra/auto-csr-approver-29556730-txd8w" Mar 13 12:10:00 crc kubenswrapper[4632]: I0313 12:10:00.469220 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556730-txd8w" Mar 13 12:10:00 crc kubenswrapper[4632]: I0313 12:10:00.963359 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556730-txd8w"] Mar 13 12:10:01 crc kubenswrapper[4632]: I0313 12:10:01.549171 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556730-txd8w" event={"ID":"c4d4ab40-a5ab-4b39-b19e-043766174116","Type":"ContainerStarted","Data":"3da31d4e48d9483cf964b28af6122166fc360e9fbe694880471ddbd5aaf304be"} Mar 13 12:10:03 crc kubenswrapper[4632]: I0313 12:10:03.569696 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556730-txd8w" event={"ID":"c4d4ab40-a5ab-4b39-b19e-043766174116","Type":"ContainerStarted","Data":"47f2264b87b9b8761c8013ae1aa0f697b6d23277abd018e4664e7e2eed7771a3"} Mar 13 12:10:03 crc kubenswrapper[4632]: I0313 12:10:03.589724 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556730-txd8w" podStartSLOduration=2.226333465 podStartE2EDuration="3.589704869s" podCreationTimestamp="2026-03-13 12:10:00 +0000 UTC" firstStartedPulling="2026-03-13 12:10:00.974415129 +0000 UTC m=+7574.996945252" lastFinishedPulling="2026-03-13 12:10:02.337786523 +0000 UTC m=+7576.360316656" observedRunningTime="2026-03-13 12:10:03.586795659 +0000 UTC m=+7577.609325792" watchObservedRunningTime="2026-03-13 12:10:03.589704869 +0000 UTC m=+7577.612235002" Mar 13 12:10:04 crc kubenswrapper[4632]: I0313 12:10:04.601525 4632 generic.go:334] "Generic (PLEG): container finished" podID="c4d4ab40-a5ab-4b39-b19e-043766174116" containerID="47f2264b87b9b8761c8013ae1aa0f697b6d23277abd018e4664e7e2eed7771a3" exitCode=0 Mar 13 12:10:04 crc kubenswrapper[4632]: I0313 12:10:04.601845 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556730-txd8w" event={"ID":"c4d4ab40-a5ab-4b39-b19e-043766174116","Type":"ContainerDied","Data":"47f2264b87b9b8761c8013ae1aa0f697b6d23277abd018e4664e7e2eed7771a3"} Mar 13 12:10:05 crc kubenswrapper[4632]: I0313 12:10:05.970049 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556730-txd8w" Mar 13 12:10:06 crc kubenswrapper[4632]: I0313 12:10:06.119523 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr924\" (UniqueName: \"kubernetes.io/projected/c4d4ab40-a5ab-4b39-b19e-043766174116-kube-api-access-dr924\") pod \"c4d4ab40-a5ab-4b39-b19e-043766174116\" (UID: \"c4d4ab40-a5ab-4b39-b19e-043766174116\") " Mar 13 12:10:06 crc kubenswrapper[4632]: I0313 12:10:06.128132 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d4ab40-a5ab-4b39-b19e-043766174116-kube-api-access-dr924" (OuterVolumeSpecName: "kube-api-access-dr924") pod "c4d4ab40-a5ab-4b39-b19e-043766174116" (UID: "c4d4ab40-a5ab-4b39-b19e-043766174116"). InnerVolumeSpecName "kube-api-access-dr924". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:10:06 crc kubenswrapper[4632]: I0313 12:10:06.223060 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr924\" (UniqueName: \"kubernetes.io/projected/c4d4ab40-a5ab-4b39-b19e-043766174116-kube-api-access-dr924\") on node \"crc\" DevicePath \"\"" Mar 13 12:10:06 crc kubenswrapper[4632]: I0313 12:10:06.622873 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556730-txd8w" event={"ID":"c4d4ab40-a5ab-4b39-b19e-043766174116","Type":"ContainerDied","Data":"3da31d4e48d9483cf964b28af6122166fc360e9fbe694880471ddbd5aaf304be"} Mar 13 12:10:06 crc kubenswrapper[4632]: I0313 12:10:06.622918 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556730-txd8w" Mar 13 12:10:06 crc kubenswrapper[4632]: I0313 12:10:06.622920 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3da31d4e48d9483cf964b28af6122166fc360e9fbe694880471ddbd5aaf304be" Mar 13 12:10:06 crc kubenswrapper[4632]: I0313 12:10:06.693397 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556724-thbvk"] Mar 13 12:10:06 crc kubenswrapper[4632]: I0313 12:10:06.706517 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556724-thbvk"] Mar 13 12:10:08 crc kubenswrapper[4632]: I0313 12:10:08.056729 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe" path="/var/lib/kubelet/pods/af27fbb5-f222-4fcc-a4d0-c4b4673cf1fe/volumes" Mar 13 12:10:24 crc kubenswrapper[4632]: I0313 12:10:24.839864 4632 scope.go:117] "RemoveContainer" containerID="b2b91ac6b566e0c21758b3baa48d2497ca87c0016cb01ff16589e8a5fd981c2d" Mar 13 12:10:32 crc kubenswrapper[4632]: I0313 12:10:32.876555 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-26z26"] Mar 13 12:10:32 crc kubenswrapper[4632]: E0313 12:10:32.878092 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d4ab40-a5ab-4b39-b19e-043766174116" containerName="oc" Mar 13 12:10:32 crc kubenswrapper[4632]: I0313 12:10:32.878108 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d4ab40-a5ab-4b39-b19e-043766174116" containerName="oc" Mar 13 12:10:32 crc kubenswrapper[4632]: I0313 12:10:32.878284 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4d4ab40-a5ab-4b39-b19e-043766174116" containerName="oc" Mar 13 12:10:32 crc kubenswrapper[4632]: I0313 12:10:32.881513 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-26z26" Mar 13 12:10:32 crc kubenswrapper[4632]: I0313 12:10:32.906106 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-26z26"] Mar 13 12:10:32 crc kubenswrapper[4632]: I0313 12:10:32.956590 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-utilities\") pod \"community-operators-26z26\" (UID: \"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b\") " pod="openshift-marketplace/community-operators-26z26" Mar 13 12:10:32 crc kubenswrapper[4632]: I0313 12:10:32.956634 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-catalog-content\") pod \"community-operators-26z26\" (UID: \"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b\") " pod="openshift-marketplace/community-operators-26z26" Mar 13 12:10:32 crc kubenswrapper[4632]: I0313 12:10:32.956964 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zh92\" (UniqueName: \"kubernetes.io/projected/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-kube-api-access-5zh92\") pod \"community-operators-26z26\" (UID: \"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b\") " pod="openshift-marketplace/community-operators-26z26" Mar 13 12:10:33 crc kubenswrapper[4632]: I0313 12:10:33.059082 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zh92\" (UniqueName: \"kubernetes.io/projected/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-kube-api-access-5zh92\") pod \"community-operators-26z26\" (UID: \"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b\") " pod="openshift-marketplace/community-operators-26z26" Mar 13 12:10:33 crc kubenswrapper[4632]: I0313 12:10:33.059357 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-utilities\") pod \"community-operators-26z26\" (UID: \"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b\") " pod="openshift-marketplace/community-operators-26z26" Mar 13 12:10:33 crc kubenswrapper[4632]: I0313 12:10:33.059424 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-catalog-content\") pod \"community-operators-26z26\" (UID: \"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b\") " pod="openshift-marketplace/community-operators-26z26" Mar 13 12:10:33 crc kubenswrapper[4632]: I0313 12:10:33.060072 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-utilities\") pod \"community-operators-26z26\" (UID: \"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b\") " pod="openshift-marketplace/community-operators-26z26" Mar 13 12:10:33 crc kubenswrapper[4632]: I0313 12:10:33.060108 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-catalog-content\") pod \"community-operators-26z26\" (UID: \"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b\") " pod="openshift-marketplace/community-operators-26z26" Mar 13 12:10:33 crc kubenswrapper[4632]: I0313 12:10:33.081287 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5zh92\" (UniqueName: \"kubernetes.io/projected/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-kube-api-access-5zh92\") pod \"community-operators-26z26\" (UID: \"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b\") " pod="openshift-marketplace/community-operators-26z26" Mar 13 12:10:33 crc kubenswrapper[4632]: I0313 12:10:33.214345 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-26z26" Mar 13 12:10:33 crc kubenswrapper[4632]: I0313 12:10:33.686886 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-26z26"] Mar 13 12:10:33 crc kubenswrapper[4632]: W0313 12:10:33.696313 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7654d2d1_ef2c_4ae6_a358_d5efdfaf3c6b.slice/crio-49fadcbb7750d63803d60167fee630b03775174c0b2e88c50e45e7e7c2903030 WatchSource:0}: Error finding container 49fadcbb7750d63803d60167fee630b03775174c0b2e88c50e45e7e7c2903030: Status 404 returned error can't find the container with id 49fadcbb7750d63803d60167fee630b03775174c0b2e88c50e45e7e7c2903030 Mar 13 12:10:33 crc kubenswrapper[4632]: I0313 12:10:33.874119 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26z26" event={"ID":"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b","Type":"ContainerStarted","Data":"49fadcbb7750d63803d60167fee630b03775174c0b2e88c50e45e7e7c2903030"} Mar 13 12:10:34 crc kubenswrapper[4632]: I0313 12:10:34.894501 4632 generic.go:334] "Generic (PLEG): container finished" podID="7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" containerID="f02456bf592705883010995ba855a910e6930582529323b185cedcde26acc636" exitCode=0 Mar 13 12:10:34 crc kubenswrapper[4632]: I0313 12:10:34.894572 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26z26" event={"ID":"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b","Type":"ContainerDied","Data":"f02456bf592705883010995ba855a910e6930582529323b185cedcde26acc636"} Mar 13 12:10:35 crc kubenswrapper[4632]: I0313 12:10:35.908835 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26z26" event={"ID":"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b","Type":"ContainerStarted","Data":"856643e5316cdb3679641b711adfb80511d52a0130b5a170d00c6ac71cd987d6"} Mar 13 12:10:38 crc kubenswrapper[4632]: I0313 12:10:38.944334 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26z26" event={"ID":"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b","Type":"ContainerDied","Data":"856643e5316cdb3679641b711adfb80511d52a0130b5a170d00c6ac71cd987d6"} Mar 13 12:10:38 crc kubenswrapper[4632]: I0313 12:10:38.944337 4632 generic.go:334] "Generic (PLEG): container finished" podID="7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" containerID="856643e5316cdb3679641b711adfb80511d52a0130b5a170d00c6ac71cd987d6" exitCode=0 Mar 13 12:10:39 crc kubenswrapper[4632]: I0313 12:10:39.955308 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26z26" event={"ID":"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b","Type":"ContainerStarted","Data":"921c63ac9b57ae71823a93ed0ae4b360f16e3e68d7c9d943c870848a775d0669"} Mar 13 12:10:39 crc kubenswrapper[4632]: I0313 12:10:39.973469 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-26z26" 
podStartSLOduration=3.489528888 podStartE2EDuration="7.973452295s" podCreationTimestamp="2026-03-13 12:10:32 +0000 UTC" firstStartedPulling="2026-03-13 12:10:34.896540956 +0000 UTC m=+7608.919071089" lastFinishedPulling="2026-03-13 12:10:39.380464363 +0000 UTC m=+7613.402994496" observedRunningTime="2026-03-13 12:10:39.971248451 +0000 UTC m=+7613.993778634" watchObservedRunningTime="2026-03-13 12:10:39.973452295 +0000 UTC m=+7613.995982428" Mar 13 12:10:40 crc kubenswrapper[4632]: I0313 12:10:40.461355 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:10:40 crc kubenswrapper[4632]: I0313 12:10:40.463058 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:10:43 crc kubenswrapper[4632]: I0313 12:10:43.215168 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-26z26" Mar 13 12:10:43 crc kubenswrapper[4632]: I0313 12:10:43.215502 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-26z26" Mar 13 12:10:44 crc kubenswrapper[4632]: I0313 12:10:44.265587 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-26z26" podUID="7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" containerName="registry-server" probeResult="failure" output=< Mar 13 12:10:44 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:10:44 crc kubenswrapper[4632]: > Mar 13 12:10:54 crc kubenswrapper[4632]: I0313 12:10:54.280179 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-26z26" podUID="7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" containerName="registry-server" probeResult="failure" output=< Mar 13 12:10:54 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:10:54 crc kubenswrapper[4632]: > Mar 13 12:11:03 crc kubenswrapper[4632]: I0313 12:11:03.279908 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-26z26" Mar 13 12:11:03 crc kubenswrapper[4632]: I0313 12:11:03.338726 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-26z26" Mar 13 12:11:04 crc kubenswrapper[4632]: I0313 12:11:04.075231 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-26z26"] Mar 13 12:11:05 crc kubenswrapper[4632]: I0313 12:11:05.239627 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-26z26" podUID="7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" containerName="registry-server" containerID="cri-o://921c63ac9b57ae71823a93ed0ae4b360f16e3e68d7c9d943c870848a775d0669" gracePeriod=2 Mar 13 12:11:06 crc kubenswrapper[4632]: I0313 12:11:06.264711 4632 generic.go:334] "Generic (PLEG): container finished" podID="7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" 
containerID="921c63ac9b57ae71823a93ed0ae4b360f16e3e68d7c9d943c870848a775d0669" exitCode=0 Mar 13 12:11:06 crc kubenswrapper[4632]: I0313 12:11:06.264903 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26z26" event={"ID":"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b","Type":"ContainerDied","Data":"921c63ac9b57ae71823a93ed0ae4b360f16e3e68d7c9d943c870848a775d0669"} Mar 13 12:11:06 crc kubenswrapper[4632]: I0313 12:11:06.613446 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-26z26" Mar 13 12:11:06 crc kubenswrapper[4632]: I0313 12:11:06.764633 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-utilities\") pod \"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b\" (UID: \"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b\") " Mar 13 12:11:06 crc kubenswrapper[4632]: I0313 12:11:06.765081 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-catalog-content\") pod \"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b\" (UID: \"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b\") " Mar 13 12:11:06 crc kubenswrapper[4632]: I0313 12:11:06.765137 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zh92\" (UniqueName: \"kubernetes.io/projected/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-kube-api-access-5zh92\") pod \"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b\" (UID: \"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b\") " Mar 13 12:11:06 crc kubenswrapper[4632]: I0313 12:11:06.765602 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-utilities" (OuterVolumeSpecName: "utilities") pod "7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" (UID: "7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:11:06 crc kubenswrapper[4632]: I0313 12:11:06.765903 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 12:11:06 crc kubenswrapper[4632]: I0313 12:11:06.860222 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-kube-api-access-5zh92" (OuterVolumeSpecName: "kube-api-access-5zh92") pod "7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" (UID: "7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b"). InnerVolumeSpecName "kube-api-access-5zh92". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:11:06 crc kubenswrapper[4632]: I0313 12:11:06.868052 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zh92\" (UniqueName: \"kubernetes.io/projected/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-kube-api-access-5zh92\") on node \"crc\" DevicePath \"\"" Mar 13 12:11:07 crc kubenswrapper[4632]: I0313 12:11:07.016298 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" (UID: "7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:11:07 crc kubenswrapper[4632]: I0313 12:11:07.071793 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 12:11:07 crc kubenswrapper[4632]: I0313 12:11:07.280689 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26z26" event={"ID":"7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b","Type":"ContainerDied","Data":"49fadcbb7750d63803d60167fee630b03775174c0b2e88c50e45e7e7c2903030"} Mar 13 12:11:07 crc kubenswrapper[4632]: I0313 12:11:07.280790 4632 scope.go:117] "RemoveContainer" containerID="921c63ac9b57ae71823a93ed0ae4b360f16e3e68d7c9d943c870848a775d0669" Mar 13 12:11:07 crc kubenswrapper[4632]: I0313 12:11:07.281465 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-26z26" Mar 13 12:11:07 crc kubenswrapper[4632]: I0313 12:11:07.319385 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-26z26"] Mar 13 12:11:07 crc kubenswrapper[4632]: I0313 12:11:07.321028 4632 scope.go:117] "RemoveContainer" containerID="856643e5316cdb3679641b711adfb80511d52a0130b5a170d00c6ac71cd987d6" Mar 13 12:11:07 crc kubenswrapper[4632]: I0313 12:11:07.328655 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-26z26"] Mar 13 12:11:07 crc kubenswrapper[4632]: I0313 12:11:07.355560 4632 scope.go:117] "RemoveContainer" containerID="f02456bf592705883010995ba855a910e6930582529323b185cedcde26acc636" Mar 13 12:11:08 crc kubenswrapper[4632]: I0313 12:11:08.056226 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" path="/var/lib/kubelet/pods/7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b/volumes" Mar 13 12:11:10 crc kubenswrapper[4632]: I0313 12:11:10.469631 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:11:10 crc kubenswrapper[4632]: I0313 12:11:10.470795 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:11:40 crc kubenswrapper[4632]: I0313 12:11:40.461381 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:11:40 crc kubenswrapper[4632]: I0313 12:11:40.462023 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:11:40 crc kubenswrapper[4632]: I0313 12:11:40.462074 4632 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 12:11:40 crc kubenswrapper[4632]: I0313 12:11:40.462897 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef6a755da94d8b26aaa61b1a356ec9030e87ec1440f6bdf1f6abec8411efbdd9"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 12:11:40 crc kubenswrapper[4632]: I0313 12:11:40.463003 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://ef6a755da94d8b26aaa61b1a356ec9030e87ec1440f6bdf1f6abec8411efbdd9" gracePeriod=600 Mar 13 12:11:40 crc kubenswrapper[4632]: I0313 12:11:40.629435 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="ef6a755da94d8b26aaa61b1a356ec9030e87ec1440f6bdf1f6abec8411efbdd9" exitCode=0 Mar 13 12:11:40 crc kubenswrapper[4632]: I0313 12:11:40.629492 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"ef6a755da94d8b26aaa61b1a356ec9030e87ec1440f6bdf1f6abec8411efbdd9"} Mar 13 12:11:40 crc kubenswrapper[4632]: I0313 12:11:40.629535 4632 scope.go:117] "RemoveContainer" containerID="acc57c5b8ad70899e139ed86509fb64c5bf067344e4285c37f35406b8db0c7a6" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.640003 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9"} Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.817903 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-695f666b49-nw48z"] Mar 13 12:11:41 crc kubenswrapper[4632]: E0313 12:11:41.818392 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" containerName="extract-utilities" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.818417 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" containerName="extract-utilities" Mar 13 12:11:41 crc kubenswrapper[4632]: E0313 12:11:41.818450 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" containerName="extract-content" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.818459 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" containerName="extract-content" Mar 13 12:11:41 crc kubenswrapper[4632]: E0313 12:11:41.818479 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" containerName="registry-server" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.818487 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" containerName="registry-server" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.818732 4632 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7654d2d1-ef2c-4ae6-a358-d5efdfaf3c6b" containerName="registry-server" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.820915 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.870013 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxcff\" (UniqueName: \"kubernetes.io/projected/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-kube-api-access-mxcff\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.870088 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-ovndb-tls-certs\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.870168 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-internal-tls-certs\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.870324 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-public-tls-certs\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.870366 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-combined-ca-bundle\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.870549 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-httpd-config\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.870681 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-config\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.918536 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-695f666b49-nw48z"] Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.973293 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-config\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " 
pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.973373 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxcff\" (UniqueName: \"kubernetes.io/projected/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-kube-api-access-mxcff\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.973419 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-ovndb-tls-certs\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.973500 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-internal-tls-certs\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.973544 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-public-tls-certs\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.973570 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-combined-ca-bundle\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.973638 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-httpd-config\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.985046 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-config\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.985193 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-internal-tls-certs\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.985594 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-combined-ca-bundle\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.988616 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-public-tls-certs\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:41 crc kubenswrapper[4632]: I0313 12:11:41.989145 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-httpd-config\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:42 crc kubenswrapper[4632]: I0313 12:11:42.003363 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-ovndb-tls-certs\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:42 crc kubenswrapper[4632]: I0313 12:11:42.006686 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxcff\" (UniqueName: \"kubernetes.io/projected/3a5c1185-e64b-44a9-b4b8-0108d4e80f9a-kube-api-access-mxcff\") pod \"neutron-695f666b49-nw48z\" (UID: \"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a\") " pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:42 crc kubenswrapper[4632]: I0313 12:11:42.140734 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:43 crc kubenswrapper[4632]: I0313 12:11:43.575843 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-695f666b49-nw48z"] Mar 13 12:11:43 crc kubenswrapper[4632]: W0313 12:11:43.605085 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a5c1185_e64b_44a9_b4b8_0108d4e80f9a.slice/crio-6b30682db3693448e083a584c4d9224eba418a7b11fcf1ea70e89d376cc0e7a8 WatchSource:0}: Error finding container 6b30682db3693448e083a584c4d9224eba418a7b11fcf1ea70e89d376cc0e7a8: Status 404 returned error can't find the container with id 6b30682db3693448e083a584c4d9224eba418a7b11fcf1ea70e89d376cc0e7a8 Mar 13 12:11:43 crc kubenswrapper[4632]: I0313 12:11:43.668372 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-695f666b49-nw48z" event={"ID":"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a","Type":"ContainerStarted","Data":"6b30682db3693448e083a584c4d9224eba418a7b11fcf1ea70e89d376cc0e7a8"} Mar 13 12:11:44 crc kubenswrapper[4632]: I0313 12:11:44.687000 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-695f666b49-nw48z" event={"ID":"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a","Type":"ContainerStarted","Data":"c42a0ba5071954445f9abf4a2ace1075becd42a3374534a7de730140449c8869"} Mar 13 12:11:44 crc kubenswrapper[4632]: I0313 12:11:44.687663 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-695f666b49-nw48z" Mar 13 12:11:44 crc kubenswrapper[4632]: I0313 12:11:44.687679 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-695f666b49-nw48z" event={"ID":"3a5c1185-e64b-44a9-b4b8-0108d4e80f9a","Type":"ContainerStarted","Data":"e884f11f8966438c8d4d0b48555144b504cc8fbbd1a49c1f1457de8f39a82ad3"} Mar 13 12:12:00 crc kubenswrapper[4632]: I0313 12:12:00.166565 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-695f666b49-nw48z" podStartSLOduration=19.166541269 podStartE2EDuration="19.166541269s" podCreationTimestamp="2026-03-13 12:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:11:44.710797102 +0000 UTC m=+7678.733327245" watchObservedRunningTime="2026-03-13 12:12:00.166541269 +0000 UTC m=+7694.189071402" Mar 13 12:12:00 crc kubenswrapper[4632]: I0313 12:12:00.169863 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556732-zffdj"] Mar 13 12:12:00 crc kubenswrapper[4632]: I0313 12:12:00.171431 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556732-zffdj" Mar 13 12:12:00 crc kubenswrapper[4632]: I0313 12:12:00.178346 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:12:00 crc kubenswrapper[4632]: I0313 12:12:00.178574 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:12:00 crc kubenswrapper[4632]: I0313 12:12:00.180696 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556732-zffdj"] Mar 13 12:12:00 crc kubenswrapper[4632]: I0313 12:12:00.181258 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:12:00 crc kubenswrapper[4632]: I0313 12:12:00.249511 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24l62\" (UniqueName: \"kubernetes.io/projected/69547021-fae1-4ad6-8745-c327bb079dce-kube-api-access-24l62\") pod \"auto-csr-approver-29556732-zffdj\" (UID: \"69547021-fae1-4ad6-8745-c327bb079dce\") " pod="openshift-infra/auto-csr-approver-29556732-zffdj" Mar 13 12:12:00 crc kubenswrapper[4632]: I0313 12:12:00.351855 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24l62\" (UniqueName: \"kubernetes.io/projected/69547021-fae1-4ad6-8745-c327bb079dce-kube-api-access-24l62\") pod \"auto-csr-approver-29556732-zffdj\" (UID: \"69547021-fae1-4ad6-8745-c327bb079dce\") " pod="openshift-infra/auto-csr-approver-29556732-zffdj" Mar 13 12:12:00 crc kubenswrapper[4632]: I0313 12:12:00.384849 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24l62\" (UniqueName: \"kubernetes.io/projected/69547021-fae1-4ad6-8745-c327bb079dce-kube-api-access-24l62\") pod \"auto-csr-approver-29556732-zffdj\" (UID: \"69547021-fae1-4ad6-8745-c327bb079dce\") " pod="openshift-infra/auto-csr-approver-29556732-zffdj" Mar 13 12:12:00 crc kubenswrapper[4632]: I0313 12:12:00.498666 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556732-zffdj" Mar 13 12:12:01 crc kubenswrapper[4632]: I0313 12:12:01.251616 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556732-zffdj"] Mar 13 12:12:01 crc kubenswrapper[4632]: W0313 12:12:01.264171 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69547021_fae1_4ad6_8745_c327bb079dce.slice/crio-b661ff26ad7f8831aa3ef67c981c437bbf2ba8ab70664de65931cf5f2426bdcd WatchSource:0}: Error finding container b661ff26ad7f8831aa3ef67c981c437bbf2ba8ab70664de65931cf5f2426bdcd: Status 404 returned error can't find the container with id b661ff26ad7f8831aa3ef67c981c437bbf2ba8ab70664de65931cf5f2426bdcd Mar 13 12:12:01 crc kubenswrapper[4632]: I0313 12:12:01.870279 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556732-zffdj" event={"ID":"69547021-fae1-4ad6-8745-c327bb079dce","Type":"ContainerStarted","Data":"b661ff26ad7f8831aa3ef67c981c437bbf2ba8ab70664de65931cf5f2426bdcd"} Mar 13 12:12:03 crc kubenswrapper[4632]: I0313 12:12:03.893925 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556732-zffdj" event={"ID":"69547021-fae1-4ad6-8745-c327bb079dce","Type":"ContainerStarted","Data":"e6be43aa992650e79f391597a8fccb4cc829615f369f4902627c5d5e92b6ab1e"} Mar 13 12:12:03 crc kubenswrapper[4632]: I0313 12:12:03.913604 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556732-zffdj" podStartSLOduration=2.966690912 podStartE2EDuration="3.913577671s" podCreationTimestamp="2026-03-13 12:12:00 +0000 UTC" firstStartedPulling="2026-03-13 12:12:01.265992751 +0000 UTC m=+7695.288522884" lastFinishedPulling="2026-03-13 12:12:02.21287951 +0000 UTC m=+7696.235409643" observedRunningTime="2026-03-13 12:12:03.909043649 +0000 UTC m=+7697.931573802" watchObservedRunningTime="2026-03-13 12:12:03.913577671 +0000 UTC m=+7697.936107804" Mar 13 12:12:04 crc kubenswrapper[4632]: I0313 12:12:04.903540 4632 generic.go:334] "Generic (PLEG): container finished" podID="69547021-fae1-4ad6-8745-c327bb079dce" containerID="e6be43aa992650e79f391597a8fccb4cc829615f369f4902627c5d5e92b6ab1e" exitCode=0 Mar 13 12:12:04 crc kubenswrapper[4632]: I0313 12:12:04.903650 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556732-zffdj" event={"ID":"69547021-fae1-4ad6-8745-c327bb079dce","Type":"ContainerDied","Data":"e6be43aa992650e79f391597a8fccb4cc829615f369f4902627c5d5e92b6ab1e"} Mar 13 12:12:06 crc kubenswrapper[4632]: I0313 12:12:06.568314 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556732-zffdj" Mar 13 12:12:06 crc kubenswrapper[4632]: I0313 12:12:06.733013 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24l62\" (UniqueName: \"kubernetes.io/projected/69547021-fae1-4ad6-8745-c327bb079dce-kube-api-access-24l62\") pod \"69547021-fae1-4ad6-8745-c327bb079dce\" (UID: \"69547021-fae1-4ad6-8745-c327bb079dce\") " Mar 13 12:12:06 crc kubenswrapper[4632]: I0313 12:12:06.742140 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69547021-fae1-4ad6-8745-c327bb079dce-kube-api-access-24l62" (OuterVolumeSpecName: "kube-api-access-24l62") pod "69547021-fae1-4ad6-8745-c327bb079dce" (UID: "69547021-fae1-4ad6-8745-c327bb079dce"). InnerVolumeSpecName "kube-api-access-24l62". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:12:06 crc kubenswrapper[4632]: I0313 12:12:06.835534 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24l62\" (UniqueName: \"kubernetes.io/projected/69547021-fae1-4ad6-8745-c327bb079dce-kube-api-access-24l62\") on node \"crc\" DevicePath \"\"" Mar 13 12:12:06 crc kubenswrapper[4632]: I0313 12:12:06.925460 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556732-zffdj" event={"ID":"69547021-fae1-4ad6-8745-c327bb079dce","Type":"ContainerDied","Data":"b661ff26ad7f8831aa3ef67c981c437bbf2ba8ab70664de65931cf5f2426bdcd"} Mar 13 12:12:06 crc kubenswrapper[4632]: I0313 12:12:06.925522 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b661ff26ad7f8831aa3ef67c981c437bbf2ba8ab70664de65931cf5f2426bdcd" Mar 13 12:12:06 crc kubenswrapper[4632]: I0313 12:12:06.925652 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556732-zffdj"
Mar 13 12:12:07 crc kubenswrapper[4632]: I0313 12:12:07.008688 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556726-pjf8m"]
Mar 13 12:12:07 crc kubenswrapper[4632]: I0313 12:12:07.018930 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556726-pjf8m"]
Mar 13 12:12:08 crc kubenswrapper[4632]: I0313 12:12:08.058229 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bb028cb-1d8f-4f09-8c73-135a6ce2bb46" path="/var/lib/kubelet/pods/2bb028cb-1d8f-4f09-8c73-135a6ce2bb46/volumes"
Mar 13 12:12:12 crc kubenswrapper[4632]: I0313 12:12:12.159090 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-695f666b49-nw48z"
Mar 13 12:12:12 crc kubenswrapper[4632]: I0313 12:12:12.271682 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59586ff4c9-s4xn7"]
Mar 13 12:12:12 crc kubenswrapper[4632]: I0313 12:12:12.271930 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59586ff4c9-s4xn7" podUID="8b9495c7-c9ae-4a07-b216-a250d4cd274e" containerName="neutron-api" containerID="cri-o://94fc75b5bf96292690ce359a5d4ce65dd30bc2b06b1aeb4d309bd6e1dcd7e70c" gracePeriod=30
Mar 13 12:12:12 crc kubenswrapper[4632]: I0313 12:12:12.272130 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59586ff4c9-s4xn7" podUID="8b9495c7-c9ae-4a07-b216-a250d4cd274e" containerName="neutron-httpd" containerID="cri-o://82e1ea3147a5e24713a581b7fd1d1be6dc38543edaf91f1fa20ce5282f06b072" gracePeriod=30
Mar 13 12:12:12 crc kubenswrapper[4632]: I0313 12:12:12.981274 4632 generic.go:334] "Generic (PLEG): container finished" podID="8b9495c7-c9ae-4a07-b216-a250d4cd274e" containerID="82e1ea3147a5e24713a581b7fd1d1be6dc38543edaf91f1fa20ce5282f06b072" exitCode=0
Mar 13 12:12:12 crc kubenswrapper[4632]: I0313 12:12:12.981352 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59586ff4c9-s4xn7" event={"ID":"8b9495c7-c9ae-4a07-b216-a250d4cd274e","Type":"ContainerDied","Data":"82e1ea3147a5e24713a581b7fd1d1be6dc38543edaf91f1fa20ce5282f06b072"}
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.014218 4632 generic.go:334] "Generic (PLEG): container finished" podID="8b9495c7-c9ae-4a07-b216-a250d4cd274e" containerID="94fc75b5bf96292690ce359a5d4ce65dd30bc2b06b1aeb4d309bd6e1dcd7e70c" exitCode=0
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.014341 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59586ff4c9-s4xn7" event={"ID":"8b9495c7-c9ae-4a07-b216-a250d4cd274e","Type":"ContainerDied","Data":"94fc75b5bf96292690ce359a5d4ce65dd30bc2b06b1aeb4d309bd6e1dcd7e70c"}
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.115408 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59586ff4c9-s4xn7"
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.226974 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-ovndb-tls-certs\") pod \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") "
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.227211 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-combined-ca-bundle\") pod \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") "
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.227263 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llcp8\" (UniqueName: \"kubernetes.io/projected/8b9495c7-c9ae-4a07-b216-a250d4cd274e-kube-api-access-llcp8\") pod \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") "
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.227296 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-public-tls-certs\") pod \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") "
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.227341 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-httpd-config\") pod \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") "
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.227366 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-internal-tls-certs\") pod \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") "
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.227421 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-config\") pod \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\" (UID: \"8b9495c7-c9ae-4a07-b216-a250d4cd274e\") "
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.259810 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b9495c7-c9ae-4a07-b216-a250d4cd274e-kube-api-access-llcp8" (OuterVolumeSpecName: "kube-api-access-llcp8") pod "8b9495c7-c9ae-4a07-b216-a250d4cd274e" (UID: "8b9495c7-c9ae-4a07-b216-a250d4cd274e"). InnerVolumeSpecName "kube-api-access-llcp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.271280 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8b9495c7-c9ae-4a07-b216-a250d4cd274e" (UID: "8b9495c7-c9ae-4a07-b216-a250d4cd274e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.309901 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8b9495c7-c9ae-4a07-b216-a250d4cd274e" (UID: "8b9495c7-c9ae-4a07-b216-a250d4cd274e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.318535 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8b9495c7-c9ae-4a07-b216-a250d4cd274e" (UID: "8b9495c7-c9ae-4a07-b216-a250d4cd274e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.333408 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llcp8\" (UniqueName: \"kubernetes.io/projected/8b9495c7-c9ae-4a07-b216-a250d4cd274e-kube-api-access-llcp8\") on node \"crc\" DevicePath \"\""
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.334006 4632 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.334181 4632 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-httpd-config\") on node \"crc\" DevicePath \"\""
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.334209 4632 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.343899 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b9495c7-c9ae-4a07-b216-a250d4cd274e" (UID: "8b9495c7-c9ae-4a07-b216-a250d4cd274e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.348728 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8b9495c7-c9ae-4a07-b216-a250d4cd274e" (UID: "8b9495c7-c9ae-4a07-b216-a250d4cd274e"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.361061 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-config" (OuterVolumeSpecName: "config") pod "8b9495c7-c9ae-4a07-b216-a250d4cd274e" (UID: "8b9495c7-c9ae-4a07-b216-a250d4cd274e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.435715 4632 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.435759 4632 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-config\") on node \"crc\" DevicePath \"\""
Mar 13 12:12:16 crc kubenswrapper[4632]: I0313 12:12:16.435773 4632 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9495c7-c9ae-4a07-b216-a250d4cd274e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 13 12:12:17 crc kubenswrapper[4632]: I0313 12:12:17.030562 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59586ff4c9-s4xn7" event={"ID":"8b9495c7-c9ae-4a07-b216-a250d4cd274e","Type":"ContainerDied","Data":"baa77c1d37fb9c8cc82676bdaaab769c07869647e902c21581eef67e591e5d68"}
Mar 13 12:12:17 crc kubenswrapper[4632]: I0313 12:12:17.030783 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59586ff4c9-s4xn7"
Mar 13 12:12:17 crc kubenswrapper[4632]: I0313 12:12:17.031860 4632 scope.go:117] "RemoveContainer" containerID="82e1ea3147a5e24713a581b7fd1d1be6dc38543edaf91f1fa20ce5282f06b072"
Mar 13 12:12:17 crc kubenswrapper[4632]: I0313 12:12:17.078909 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59586ff4c9-s4xn7"]
Mar 13 12:12:17 crc kubenswrapper[4632]: I0313 12:12:17.083067 4632 scope.go:117] "RemoveContainer" containerID="94fc75b5bf96292690ce359a5d4ce65dd30bc2b06b1aeb4d309bd6e1dcd7e70c"
Mar 13 12:12:17 crc kubenswrapper[4632]: I0313 12:12:17.087588 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59586ff4c9-s4xn7"]
Mar 13 12:12:18 crc kubenswrapper[4632]: I0313 12:12:18.060885 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b9495c7-c9ae-4a07-b216-a250d4cd274e" path="/var/lib/kubelet/pods/8b9495c7-c9ae-4a07-b216-a250d4cd274e/volumes"
Mar 13 12:12:23 crc kubenswrapper[4632]: I0313 12:12:23.251067 4632 trace.go:236] Trace[334643179]: "Calculate volume metrics of multus-daemon-config for pod openshift-multus/multus-gqf22" (13-Mar-2026 12:12:21.664) (total time: 1581ms):
Mar 13 12:12:23 crc kubenswrapper[4632]: Trace[334643179]: [1.581560521s] [1.581560521s] END
Mar 13 12:12:25 crc kubenswrapper[4632]: I0313 12:12:25.035018 4632 scope.go:117] "RemoveContainer" containerID="d0bcc7d380da406a96a84a48eb87ce7d69302220aa1982823e25de22c01f77e0"
Mar 13 12:12:43 crc kubenswrapper[4632]: I0313 12:12:43.784134 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8xrj9"]
Mar 13 12:12:43 crc kubenswrapper[4632]: E0313 12:12:43.785145 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69547021-fae1-4ad6-8745-c327bb079dce" containerName="oc"
Mar 13 12:12:43 crc kubenswrapper[4632]: I0313 12:12:43.785165 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="69547021-fae1-4ad6-8745-c327bb079dce" containerName="oc"
Mar 13 12:12:43 crc kubenswrapper[4632]: E0313 12:12:43.785182 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9495c7-c9ae-4a07-b216-a250d4cd274e" containerName="neutron-httpd"
Mar 13 12:12:43 crc kubenswrapper[4632]: I0313 12:12:43.785190 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9495c7-c9ae-4a07-b216-a250d4cd274e" containerName="neutron-httpd"
Mar 13 12:12:43 crc kubenswrapper[4632]: E0313 12:12:43.785242 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9495c7-c9ae-4a07-b216-a250d4cd274e" containerName="neutron-api"
Mar 13 12:12:43 crc kubenswrapper[4632]: I0313 12:12:43.785252 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9495c7-c9ae-4a07-b216-a250d4cd274e" containerName="neutron-api"
Mar 13 12:12:43 crc kubenswrapper[4632]: I0313 12:12:43.785478 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b9495c7-c9ae-4a07-b216-a250d4cd274e" containerName="neutron-api"
Mar 13 12:12:43 crc kubenswrapper[4632]: I0313 12:12:43.785504 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b9495c7-c9ae-4a07-b216-a250d4cd274e" containerName="neutron-httpd"
Mar 13 12:12:43 crc kubenswrapper[4632]: I0313 12:12:43.785524 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="69547021-fae1-4ad6-8745-c327bb079dce" containerName="oc"
Mar 13 12:12:43 crc kubenswrapper[4632]: I0313 12:12:43.797121 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xrj9"]
Mar 13 12:12:43 crc kubenswrapper[4632]: I0313 12:12:43.797267 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:12:43 crc kubenswrapper[4632]: I0313 12:12:43.903463 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58986885-b2ff-450a-b232-a26163de811a-utilities\") pod \"certified-operators-8xrj9\" (UID: \"58986885-b2ff-450a-b232-a26163de811a\") " pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:12:43 crc kubenswrapper[4632]: I0313 12:12:43.903800 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24rgb\" (UniqueName: \"kubernetes.io/projected/58986885-b2ff-450a-b232-a26163de811a-kube-api-access-24rgb\") pod \"certified-operators-8xrj9\" (UID: \"58986885-b2ff-450a-b232-a26163de811a\") " pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:12:43 crc kubenswrapper[4632]: I0313 12:12:43.903931 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58986885-b2ff-450a-b232-a26163de811a-catalog-content\") pod \"certified-operators-8xrj9\" (UID: \"58986885-b2ff-450a-b232-a26163de811a\") " pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:12:44 crc kubenswrapper[4632]: I0313 12:12:44.006087 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58986885-b2ff-450a-b232-a26163de811a-utilities\") pod \"certified-operators-8xrj9\" (UID: \"58986885-b2ff-450a-b232-a26163de811a\") " pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:12:44 crc kubenswrapper[4632]: I0313 12:12:44.006145 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24rgb\" (UniqueName: \"kubernetes.io/projected/58986885-b2ff-450a-b232-a26163de811a-kube-api-access-24rgb\") pod \"certified-operators-8xrj9\" (UID: \"58986885-b2ff-450a-b232-a26163de811a\") " pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:12:44 crc kubenswrapper[4632]: I0313 12:12:44.006195 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58986885-b2ff-450a-b232-a26163de811a-catalog-content\") pod \"certified-operators-8xrj9\" (UID: \"58986885-b2ff-450a-b232-a26163de811a\") " pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:12:44 crc kubenswrapper[4632]: I0313 12:12:44.006769 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58986885-b2ff-450a-b232-a26163de811a-catalog-content\") pod \"certified-operators-8xrj9\" (UID: \"58986885-b2ff-450a-b232-a26163de811a\") " pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:12:44 crc kubenswrapper[4632]: I0313 12:12:44.006921 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58986885-b2ff-450a-b232-a26163de811a-utilities\") pod \"certified-operators-8xrj9\" (UID: \"58986885-b2ff-450a-b232-a26163de811a\") " pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:12:44 crc kubenswrapper[4632]: I0313 12:12:44.039603 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24rgb\" (UniqueName: \"kubernetes.io/projected/58986885-b2ff-450a-b232-a26163de811a-kube-api-access-24rgb\") pod \"certified-operators-8xrj9\" (UID: \"58986885-b2ff-450a-b232-a26163de811a\") " pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:12:44 crc kubenswrapper[4632]: I0313 12:12:44.129993 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:12:44 crc kubenswrapper[4632]: I0313 12:12:44.716512 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xrj9"]
Mar 13 12:12:45 crc kubenswrapper[4632]: I0313 12:12:45.514497 4632 generic.go:334] "Generic (PLEG): container finished" podID="58986885-b2ff-450a-b232-a26163de811a" containerID="b27af3f83423049c58aa5949bc99cebcf9fba37e5504bb6ed378f00293a5e3ff" exitCode=0
Mar 13 12:12:45 crc kubenswrapper[4632]: I0313 12:12:45.514597 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xrj9" event={"ID":"58986885-b2ff-450a-b232-a26163de811a","Type":"ContainerDied","Data":"b27af3f83423049c58aa5949bc99cebcf9fba37e5504bb6ed378f00293a5e3ff"}
Mar 13 12:12:45 crc kubenswrapper[4632]: I0313 12:12:45.514872 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xrj9" event={"ID":"58986885-b2ff-450a-b232-a26163de811a","Type":"ContainerStarted","Data":"e7db2df784e22387bb91352a6bc88e575028a70869b8cf3ee15bff73c9b7bfb4"}
Mar 13 12:12:46 crc kubenswrapper[4632]: I0313 12:12:46.529894 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xrj9" event={"ID":"58986885-b2ff-450a-b232-a26163de811a","Type":"ContainerStarted","Data":"2d9f6c616f332d122081782eff3ae0b45da85be1a8961803eef5bdd4c8ed6d19"}
Mar 13 12:12:48 crc kubenswrapper[4632]: I0313 12:12:48.562916 4632 generic.go:334] "Generic (PLEG): container finished" podID="58986885-b2ff-450a-b232-a26163de811a" containerID="2d9f6c616f332d122081782eff3ae0b45da85be1a8961803eef5bdd4c8ed6d19" exitCode=0
Mar 13 12:12:48 crc kubenswrapper[4632]: I0313 12:12:48.563246 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xrj9" event={"ID":"58986885-b2ff-450a-b232-a26163de811a","Type":"ContainerDied","Data":"2d9f6c616f332d122081782eff3ae0b45da85be1a8961803eef5bdd4c8ed6d19"}
Mar 13 12:12:49 crc kubenswrapper[4632]: I0313 12:12:49.574807 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xrj9" event={"ID":"58986885-b2ff-450a-b232-a26163de811a","Type":"ContainerStarted","Data":"2af6cb2b9380482b4ea542096a63d2330cf2514fce0a1df0deaa98cec712ba02"}
Mar 13 12:12:49 crc kubenswrapper[4632]: I0313 12:12:49.612754 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8xrj9" podStartSLOduration=3.06037933 podStartE2EDuration="6.612728643s" podCreationTimestamp="2026-03-13 12:12:43 +0000 UTC" firstStartedPulling="2026-03-13 12:12:45.51641246 +0000 UTC m=+7739.538942593" lastFinishedPulling="2026-03-13 12:12:49.068761783 +0000 UTC m=+7743.091291906" observedRunningTime="2026-03-13 12:12:49.602282166 +0000 UTC m=+7743.624812319" watchObservedRunningTime="2026-03-13 12:12:49.612728643 +0000 UTC m=+7743.635258796"
Mar 13 12:12:54 crc kubenswrapper[4632]: I0313 12:12:54.130324 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:12:54 crc kubenswrapper[4632]: I0313 12:12:54.130799 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:12:55 crc kubenswrapper[4632]: I0313 12:12:55.176537 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8xrj9" podUID="58986885-b2ff-450a-b232-a26163de811a" containerName="registry-server" probeResult="failure" output=<
Mar 13 12:12:55 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 12:12:55 crc kubenswrapper[4632]: >
Mar 13 12:13:05 crc kubenswrapper[4632]: I0313 12:13:05.184095 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8xrj9" podUID="58986885-b2ff-450a-b232-a26163de811a" containerName="registry-server" probeResult="failure" output=<
Mar 13 12:13:05 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 12:13:05 crc kubenswrapper[4632]: >
Mar 13 12:13:14 crc kubenswrapper[4632]: I0313 12:13:14.306913 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:13:14 crc kubenswrapper[4632]: I0313 12:13:14.361432 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:13:15 crc kubenswrapper[4632]: I0313 12:13:15.027922 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8xrj9"]
Mar 13 12:13:15 crc kubenswrapper[4632]: I0313 12:13:15.904738 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8xrj9" podUID="58986885-b2ff-450a-b232-a26163de811a" containerName="registry-server" containerID="cri-o://2af6cb2b9380482b4ea542096a63d2330cf2514fce0a1df0deaa98cec712ba02" gracePeriod=2
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.478324 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.594770 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24rgb\" (UniqueName: \"kubernetes.io/projected/58986885-b2ff-450a-b232-a26163de811a-kube-api-access-24rgb\") pod \"58986885-b2ff-450a-b232-a26163de811a\" (UID: \"58986885-b2ff-450a-b232-a26163de811a\") "
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.594973 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58986885-b2ff-450a-b232-a26163de811a-utilities\") pod \"58986885-b2ff-450a-b232-a26163de811a\" (UID: \"58986885-b2ff-450a-b232-a26163de811a\") "
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.595134 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58986885-b2ff-450a-b232-a26163de811a-catalog-content\") pod \"58986885-b2ff-450a-b232-a26163de811a\" (UID: \"58986885-b2ff-450a-b232-a26163de811a\") "
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.595536 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58986885-b2ff-450a-b232-a26163de811a-utilities" (OuterVolumeSpecName: "utilities") pod "58986885-b2ff-450a-b232-a26163de811a" (UID: "58986885-b2ff-450a-b232-a26163de811a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.595906 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58986885-b2ff-450a-b232-a26163de811a-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.621152 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58986885-b2ff-450a-b232-a26163de811a-kube-api-access-24rgb" (OuterVolumeSpecName: "kube-api-access-24rgb") pod "58986885-b2ff-450a-b232-a26163de811a" (UID: "58986885-b2ff-450a-b232-a26163de811a"). InnerVolumeSpecName "kube-api-access-24rgb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.669722 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58986885-b2ff-450a-b232-a26163de811a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58986885-b2ff-450a-b232-a26163de811a" (UID: "58986885-b2ff-450a-b232-a26163de811a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.697249 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58986885-b2ff-450a-b232-a26163de811a-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.697279 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24rgb\" (UniqueName: \"kubernetes.io/projected/58986885-b2ff-450a-b232-a26163de811a-kube-api-access-24rgb\") on node \"crc\" DevicePath \"\""
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.918548 4632 generic.go:334] "Generic (PLEG): container finished" podID="58986885-b2ff-450a-b232-a26163de811a" containerID="2af6cb2b9380482b4ea542096a63d2330cf2514fce0a1df0deaa98cec712ba02" exitCode=0
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.918728 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xrj9" event={"ID":"58986885-b2ff-450a-b232-a26163de811a","Type":"ContainerDied","Data":"2af6cb2b9380482b4ea542096a63d2330cf2514fce0a1df0deaa98cec712ba02"}
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.918793 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xrj9"
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.918868 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xrj9" event={"ID":"58986885-b2ff-450a-b232-a26163de811a","Type":"ContainerDied","Data":"e7db2df784e22387bb91352a6bc88e575028a70869b8cf3ee15bff73c9b7bfb4"}
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.918897 4632 scope.go:117] "RemoveContainer" containerID="2af6cb2b9380482b4ea542096a63d2330cf2514fce0a1df0deaa98cec712ba02"
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.945776 4632 scope.go:117] "RemoveContainer" containerID="2d9f6c616f332d122081782eff3ae0b45da85be1a8961803eef5bdd4c8ed6d19"
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.965628 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8xrj9"]
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.977778 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8xrj9"]
Mar 13 12:13:16 crc kubenswrapper[4632]: I0313 12:13:16.991387 4632 scope.go:117] "RemoveContainer" containerID="b27af3f83423049c58aa5949bc99cebcf9fba37e5504bb6ed378f00293a5e3ff"
Mar 13 12:13:17 crc kubenswrapper[4632]: I0313 12:13:17.033179 4632 scope.go:117] "RemoveContainer" containerID="2af6cb2b9380482b4ea542096a63d2330cf2514fce0a1df0deaa98cec712ba02"
Mar 13 12:13:17 crc kubenswrapper[4632]: E0313 12:13:17.038180 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2af6cb2b9380482b4ea542096a63d2330cf2514fce0a1df0deaa98cec712ba02\": container with ID starting with 2af6cb2b9380482b4ea542096a63d2330cf2514fce0a1df0deaa98cec712ba02 not found: ID does not exist" containerID="2af6cb2b9380482b4ea542096a63d2330cf2514fce0a1df0deaa98cec712ba02"
Mar 13 12:13:17 crc kubenswrapper[4632]: I0313 12:13:17.038237 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af6cb2b9380482b4ea542096a63d2330cf2514fce0a1df0deaa98cec712ba02"} err="failed to get container status \"2af6cb2b9380482b4ea542096a63d2330cf2514fce0a1df0deaa98cec712ba02\": rpc error: code = NotFound desc = could not find container \"2af6cb2b9380482b4ea542096a63d2330cf2514fce0a1df0deaa98cec712ba02\": container with ID starting with 2af6cb2b9380482b4ea542096a63d2330cf2514fce0a1df0deaa98cec712ba02 not found: ID does not exist"
Mar 13 12:13:17 crc kubenswrapper[4632]: I0313 12:13:17.038410 4632 scope.go:117] "RemoveContainer" containerID="2d9f6c616f332d122081782eff3ae0b45da85be1a8961803eef5bdd4c8ed6d19"
Mar 13 12:13:17 crc kubenswrapper[4632]: E0313 12:13:17.038869 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d9f6c616f332d122081782eff3ae0b45da85be1a8961803eef5bdd4c8ed6d19\": container with ID starting with 2d9f6c616f332d122081782eff3ae0b45da85be1a8961803eef5bdd4c8ed6d19 not found: ID does not exist" containerID="2d9f6c616f332d122081782eff3ae0b45da85be1a8961803eef5bdd4c8ed6d19"
Mar 13 12:13:17 crc kubenswrapper[4632]: I0313 12:13:17.038907 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d9f6c616f332d122081782eff3ae0b45da85be1a8961803eef5bdd4c8ed6d19"} err="failed to get container status \"2d9f6c616f332d122081782eff3ae0b45da85be1a8961803eef5bdd4c8ed6d19\": rpc error: code = NotFound desc = could not find container \"2d9f6c616f332d122081782eff3ae0b45da85be1a8961803eef5bdd4c8ed6d19\": container with ID starting with 2d9f6c616f332d122081782eff3ae0b45da85be1a8961803eef5bdd4c8ed6d19 not found: ID does not exist"
Mar 13 12:13:17 crc kubenswrapper[4632]: I0313 12:13:17.038928 4632 scope.go:117] "RemoveContainer" containerID="b27af3f83423049c58aa5949bc99cebcf9fba37e5504bb6ed378f00293a5e3ff"
Mar 13 12:13:17 crc kubenswrapper[4632]: E0313 12:13:17.039243 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b27af3f83423049c58aa5949bc99cebcf9fba37e5504bb6ed378f00293a5e3ff\": container with ID starting with b27af3f83423049c58aa5949bc99cebcf9fba37e5504bb6ed378f00293a5e3ff not found: ID does not exist" containerID="b27af3f83423049c58aa5949bc99cebcf9fba37e5504bb6ed378f00293a5e3ff"
Mar 13 12:13:17 crc kubenswrapper[4632]: I0313 12:13:17.039273 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b27af3f83423049c58aa5949bc99cebcf9fba37e5504bb6ed378f00293a5e3ff"} err="failed to get container status \"b27af3f83423049c58aa5949bc99cebcf9fba37e5504bb6ed378f00293a5e3ff\": rpc error: code = NotFound desc = could not find container \"b27af3f83423049c58aa5949bc99cebcf9fba37e5504bb6ed378f00293a5e3ff\": container with ID starting with b27af3f83423049c58aa5949bc99cebcf9fba37e5504bb6ed378f00293a5e3ff not found: ID does not exist"
Mar 13 12:13:18 crc kubenswrapper[4632]: I0313 12:13:18.057904 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58986885-b2ff-450a-b232-a26163de811a" path="/var/lib/kubelet/pods/58986885-b2ff-450a-b232-a26163de811a/volumes"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.170811 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-82xv7"]
Mar 13 12:13:26 crc kubenswrapper[4632]: E0313 12:13:26.172265 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58986885-b2ff-450a-b232-a26163de811a" containerName="registry-server"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.172283 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="58986885-b2ff-450a-b232-a26163de811a" containerName="registry-server"
Mar 13 12:13:26 crc kubenswrapper[4632]: E0313 12:13:26.172314 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58986885-b2ff-450a-b232-a26163de811a" containerName="extract-utilities"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.172321 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="58986885-b2ff-450a-b232-a26163de811a" containerName="extract-utilities"
Mar 13 12:13:26 crc kubenswrapper[4632]: E0313 12:13:26.172330 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58986885-b2ff-450a-b232-a26163de811a" containerName="extract-content"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.172338 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="58986885-b2ff-450a-b232-a26163de811a" containerName="extract-content"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.172597 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="58986885-b2ff-450a-b232-a26163de811a" containerName="registry-server"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.174486 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.186107 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82xv7"]
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.305830 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/356af638-6aee-4a8f-996e-04eda41f3c75-utilities\") pod \"redhat-operators-82xv7\" (UID: \"356af638-6aee-4a8f-996e-04eda41f3c75\") " pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.305872 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/356af638-6aee-4a8f-996e-04eda41f3c75-catalog-content\") pod \"redhat-operators-82xv7\" (UID: \"356af638-6aee-4a8f-996e-04eda41f3c75\") " pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.306055 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f497x\" (UniqueName: \"kubernetes.io/projected/356af638-6aee-4a8f-996e-04eda41f3c75-kube-api-access-f497x\") pod \"redhat-operators-82xv7\" (UID: \"356af638-6aee-4a8f-996e-04eda41f3c75\") " pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.407501 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f497x\" (UniqueName: \"kubernetes.io/projected/356af638-6aee-4a8f-996e-04eda41f3c75-kube-api-access-f497x\") pod \"redhat-operators-82xv7\" (UID: \"356af638-6aee-4a8f-996e-04eda41f3c75\") " pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.407586 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/356af638-6aee-4a8f-996e-04eda41f3c75-utilities\") pod \"redhat-operators-82xv7\" (UID: \"356af638-6aee-4a8f-996e-04eda41f3c75\") " pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.407614 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/356af638-6aee-4a8f-996e-04eda41f3c75-catalog-content\") pod \"redhat-operators-82xv7\" (UID: \"356af638-6aee-4a8f-996e-04eda41f3c75\") " pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.408735 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/356af638-6aee-4a8f-996e-04eda41f3c75-catalog-content\") pod \"redhat-operators-82xv7\" (UID: \"356af638-6aee-4a8f-996e-04eda41f3c75\") " pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.409063 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/356af638-6aee-4a8f-996e-04eda41f3c75-utilities\") pod \"redhat-operators-82xv7\" (UID: \"356af638-6aee-4a8f-996e-04eda41f3c75\") " pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.434674 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f497x\" (UniqueName: \"kubernetes.io/projected/356af638-6aee-4a8f-996e-04eda41f3c75-kube-api-access-f497x\") pod \"redhat-operators-82xv7\" (UID: \"356af638-6aee-4a8f-996e-04eda41f3c75\") " pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.494376 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:13:26 crc kubenswrapper[4632]: I0313 12:13:26.989880 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82xv7"]
Mar 13 12:13:27 crc kubenswrapper[4632]: I0313 12:13:27.027454 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82xv7" event={"ID":"356af638-6aee-4a8f-996e-04eda41f3c75","Type":"ContainerStarted","Data":"b62d309f4dfbdb74ea9d5a23db29bddea476b47f846e66dc66702ff94689f734"}
Mar 13 12:13:28 crc kubenswrapper[4632]: I0313 12:13:28.048591 4632 generic.go:334] "Generic (PLEG): container finished" podID="356af638-6aee-4a8f-996e-04eda41f3c75" containerID="37e01b8e98a8f4df777413f141a70a28ede73377dfa9804760ceab29634fccc3" exitCode=0
Mar 13 12:13:28 crc kubenswrapper[4632]: I0313 12:13:28.060296 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82xv7" event={"ID":"356af638-6aee-4a8f-996e-04eda41f3c75","Type":"ContainerDied","Data":"37e01b8e98a8f4df777413f141a70a28ede73377dfa9804760ceab29634fccc3"}
Mar 13 12:13:28 crc kubenswrapper[4632]: I0313 12:13:28.064435 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 12:13:30 crc kubenswrapper[4632]: I0313 12:13:30.106027 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82xv7" event={"ID":"356af638-6aee-4a8f-996e-04eda41f3c75","Type":"ContainerStarted","Data":"2c91311b1f55ad95b395eb30b7c8a1dcb54c8e11934e8e37b24fd616a95caf8f"}
Mar 13 12:13:36 crc kubenswrapper[4632]: I0313 12:13:36.169813 4632 generic.go:334] "Generic (PLEG): container finished" podID="356af638-6aee-4a8f-996e-04eda41f3c75" containerID="2c91311b1f55ad95b395eb30b7c8a1dcb54c8e11934e8e37b24fd616a95caf8f" exitCode=0
Mar 13 12:13:36 crc kubenswrapper[4632]: I0313 12:13:36.170036 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82xv7" event={"ID":"356af638-6aee-4a8f-996e-04eda41f3c75","Type":"ContainerDied","Data":"2c91311b1f55ad95b395eb30b7c8a1dcb54c8e11934e8e37b24fd616a95caf8f"}
Mar 13 12:13:37 crc kubenswrapper[4632]: I0313 12:13:37.182256 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82xv7" event={"ID":"356af638-6aee-4a8f-996e-04eda41f3c75","Type":"ContainerStarted","Data":"a4ec22df9491ee29893ecc4c1c955cb46cd5e215eb52c9438822b29458df34d6"}
Mar 13 12:13:37 crc kubenswrapper[4632]: I0313 12:13:37.206024 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-82xv7" podStartSLOduration=2.576909333 podStartE2EDuration="11.205997093s" podCreationTimestamp="2026-03-13 12:13:26 +0000 UTC" firstStartedPulling="2026-03-13 12:13:28.062133412 +0000 UTC m=+7782.084663545" lastFinishedPulling="2026-03-13 12:13:36.691221172 +0000 UTC m=+7790.713751305" observedRunningTime="2026-03-13 12:13:37.201262906 +0000 UTC m=+7791.223793059" watchObservedRunningTime="2026-03-13 12:13:37.205997093 +0000 UTC m=+7791.228527216"
Mar 13 12:13:40 crc kubenswrapper[4632]: I0313 12:13:40.461689 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 12:13:40 crc kubenswrapper[4632]: I0313 12:13:40.463888 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 12:13:46 crc kubenswrapper[4632]: I0313 12:13:46.495476 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:13:46 crc kubenswrapper[4632]: I0313 12:13:46.496019 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:13:47 crc kubenswrapper[4632]: I0313 12:13:47.543225 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82xv7" podUID="356af638-6aee-4a8f-996e-04eda41f3c75" containerName="registry-server" probeResult="failure" output=<
Mar 13 12:13:47 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 12:13:47 crc kubenswrapper[4632]: >
Mar 13 12:13:57 crc kubenswrapper[4632]: I0313 12:13:57.554909 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82xv7" podUID="356af638-6aee-4a8f-996e-04eda41f3c75" containerName="registry-server" probeResult="failure" output=<
Mar 13 12:13:57 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 12:13:57 crc kubenswrapper[4632]: >
Mar 13 12:14:00 crc kubenswrapper[4632]: I0313 12:14:00.265969 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556734-bwv75"]
Mar 13 12:14:00 crc kubenswrapper[4632]: I0313 12:14:00.268399 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556734-bwv75"
Mar 13 12:14:00 crc kubenswrapper[4632]: I0313 12:14:00.292973 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 12:14:00 crc kubenswrapper[4632]: I0313 12:14:00.293009 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 12:14:00 crc kubenswrapper[4632]: I0313 12:14:00.294861 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 12:14:00 crc kubenswrapper[4632]: I0313 12:14:00.301531 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556734-bwv75"]
Mar 13 12:14:00 crc kubenswrapper[4632]: I0313 12:14:00.332547 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxgj5\" (UniqueName: \"kubernetes.io/projected/5d0a5571-d345-44cf-ba1a-46b3ef68b1ae-kube-api-access-lxgj5\") pod \"auto-csr-approver-29556734-bwv75\" (UID: \"5d0a5571-d345-44cf-ba1a-46b3ef68b1ae\") " pod="openshift-infra/auto-csr-approver-29556734-bwv75"
Mar 13 12:14:00 crc kubenswrapper[4632]: I0313 12:14:00.437097 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxgj5\" (UniqueName: \"kubernetes.io/projected/5d0a5571-d345-44cf-ba1a-46b3ef68b1ae-kube-api-access-lxgj5\") pod \"auto-csr-approver-29556734-bwv75\" (UID: \"5d0a5571-d345-44cf-ba1a-46b3ef68b1ae\") " pod="openshift-infra/auto-csr-approver-29556734-bwv75"
Mar 13 12:14:00 crc kubenswrapper[4632]: I0313 12:14:00.490401 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxgj5\" (UniqueName: \"kubernetes.io/projected/5d0a5571-d345-44cf-ba1a-46b3ef68b1ae-kube-api-access-lxgj5\") pod \"auto-csr-approver-29556734-bwv75\" (UID: \"5d0a5571-d345-44cf-ba1a-46b3ef68b1ae\") " pod="openshift-infra/auto-csr-approver-29556734-bwv75"
Mar 13 12:14:00 crc kubenswrapper[4632]: I0313 12:14:00.600353 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556734-bwv75"
Mar 13 12:14:01 crc kubenswrapper[4632]: I0313 12:14:01.798041 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556734-bwv75"]
Mar 13 12:14:01 crc kubenswrapper[4632]: W0313 12:14:01.816436 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d0a5571_d345_44cf_ba1a_46b3ef68b1ae.slice/crio-b1227e617d866c1a39d8664c23193a023ef1eeed2c0352c7b8799a2a86a8c01c WatchSource:0}: Error finding container b1227e617d866c1a39d8664c23193a023ef1eeed2c0352c7b8799a2a86a8c01c: Status 404 returned error can't find the container with id b1227e617d866c1a39d8664c23193a023ef1eeed2c0352c7b8799a2a86a8c01c
Mar 13 12:14:02 crc kubenswrapper[4632]: I0313 12:14:02.436786 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556734-bwv75" event={"ID":"5d0a5571-d345-44cf-ba1a-46b3ef68b1ae","Type":"ContainerStarted","Data":"b1227e617d866c1a39d8664c23193a023ef1eeed2c0352c7b8799a2a86a8c01c"}
Mar 13 12:14:04 crc kubenswrapper[4632]: I0313 12:14:04.472607 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556734-bwv75" event={"ID":"5d0a5571-d345-44cf-ba1a-46b3ef68b1ae","Type":"ContainerStarted","Data":"6faf20a0273a34b58a9864d2f460cdaebfb4cad46108ca19c20cf494270e4fe6"}
Mar 13 12:14:04 crc kubenswrapper[4632]: I0313 12:14:04.488358 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556734-bwv75" podStartSLOduration=3.436713212 podStartE2EDuration="4.488338958s" podCreationTimestamp="2026-03-13 12:14:00 +0000 UTC" firstStartedPulling="2026-03-13 12:14:01.82010712 +0000 UTC m=+7815.842637253" lastFinishedPulling="2026-03-13 12:14:02.871732866 +0000 UTC m=+7816.894262999" observedRunningTime="2026-03-13 12:14:04.485446037 +0000 UTC m=+7818.507976170" watchObservedRunningTime="2026-03-13 12:14:04.488338958 +0000 UTC m=+7818.510869091"
Mar 13 12:14:06 crc kubenswrapper[4632]: I0313 12:14:06.492849 4632 generic.go:334] "Generic (PLEG): container finished" podID="5d0a5571-d345-44cf-ba1a-46b3ef68b1ae" containerID="6faf20a0273a34b58a9864d2f460cdaebfb4cad46108ca19c20cf494270e4fe6" exitCode=0
Mar 13 12:14:06 crc kubenswrapper[4632]: I0313 12:14:06.492916 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556734-bwv75" event={"ID":"5d0a5571-d345-44cf-ba1a-46b3ef68b1ae","Type":"ContainerDied","Data":"6faf20a0273a34b58a9864d2f460cdaebfb4cad46108ca19c20cf494270e4fe6"}
Mar 13 12:14:07 crc kubenswrapper[4632]: I0313 12:14:07.565263 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82xv7" podUID="356af638-6aee-4a8f-996e-04eda41f3c75" containerName="registry-server" probeResult="failure" output=<
Mar 13 12:14:07 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 12:14:07 crc kubenswrapper[4632]: >
Mar 13 12:14:08 crc kubenswrapper[4632]: I0313 12:14:08.067390 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556734-bwv75"
Mar 13 12:14:08 crc kubenswrapper[4632]: I0313 12:14:08.197841 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxgj5\" (UniqueName: \"kubernetes.io/projected/5d0a5571-d345-44cf-ba1a-46b3ef68b1ae-kube-api-access-lxgj5\") pod \"5d0a5571-d345-44cf-ba1a-46b3ef68b1ae\" (UID: \"5d0a5571-d345-44cf-ba1a-46b3ef68b1ae\") "
Mar 13 12:14:08 crc kubenswrapper[4632]: I0313 12:14:08.224071 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d0a5571-d345-44cf-ba1a-46b3ef68b1ae-kube-api-access-lxgj5" (OuterVolumeSpecName: "kube-api-access-lxgj5") pod "5d0a5571-d345-44cf-ba1a-46b3ef68b1ae" (UID: "5d0a5571-d345-44cf-ba1a-46b3ef68b1ae"). InnerVolumeSpecName "kube-api-access-lxgj5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:14:08 crc kubenswrapper[4632]: I0313 12:14:08.300876 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxgj5\" (UniqueName: \"kubernetes.io/projected/5d0a5571-d345-44cf-ba1a-46b3ef68b1ae-kube-api-access-lxgj5\") on node \"crc\" DevicePath \"\""
Mar 13 12:14:08 crc kubenswrapper[4632]: I0313 12:14:08.514183 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556734-bwv75" event={"ID":"5d0a5571-d345-44cf-ba1a-46b3ef68b1ae","Type":"ContainerDied","Data":"b1227e617d866c1a39d8664c23193a023ef1eeed2c0352c7b8799a2a86a8c01c"}
Mar 13 12:14:08 crc kubenswrapper[4632]: I0313 12:14:08.514250 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556734-bwv75"
Mar 13 12:14:08 crc kubenswrapper[4632]: I0313 12:14:08.515488 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1227e617d866c1a39d8664c23193a023ef1eeed2c0352c7b8799a2a86a8c01c"
Mar 13 12:14:08 crc kubenswrapper[4632]: I0313 12:14:08.621318 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556728-lnbz4"]
Mar 13 12:14:08 crc kubenswrapper[4632]: I0313 12:14:08.629555 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556728-lnbz4"]
Mar 13 12:14:10 crc kubenswrapper[4632]: I0313 12:14:10.056630 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4980ff16-68a2-4b11-83d2-9d8ad1fa105c" path="/var/lib/kubelet/pods/4980ff16-68a2-4b11-83d2-9d8ad1fa105c/volumes"
Mar 13 12:14:10 crc kubenswrapper[4632]: I0313 12:14:10.460985 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 12:14:10 crc kubenswrapper[4632]: I0313 12:14:10.461051 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 12:14:17 crc kubenswrapper[4632]: I0313 12:14:17.563795 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82xv7" podUID="356af638-6aee-4a8f-996e-04eda41f3c75" containerName="registry-server" probeResult="failure" output=<
Mar 13 12:14:17 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 12:14:17 crc kubenswrapper[4632]: >
Mar 13 12:14:25 crc kubenswrapper[4632]: I0313 12:14:25.233540 4632 scope.go:117] "RemoveContainer" containerID="f67ad12914918a2b5742053c25b75fbf60ba190fadc812d9c25ad10140a8556c"
Mar 13 12:14:27 crc kubenswrapper[4632]: I0313 12:14:27.543410 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82xv7" podUID="356af638-6aee-4a8f-996e-04eda41f3c75" containerName="registry-server" probeResult="failure" output=<
Mar 13 12:14:27 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 12:14:27 crc kubenswrapper[4632]: >
Mar 13 12:14:37 crc kubenswrapper[4632]: I0313 12:14:37.552437 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82xv7" podUID="356af638-6aee-4a8f-996e-04eda41f3c75" containerName="registry-server" probeResult="failure" output=<
Mar 13 12:14:37 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 12:14:37 crc kubenswrapper[4632]: >
Mar 13 12:14:40 crc kubenswrapper[4632]: I0313 12:14:40.460885 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 12:14:40 crc kubenswrapper[4632]: I0313 12:14:40.462193 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 12:14:40 crc kubenswrapper[4632]: I0313 12:14:40.464180 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb"
Mar 13 12:14:40 crc kubenswrapper[4632]: I0313 12:14:40.468280 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 13 12:14:40 crc kubenswrapper[4632]: I0313 12:14:40.469902 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" gracePeriod=600
Mar 13 12:14:40 crc kubenswrapper[4632]: E0313 12:14:40.603731 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 12:14:40 crc kubenswrapper[4632]: I0313 12:14:40.837020 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" exitCode=0
Mar 13 12:14:40 crc kubenswrapper[4632]: I0313 12:14:40.837113 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9"}
Mar 13 12:14:40 crc kubenswrapper[4632]: I0313 12:14:40.838633 4632 scope.go:117] "RemoveContainer" containerID="ef6a755da94d8b26aaa61b1a356ec9030e87ec1440f6bdf1f6abec8411efbdd9"
Mar 13 12:14:40 crc kubenswrapper[4632]: I0313 12:14:40.840745 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9"
Mar 13 12:14:40 crc kubenswrapper[4632]: E0313 12:14:40.841355 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 12:14:46 crc kubenswrapper[4632]: I0313 12:14:46.567378 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:14:46 crc kubenswrapper[4632]: I0313 12:14:46.622735 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:14:46 crc kubenswrapper[4632]: I0313 12:14:46.815886 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82xv7"]
Mar 13 12:14:47 crc kubenswrapper[4632]: I0313 12:14:47.924359 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-82xv7" podUID="356af638-6aee-4a8f-996e-04eda41f3c75" containerName="registry-server" containerID="cri-o://a4ec22df9491ee29893ecc4c1c955cb46cd5e215eb52c9438822b29458df34d6" gracePeriod=2
Mar 13 12:14:48 crc kubenswrapper[4632]: I0313 12:14:48.926315 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:14:48 crc kubenswrapper[4632]: I0313 12:14:48.934029 4632 generic.go:334] "Generic (PLEG): container finished" podID="356af638-6aee-4a8f-996e-04eda41f3c75" containerID="a4ec22df9491ee29893ecc4c1c955cb46cd5e215eb52c9438822b29458df34d6" exitCode=0
Mar 13 12:14:48 crc kubenswrapper[4632]: I0313 12:14:48.934067 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82xv7"
Mar 13 12:14:48 crc kubenswrapper[4632]: I0313 12:14:48.934091 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82xv7" event={"ID":"356af638-6aee-4a8f-996e-04eda41f3c75","Type":"ContainerDied","Data":"a4ec22df9491ee29893ecc4c1c955cb46cd5e215eb52c9438822b29458df34d6"}
Mar 13 12:14:48 crc kubenswrapper[4632]: I0313 12:14:48.934151 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82xv7" event={"ID":"356af638-6aee-4a8f-996e-04eda41f3c75","Type":"ContainerDied","Data":"b62d309f4dfbdb74ea9d5a23db29bddea476b47f846e66dc66702ff94689f734"}
Mar 13 12:14:48 crc kubenswrapper[4632]: I0313 12:14:48.934171 4632 scope.go:117] "RemoveContainer" containerID="a4ec22df9491ee29893ecc4c1c955cb46cd5e215eb52c9438822b29458df34d6"
Mar 13 12:14:48 crc kubenswrapper[4632]: I0313 12:14:48.962554 4632 scope.go:117] "RemoveContainer" containerID="2c91311b1f55ad95b395eb30b7c8a1dcb54c8e11934e8e37b24fd616a95caf8f"
Mar 13 12:14:48 crc kubenswrapper[4632]: I0313 12:14:48.997828 4632 scope.go:117] "RemoveContainer" containerID="37e01b8e98a8f4df777413f141a70a28ede73377dfa9804760ceab29634fccc3"
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.047222 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/356af638-6aee-4a8f-996e-04eda41f3c75-catalog-content\") pod \"356af638-6aee-4a8f-996e-04eda41f3c75\" (UID: \"356af638-6aee-4a8f-996e-04eda41f3c75\") "
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.047375 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/356af638-6aee-4a8f-996e-04eda41f3c75-utilities\") pod \"356af638-6aee-4a8f-996e-04eda41f3c75\" (UID: \"356af638-6aee-4a8f-996e-04eda41f3c75\") "
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.047464 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f497x\" (UniqueName: \"kubernetes.io/projected/356af638-6aee-4a8f-996e-04eda41f3c75-kube-api-access-f497x\") pod \"356af638-6aee-4a8f-996e-04eda41f3c75\" (UID: \"356af638-6aee-4a8f-996e-04eda41f3c75\") "
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.054324 4632 scope.go:117] "RemoveContainer" containerID="a4ec22df9491ee29893ecc4c1c955cb46cd5e215eb52c9438822b29458df34d6"
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.057725 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/356af638-6aee-4a8f-996e-04eda41f3c75-utilities" (OuterVolumeSpecName: "utilities") pod "356af638-6aee-4a8f-996e-04eda41f3c75" (UID: "356af638-6aee-4a8f-996e-04eda41f3c75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 12:14:49 crc kubenswrapper[4632]: E0313 12:14:49.060025 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4ec22df9491ee29893ecc4c1c955cb46cd5e215eb52c9438822b29458df34d6\": container with ID starting with a4ec22df9491ee29893ecc4c1c955cb46cd5e215eb52c9438822b29458df34d6 not found: ID does not exist" containerID="a4ec22df9491ee29893ecc4c1c955cb46cd5e215eb52c9438822b29458df34d6"
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.060093 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ec22df9491ee29893ecc4c1c955cb46cd5e215eb52c9438822b29458df34d6"} err="failed to get container status \"a4ec22df9491ee29893ecc4c1c955cb46cd5e215eb52c9438822b29458df34d6\": rpc error: code = NotFound desc = could not find container \"a4ec22df9491ee29893ecc4c1c955cb46cd5e215eb52c9438822b29458df34d6\": container with ID starting with a4ec22df9491ee29893ecc4c1c955cb46cd5e215eb52c9438822b29458df34d6 not found: ID does not exist"
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.060119 4632 scope.go:117] "RemoveContainer" containerID="2c91311b1f55ad95b395eb30b7c8a1dcb54c8e11934e8e37b24fd616a95caf8f"
Mar 13 12:14:49 crc kubenswrapper[4632]: E0313 12:14:49.060739 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c91311b1f55ad95b395eb30b7c8a1dcb54c8e11934e8e37b24fd616a95caf8f\": container with ID starting with 2c91311b1f55ad95b395eb30b7c8a1dcb54c8e11934e8e37b24fd616a95caf8f not found: ID does not exist" containerID="2c91311b1f55ad95b395eb30b7c8a1dcb54c8e11934e8e37b24fd616a95caf8f"
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.060777 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c91311b1f55ad95b395eb30b7c8a1dcb54c8e11934e8e37b24fd616a95caf8f"} err="failed to get container status \"2c91311b1f55ad95b395eb30b7c8a1dcb54c8e11934e8e37b24fd616a95caf8f\": rpc error: code = NotFound desc = could not find container \"2c91311b1f55ad95b395eb30b7c8a1dcb54c8e11934e8e37b24fd616a95caf8f\": container with ID starting with 2c91311b1f55ad95b395eb30b7c8a1dcb54c8e11934e8e37b24fd616a95caf8f not found: ID does not exist"
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.060798 4632 scope.go:117] "RemoveContainer" containerID="37e01b8e98a8f4df777413f141a70a28ede73377dfa9804760ceab29634fccc3"
Mar 13 12:14:49 crc kubenswrapper[4632]: E0313 12:14:49.061203 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37e01b8e98a8f4df777413f141a70a28ede73377dfa9804760ceab29634fccc3\": container with ID starting with 37e01b8e98a8f4df777413f141a70a28ede73377dfa9804760ceab29634fccc3 not found: ID does not exist" containerID="37e01b8e98a8f4df777413f141a70a28ede73377dfa9804760ceab29634fccc3"
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.061236 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e01b8e98a8f4df777413f141a70a28ede73377dfa9804760ceab29634fccc3"} err="failed to get container status \"37e01b8e98a8f4df777413f141a70a28ede73377dfa9804760ceab29634fccc3\": rpc error: code = NotFound desc = could not find container \"37e01b8e98a8f4df777413f141a70a28ede73377dfa9804760ceab29634fccc3\": container with ID starting with 37e01b8e98a8f4df777413f141a70a28ede73377dfa9804760ceab29634fccc3 not found: ID does not exist"
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.078804 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/356af638-6aee-4a8f-996e-04eda41f3c75-kube-api-access-f497x" (OuterVolumeSpecName: "kube-api-access-f497x") pod "356af638-6aee-4a8f-996e-04eda41f3c75" (UID: "356af638-6aee-4a8f-996e-04eda41f3c75"). InnerVolumeSpecName "kube-api-access-f497x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.152494 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/356af638-6aee-4a8f-996e-04eda41f3c75-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.152597 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f497x\" (UniqueName: \"kubernetes.io/projected/356af638-6aee-4a8f-996e-04eda41f3c75-kube-api-access-f497x\") on node \"crc\" DevicePath \"\""
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.238152 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/356af638-6aee-4a8f-996e-04eda41f3c75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "356af638-6aee-4a8f-996e-04eda41f3c75" (UID: "356af638-6aee-4a8f-996e-04eda41f3c75"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.254526 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/356af638-6aee-4a8f-996e-04eda41f3c75-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.620001 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82xv7"]
Mar 13 12:14:49 crc kubenswrapper[4632]: I0313 12:14:49.665744 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-82xv7"]
Mar 13 12:14:50 crc kubenswrapper[4632]: I0313 12:14:50.057089 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="356af638-6aee-4a8f-996e-04eda41f3c75" path="/var/lib/kubelet/pods/356af638-6aee-4a8f-996e-04eda41f3c75/volumes"
Mar 13 12:14:53 crc kubenswrapper[4632]: E0313 12:14:53.109727 4632 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"
Mar 13 12:14:54 crc kubenswrapper[4632]: I0313 12:14:54.044221 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9"
Mar 13 12:14:54 crc kubenswrapper[4632]: E0313 12:14:54.044741 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.179644 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj"]
Mar 13 12:15:00 crc kubenswrapper[4632]: E0313 12:15:00.183057 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="356af638-6aee-4a8f-996e-04eda41f3c75" containerName="extract-content"
Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.183087 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="356af638-6aee-4a8f-996e-04eda41f3c75" containerName="extract-content"
Mar 13 12:15:00 crc kubenswrapper[4632]: E0313 12:15:00.183151 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0a5571-d345-44cf-ba1a-46b3ef68b1ae" containerName="oc"
Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.183159 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0a5571-d345-44cf-ba1a-46b3ef68b1ae" containerName="oc"
Mar 13 12:15:00 crc kubenswrapper[4632]: E0313 12:15:00.183184 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="356af638-6aee-4a8f-996e-04eda41f3c75" containerName="extract-utilities"
Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.183192 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="356af638-6aee-4a8f-996e-04eda41f3c75" containerName="extract-utilities"
Mar 13 12:15:00 crc kubenswrapper[4632]: E0313 12:15:00.183202 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="356af638-6aee-4a8f-996e-04eda41f3c75" containerName="registry-server"
Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.183208 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="356af638-6aee-4a8f-996e-04eda41f3c75" containerName="registry-server"
Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.184727 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d0a5571-d345-44cf-ba1a-46b3ef68b1ae" containerName="oc"
Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.184758 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="356af638-6aee-4a8f-996e-04eda41f3c75" containerName="registry-server"
Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.192193 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.207033 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.207039 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.214622 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj"] Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.235529 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-secret-volume\") pod \"collect-profiles-29556735-8mfsj\" (UID: \"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.235693 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxpjx\" (UniqueName: \"kubernetes.io/projected/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-kube-api-access-wxpjx\") pod \"collect-profiles-29556735-8mfsj\" (UID: \"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.235791 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-config-volume\") pod \"collect-profiles-29556735-8mfsj\" (UID: \"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.338183 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxpjx\" (UniqueName: \"kubernetes.io/projected/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-kube-api-access-wxpjx\") pod \"collect-profiles-29556735-8mfsj\" (UID: \"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.338281 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-config-volume\") pod \"collect-profiles-29556735-8mfsj\" (UID: \"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.338323 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-secret-volume\") pod \"collect-profiles-29556735-8mfsj\" (UID: \"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.339262 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-config-volume\") pod 
\"collect-profiles-29556735-8mfsj\" (UID: \"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.350485 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-secret-volume\") pod \"collect-profiles-29556735-8mfsj\" (UID: \"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.371262 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxpjx\" (UniqueName: \"kubernetes.io/projected/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-kube-api-access-wxpjx\") pod \"collect-profiles-29556735-8mfsj\" (UID: \"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" Mar 13 12:15:00 crc kubenswrapper[4632]: I0313 12:15:00.534344 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" Mar 13 12:15:01 crc kubenswrapper[4632]: I0313 12:15:01.164931 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj"] Mar 13 12:15:02 crc kubenswrapper[4632]: I0313 12:15:02.075023 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" event={"ID":"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3","Type":"ContainerStarted","Data":"9b0bafda64d1039901c8fa27c0404ac16cc9d6c17ec7e7294c6e46712544cdb4"} Mar 13 12:15:02 crc kubenswrapper[4632]: I0313 12:15:02.076218 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" event={"ID":"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3","Type":"ContainerStarted","Data":"fb8a8ee86021160eb679abf8d44c17e744720b3d880df97becd5875a7084ef14"} Mar 13 12:15:03 crc kubenswrapper[4632]: I0313 12:15:03.085459 4632 generic.go:334] "Generic (PLEG): container finished" podID="ee86cbb5-0041-46eb-8f35-c159f9fbc3b3" containerID="9b0bafda64d1039901c8fa27c0404ac16cc9d6c17ec7e7294c6e46712544cdb4" exitCode=0 Mar 13 12:15:03 crc kubenswrapper[4632]: I0313 12:15:03.085530 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" event={"ID":"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3","Type":"ContainerDied","Data":"9b0bafda64d1039901c8fa27c0404ac16cc9d6c17ec7e7294c6e46712544cdb4"} Mar 13 12:15:04 crc kubenswrapper[4632]: I0313 12:15:04.583602 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" Mar 13 12:15:04 crc kubenswrapper[4632]: I0313 12:15:04.638138 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-config-volume\") pod \"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3\" (UID: \"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3\") " Mar 13 12:15:04 crc kubenswrapper[4632]: I0313 12:15:04.638181 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxpjx\" (UniqueName: \"kubernetes.io/projected/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-kube-api-access-wxpjx\") pod \"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3\" (UID: \"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3\") " Mar 13 12:15:04 crc kubenswrapper[4632]: I0313 12:15:04.638308 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-secret-volume\") pod \"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3\" (UID: \"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3\") " Mar 13 12:15:04 crc kubenswrapper[4632]: I0313 12:15:04.638920 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-config-volume" (OuterVolumeSpecName: "config-volume") pod "ee86cbb5-0041-46eb-8f35-c159f9fbc3b3" (UID: "ee86cbb5-0041-46eb-8f35-c159f9fbc3b3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:15:04 crc kubenswrapper[4632]: I0313 12:15:04.644441 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ee86cbb5-0041-46eb-8f35-c159f9fbc3b3" (UID: "ee86cbb5-0041-46eb-8f35-c159f9fbc3b3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:15:04 crc kubenswrapper[4632]: I0313 12:15:04.649227 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-kube-api-access-wxpjx" (OuterVolumeSpecName: "kube-api-access-wxpjx") pod "ee86cbb5-0041-46eb-8f35-c159f9fbc3b3" (UID: "ee86cbb5-0041-46eb-8f35-c159f9fbc3b3"). InnerVolumeSpecName "kube-api-access-wxpjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:15:04 crc kubenswrapper[4632]: I0313 12:15:04.740183 4632 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 13 12:15:04 crc kubenswrapper[4632]: I0313 12:15:04.740224 4632 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-config-volume\") on node \"crc\" DevicePath \"\"" Mar 13 12:15:04 crc kubenswrapper[4632]: I0313 12:15:04.740234 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxpjx\" (UniqueName: \"kubernetes.io/projected/ee86cbb5-0041-46eb-8f35-c159f9fbc3b3-kube-api-access-wxpjx\") on node \"crc\" DevicePath \"\"" Mar 13 12:15:05 crc kubenswrapper[4632]: I0313 12:15:05.104359 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" event={"ID":"ee86cbb5-0041-46eb-8f35-c159f9fbc3b3","Type":"ContainerDied","Data":"fb8a8ee86021160eb679abf8d44c17e744720b3d880df97becd5875a7084ef14"} Mar 13 12:15:05 crc kubenswrapper[4632]: I0313 12:15:05.104391 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556735-8mfsj" Mar 13 12:15:05 crc kubenswrapper[4632]: I0313 12:15:05.104403 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb8a8ee86021160eb679abf8d44c17e744720b3d880df97becd5875a7084ef14" Mar 13 12:15:05 crc kubenswrapper[4632]: I0313 12:15:05.678268 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh"] Mar 13 12:15:05 crc kubenswrapper[4632]: I0313 12:15:05.684744 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556690-p82jh"] Mar 13 12:15:06 crc kubenswrapper[4632]: I0313 12:15:06.055898 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dc72a85-cdb5-4b11-9e0a-158d269edf96" path="/var/lib/kubelet/pods/9dc72a85-cdb5-4b11-9e0a-158d269edf96/volumes" Mar 13 12:15:09 crc kubenswrapper[4632]: I0313 12:15:09.045077 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:15:09 crc kubenswrapper[4632]: E0313 12:15:09.045614 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:15:21 crc kubenswrapper[4632]: I0313 12:15:21.050590 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:15:21 crc kubenswrapper[4632]: E0313 12:15:21.051366 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:15:25 crc kubenswrapper[4632]: I0313 12:15:25.484341 4632 scope.go:117] "RemoveContainer" containerID="526c0b7d143109242f29250c0cffd4a40f383eaf78da9d0786f09bf0aa0eccb3" Mar 13 12:15:33 crc kubenswrapper[4632]: I0313 12:15:33.044066 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:15:33 crc kubenswrapper[4632]: E0313 12:15:33.044789 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:15:45 crc kubenswrapper[4632]: I0313 12:15:45.044212 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:15:45 crc kubenswrapper[4632]: E0313 12:15:45.045031 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:15:58 crc kubenswrapper[4632]: I0313 12:15:58.056041 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:15:58 crc kubenswrapper[4632]: E0313 12:15:58.057752 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:16:00 crc kubenswrapper[4632]: I0313 12:16:00.153342 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556736-8cfm9"] Mar 13 12:16:00 crc kubenswrapper[4632]: E0313 12:16:00.154127 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee86cbb5-0041-46eb-8f35-c159f9fbc3b3" containerName="collect-profiles" Mar 13 12:16:00 crc kubenswrapper[4632]: I0313 12:16:00.154141 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee86cbb5-0041-46eb-8f35-c159f9fbc3b3" containerName="collect-profiles" Mar 13 12:16:00 crc kubenswrapper[4632]: I0313 12:16:00.154361 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee86cbb5-0041-46eb-8f35-c159f9fbc3b3" containerName="collect-profiles" Mar 13 12:16:00 crc kubenswrapper[4632]: I0313 12:16:00.154989 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556736-8cfm9" Mar 13 12:16:00 crc kubenswrapper[4632]: I0313 12:16:00.163147 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556736-8cfm9"] Mar 13 12:16:00 crc kubenswrapper[4632]: I0313 12:16:00.165497 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:16:00 crc kubenswrapper[4632]: I0313 12:16:00.166844 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:16:00 crc kubenswrapper[4632]: I0313 12:16:00.167320 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:16:00 crc kubenswrapper[4632]: I0313 12:16:00.258877 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zfh5\" (UniqueName: \"kubernetes.io/projected/d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e-kube-api-access-9zfh5\") pod \"auto-csr-approver-29556736-8cfm9\" (UID: \"d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e\") " pod="openshift-infra/auto-csr-approver-29556736-8cfm9" Mar 13 12:16:00 crc kubenswrapper[4632]: I0313 12:16:00.362249 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zfh5\" (UniqueName: \"kubernetes.io/projected/d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e-kube-api-access-9zfh5\") pod \"auto-csr-approver-29556736-8cfm9\" (UID: \"d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e\") " pod="openshift-infra/auto-csr-approver-29556736-8cfm9" Mar 13 12:16:00 crc kubenswrapper[4632]: I0313 12:16:00.391130 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zfh5\" (UniqueName: \"kubernetes.io/projected/d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e-kube-api-access-9zfh5\") pod \"auto-csr-approver-29556736-8cfm9\" (UID: \"d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e\") " pod="openshift-infra/auto-csr-approver-29556736-8cfm9" Mar 13 12:16:00 crc kubenswrapper[4632]: I0313 12:16:00.478598 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556736-8cfm9" Mar 13 12:16:01 crc kubenswrapper[4632]: I0313 12:16:01.007199 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556736-8cfm9"] Mar 13 12:16:01 crc kubenswrapper[4632]: I0313 12:16:01.664357 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556736-8cfm9" event={"ID":"d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e","Type":"ContainerStarted","Data":"3e1d631a8dd69de33bc2cd49fed751c78109691b52c716f91c7edaac078f7b75"} Mar 13 12:16:03 crc kubenswrapper[4632]: I0313 12:16:03.686136 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556736-8cfm9" event={"ID":"d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e","Type":"ContainerStarted","Data":"d94c02e7a4464d40b765382224dccb8b8b7d7ff087751a6cc11225fceac593a0"} Mar 13 12:16:03 crc kubenswrapper[4632]: I0313 12:16:03.715756 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556736-8cfm9" podStartSLOduration=2.767809073 podStartE2EDuration="3.714873247s" podCreationTimestamp="2026-03-13 12:16:00 +0000 UTC" firstStartedPulling="2026-03-13 12:16:01.018915338 +0000 UTC m=+7935.041445471" lastFinishedPulling="2026-03-13 12:16:01.965979512 +0000 UTC m=+7935.988509645" observedRunningTime="2026-03-13 12:16:03.702473782 +0000 UTC m=+7937.725003915" watchObservedRunningTime="2026-03-13 12:16:03.714873247 +0000 UTC m=+7937.737403390" Mar 13 12:16:05 crc kubenswrapper[4632]: I0313 12:16:05.703373 4632 generic.go:334] "Generic (PLEG): container finished" podID="d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e" containerID="d94c02e7a4464d40b765382224dccb8b8b7d7ff087751a6cc11225fceac593a0" exitCode=0 Mar 13 12:16:05 crc kubenswrapper[4632]: I0313 12:16:05.703452 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556736-8cfm9" event={"ID":"d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e","Type":"ContainerDied","Data":"d94c02e7a4464d40b765382224dccb8b8b7d7ff087751a6cc11225fceac593a0"} Mar 13 12:16:07 crc kubenswrapper[4632]: I0313 12:16:07.575525 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556736-8cfm9" Mar 13 12:16:07 crc kubenswrapper[4632]: I0313 12:16:07.714753 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zfh5\" (UniqueName: \"kubernetes.io/projected/d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e-kube-api-access-9zfh5\") pod \"d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e\" (UID: \"d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e\") " Mar 13 12:16:07 crc kubenswrapper[4632]: I0313 12:16:07.721187 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e-kube-api-access-9zfh5" (OuterVolumeSpecName: "kube-api-access-9zfh5") pod "d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e" (UID: "d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e"). InnerVolumeSpecName "kube-api-access-9zfh5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:16:07 crc kubenswrapper[4632]: I0313 12:16:07.728838 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556736-8cfm9" event={"ID":"d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e","Type":"ContainerDied","Data":"3e1d631a8dd69de33bc2cd49fed751c78109691b52c716f91c7edaac078f7b75"} Mar 13 12:16:07 crc kubenswrapper[4632]: I0313 12:16:07.728879 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e1d631a8dd69de33bc2cd49fed751c78109691b52c716f91c7edaac078f7b75" Mar 13 12:16:07 crc kubenswrapper[4632]: I0313 12:16:07.728965 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556736-8cfm9" Mar 13 12:16:07 crc kubenswrapper[4632]: I0313 12:16:07.818039 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zfh5\" (UniqueName: \"kubernetes.io/projected/d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e-kube-api-access-9zfh5\") on node \"crc\" DevicePath \"\"" Mar 13 12:16:08 crc kubenswrapper[4632]: I0313 12:16:08.662293 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556730-txd8w"] Mar 13 12:16:08 crc kubenswrapper[4632]: I0313 12:16:08.681599 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556730-txd8w"] Mar 13 12:16:10 crc kubenswrapper[4632]: I0313 12:16:10.060804 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4d4ab40-a5ab-4b39-b19e-043766174116" path="/var/lib/kubelet/pods/c4d4ab40-a5ab-4b39-b19e-043766174116/volumes" Mar 13 12:16:13 crc kubenswrapper[4632]: I0313 12:16:13.044604 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:16:13 crc kubenswrapper[4632]: E0313 12:16:13.045480 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:16:25 crc kubenswrapper[4632]: I0313 12:16:25.636015 4632 scope.go:117] "RemoveContainer" containerID="47f2264b87b9b8761c8013ae1aa0f697b6d23277abd018e4664e7e2eed7771a3" Mar 13 12:16:28 crc kubenswrapper[4632]: I0313 12:16:28.054340 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:16:28 crc kubenswrapper[4632]: E0313 12:16:28.055121 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:16:43 crc kubenswrapper[4632]: I0313 12:16:43.044568 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:16:43 crc kubenswrapper[4632]: E0313 12:16:43.045345 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:16:55 crc kubenswrapper[4632]: I0313 12:16:55.044286 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:16:55 crc kubenswrapper[4632]: E0313 12:16:55.044930 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:17:06 crc kubenswrapper[4632]: I0313 12:17:06.045456 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:17:06 crc kubenswrapper[4632]: E0313 12:17:06.046725 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:17:19 crc kubenswrapper[4632]: I0313 12:17:19.044470 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:17:19 crc kubenswrapper[4632]: E0313 12:17:19.045214 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:17:32 crc kubenswrapper[4632]: I0313 12:17:32.044866 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:17:32 crc kubenswrapper[4632]: E0313 12:17:32.045662 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:17:44 crc kubenswrapper[4632]: I0313 12:17:44.044695 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:17:44 crc kubenswrapper[4632]: E0313 12:17:44.045480 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:17:56 crc kubenswrapper[4632]: I0313 12:17:56.044609 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:17:56 crc kubenswrapper[4632]: E0313 12:17:56.045457 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:18:00 crc kubenswrapper[4632]: I0313 12:18:00.152247 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556738-ktc6z"] Mar 13 12:18:00 crc kubenswrapper[4632]: E0313 12:18:00.153419 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e" containerName="oc" Mar 13 12:18:00 crc kubenswrapper[4632]: I0313 12:18:00.153438 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e" containerName="oc" Mar 13 12:18:00 crc kubenswrapper[4632]: I0313 12:18:00.153663 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e" containerName="oc" Mar 13 12:18:00 crc kubenswrapper[4632]: I0313 12:18:00.154533 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556738-ktc6z" Mar 13 12:18:00 crc kubenswrapper[4632]: I0313 12:18:00.161797 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:18:00 crc kubenswrapper[4632]: I0313 12:18:00.161855 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:18:00 crc kubenswrapper[4632]: I0313 12:18:00.161984 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:18:00 crc kubenswrapper[4632]: I0313 12:18:00.163431 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556738-ktc6z"] Mar 13 12:18:00 crc kubenswrapper[4632]: I0313 12:18:00.275285 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmzqs\" (UniqueName: \"kubernetes.io/projected/fd947fd4-4e97-4720-98a3-d345ae5dd3fc-kube-api-access-kmzqs\") pod \"auto-csr-approver-29556738-ktc6z\" (UID: \"fd947fd4-4e97-4720-98a3-d345ae5dd3fc\") " pod="openshift-infra/auto-csr-approver-29556738-ktc6z" Mar 13 12:18:00 crc kubenswrapper[4632]: I0313 12:18:00.377472 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmzqs\" (UniqueName: \"kubernetes.io/projected/fd947fd4-4e97-4720-98a3-d345ae5dd3fc-kube-api-access-kmzqs\") pod \"auto-csr-approver-29556738-ktc6z\" (UID: \"fd947fd4-4e97-4720-98a3-d345ae5dd3fc\") " pod="openshift-infra/auto-csr-approver-29556738-ktc6z" Mar 13 12:18:00 crc kubenswrapper[4632]: I0313 12:18:00.409351 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmzqs\" (UniqueName: 
\"kubernetes.io/projected/fd947fd4-4e97-4720-98a3-d345ae5dd3fc-kube-api-access-kmzqs\") pod \"auto-csr-approver-29556738-ktc6z\" (UID: \"fd947fd4-4e97-4720-98a3-d345ae5dd3fc\") " pod="openshift-infra/auto-csr-approver-29556738-ktc6z" Mar 13 12:18:00 crc kubenswrapper[4632]: I0313 12:18:00.489327 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556738-ktc6z" Mar 13 12:18:00 crc kubenswrapper[4632]: I0313 12:18:00.632867 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7dbf8b9ddc-6p5vh" podUID="03ca050c-63a7-4b37-91fe-fe5c322cca78" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Mar 13 12:18:00 crc kubenswrapper[4632]: I0313 12:18:00.974908 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556738-ktc6z"] Mar 13 12:18:01 crc kubenswrapper[4632]: I0313 12:18:01.243502 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556738-ktc6z" event={"ID":"fd947fd4-4e97-4720-98a3-d345ae5dd3fc","Type":"ContainerStarted","Data":"9498ecf660d2d20f8dd70c874c835638519a77ddd689a248744483cd6ffdf3e6"} Mar 13 12:18:02 crc kubenswrapper[4632]: I0313 12:18:02.261175 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556738-ktc6z" event={"ID":"fd947fd4-4e97-4720-98a3-d345ae5dd3fc","Type":"ContainerStarted","Data":"fbfc844073b7954c305603f6ba9bca1ebae6e886287d4969b865a335340183e5"} Mar 13 12:18:02 crc kubenswrapper[4632]: I0313 12:18:02.290455 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556738-ktc6z" podStartSLOduration=1.352389181 podStartE2EDuration="2.290421807s" podCreationTimestamp="2026-03-13 12:18:00 +0000 UTC" firstStartedPulling="2026-03-13 12:18:00.98414603 +0000 UTC m=+8055.006676173" lastFinishedPulling="2026-03-13 12:18:01.922178656 +0000 UTC m=+8055.944708799" observedRunningTime="2026-03-13 12:18:02.277545811 +0000 UTC m=+8056.300075944" watchObservedRunningTime="2026-03-13 12:18:02.290421807 +0000 UTC m=+8056.312951950" Mar 13 12:18:03 crc kubenswrapper[4632]: I0313 12:18:03.272798 4632 generic.go:334] "Generic (PLEG): container finished" podID="fd947fd4-4e97-4720-98a3-d345ae5dd3fc" containerID="fbfc844073b7954c305603f6ba9bca1ebae6e886287d4969b865a335340183e5" exitCode=0 Mar 13 12:18:03 crc kubenswrapper[4632]: I0313 12:18:03.272860 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556738-ktc6z" event={"ID":"fd947fd4-4e97-4720-98a3-d345ae5dd3fc","Type":"ContainerDied","Data":"fbfc844073b7954c305603f6ba9bca1ebae6e886287d4969b865a335340183e5"} Mar 13 12:18:04 crc kubenswrapper[4632]: I0313 12:18:04.669628 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556738-ktc6z" Mar 13 12:18:04 crc kubenswrapper[4632]: I0313 12:18:04.763178 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmzqs\" (UniqueName: \"kubernetes.io/projected/fd947fd4-4e97-4720-98a3-d345ae5dd3fc-kube-api-access-kmzqs\") pod \"fd947fd4-4e97-4720-98a3-d345ae5dd3fc\" (UID: \"fd947fd4-4e97-4720-98a3-d345ae5dd3fc\") " Mar 13 12:18:04 crc kubenswrapper[4632]: I0313 12:18:04.768763 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd947fd4-4e97-4720-98a3-d345ae5dd3fc-kube-api-access-kmzqs" (OuterVolumeSpecName: "kube-api-access-kmzqs") pod "fd947fd4-4e97-4720-98a3-d345ae5dd3fc" (UID: "fd947fd4-4e97-4720-98a3-d345ae5dd3fc"). InnerVolumeSpecName "kube-api-access-kmzqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:18:04 crc kubenswrapper[4632]: I0313 12:18:04.865474 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmzqs\" (UniqueName: \"kubernetes.io/projected/fd947fd4-4e97-4720-98a3-d345ae5dd3fc-kube-api-access-kmzqs\") on node \"crc\" DevicePath \"\"" Mar 13 12:18:05 crc kubenswrapper[4632]: I0313 12:18:05.293603 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556738-ktc6z" event={"ID":"fd947fd4-4e97-4720-98a3-d345ae5dd3fc","Type":"ContainerDied","Data":"9498ecf660d2d20f8dd70c874c835638519a77ddd689a248744483cd6ffdf3e6"} Mar 13 12:18:05 crc kubenswrapper[4632]: I0313 12:18:05.293643 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9498ecf660d2d20f8dd70c874c835638519a77ddd689a248744483cd6ffdf3e6" Mar 13 12:18:05 crc kubenswrapper[4632]: I0313 12:18:05.293926 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556738-ktc6z" Mar 13 12:18:05 crc kubenswrapper[4632]: I0313 12:18:05.387162 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556732-zffdj"] Mar 13 12:18:05 crc kubenswrapper[4632]: I0313 12:18:05.396214 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556732-zffdj"] Mar 13 12:18:06 crc kubenswrapper[4632]: I0313 12:18:06.057085 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69547021-fae1-4ad6-8745-c327bb079dce" path="/var/lib/kubelet/pods/69547021-fae1-4ad6-8745-c327bb079dce/volumes" Mar 13 12:18:09 crc kubenswrapper[4632]: I0313 12:18:09.044538 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:18:09 crc kubenswrapper[4632]: E0313 12:18:09.045059 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:18:24 crc kubenswrapper[4632]: I0313 12:18:24.045086 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:18:24 crc kubenswrapper[4632]: E0313 12:18:24.045805 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:18:25 crc kubenswrapper[4632]: I0313 12:18:25.855996 4632 scope.go:117] "RemoveContainer" containerID="e6be43aa992650e79f391597a8fccb4cc829615f369f4902627c5d5e92b6ab1e" Mar 13 12:18:35 crc kubenswrapper[4632]: I0313 12:18:35.044816 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:18:35 crc kubenswrapper[4632]: E0313 12:18:35.045639 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:18:50 crc kubenswrapper[4632]: I0313 12:18:50.044701 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:18:50 crc kubenswrapper[4632]: E0313 12:18:50.045400 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 
12:19:05 crc kubenswrapper[4632]: I0313 12:19:05.044858 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:19:05 crc kubenswrapper[4632]: E0313 12:19:05.046312 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:19:17 crc kubenswrapper[4632]: I0313 12:19:17.044530 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:19:17 crc kubenswrapper[4632]: E0313 12:19:17.045348 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:19:29 crc kubenswrapper[4632]: I0313 12:19:29.045287 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:19:29 crc kubenswrapper[4632]: E0313 12:19:29.046055 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:19:44 crc kubenswrapper[4632]: I0313 12:19:44.044659 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:19:45 crc kubenswrapper[4632]: I0313 12:19:45.308788 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"d2f7d92ea8336c364393ccfd7369387047df3a4555b1b7f7be871c5ae3268440"} Mar 13 12:20:00 crc kubenswrapper[4632]: I0313 12:20:00.223233 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556740-fmh5b"] Mar 13 12:20:00 crc kubenswrapper[4632]: E0313 12:20:00.233008 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd947fd4-4e97-4720-98a3-d345ae5dd3fc" containerName="oc" Mar 13 12:20:00 crc kubenswrapper[4632]: I0313 12:20:00.233058 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd947fd4-4e97-4720-98a3-d345ae5dd3fc" containerName="oc" Mar 13 12:20:00 crc kubenswrapper[4632]: I0313 12:20:00.235001 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd947fd4-4e97-4720-98a3-d345ae5dd3fc" containerName="oc" Mar 13 12:20:00 crc kubenswrapper[4632]: I0313 12:20:00.244521 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556740-fmh5b" Mar 13 12:20:00 crc kubenswrapper[4632]: I0313 12:20:00.256111 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:20:00 crc kubenswrapper[4632]: I0313 12:20:00.256114 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:20:00 crc kubenswrapper[4632]: I0313 12:20:00.256124 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:20:00 crc kubenswrapper[4632]: I0313 12:20:00.265595 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh5z2\" (UniqueName: \"kubernetes.io/projected/0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c-kube-api-access-sh5z2\") pod \"auto-csr-approver-29556740-fmh5b\" (UID: \"0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c\") " pod="openshift-infra/auto-csr-approver-29556740-fmh5b" Mar 13 12:20:00 crc kubenswrapper[4632]: I0313 12:20:00.267890 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556740-fmh5b"] Mar 13 12:20:00 crc kubenswrapper[4632]: I0313 12:20:00.367615 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh5z2\" (UniqueName: \"kubernetes.io/projected/0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c-kube-api-access-sh5z2\") pod \"auto-csr-approver-29556740-fmh5b\" (UID: \"0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c\") " pod="openshift-infra/auto-csr-approver-29556740-fmh5b" Mar 13 12:20:00 crc kubenswrapper[4632]: I0313 12:20:00.397061 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh5z2\" (UniqueName: \"kubernetes.io/projected/0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c-kube-api-access-sh5z2\") pod \"auto-csr-approver-29556740-fmh5b\" (UID: \"0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c\") " pod="openshift-infra/auto-csr-approver-29556740-fmh5b" Mar 13 12:20:00 crc kubenswrapper[4632]: I0313 12:20:00.573466 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556740-fmh5b" Mar 13 12:20:01 crc kubenswrapper[4632]: I0313 12:20:01.684793 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556740-fmh5b"] Mar 13 12:20:01 crc kubenswrapper[4632]: W0313 12:20:01.716753 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f86aa5e_9cfc_458f_ae11_71e5e4dcfe9c.slice/crio-59c3307f4abaa31382716abc4886c1fdbb076e0a57ed93e0d5dc37cfa2b758c4 WatchSource:0}: Error finding container 59c3307f4abaa31382716abc4886c1fdbb076e0a57ed93e0d5dc37cfa2b758c4: Status 404 returned error can't find the container with id 59c3307f4abaa31382716abc4886c1fdbb076e0a57ed93e0d5dc37cfa2b758c4 Mar 13 12:20:01 crc kubenswrapper[4632]: I0313 12:20:01.727128 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 12:20:02 crc kubenswrapper[4632]: I0313 12:20:02.498967 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556740-fmh5b" event={"ID":"0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c","Type":"ContainerStarted","Data":"59c3307f4abaa31382716abc4886c1fdbb076e0a57ed93e0d5dc37cfa2b758c4"} Mar 13 12:20:05 crc kubenswrapper[4632]: I0313 12:20:05.539986 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556740-fmh5b" event={"ID":"0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c","Type":"ContainerStarted","Data":"e0e701c935a2c4084fd4e093f0c21450f3afd1589228584f67fcd3cbe4d41395"} Mar 13 12:20:05 crc kubenswrapper[4632]: I0313 12:20:05.574368 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556740-fmh5b" podStartSLOduration=3.447666901 podStartE2EDuration="5.573240782s" podCreationTimestamp="2026-03-13 12:20:00 +0000 UTC" firstStartedPulling="2026-03-13 12:20:01.723118144 +0000 UTC m=+8175.745648277" lastFinishedPulling="2026-03-13 12:20:03.848692025 +0000 UTC m=+8177.871222158" observedRunningTime="2026-03-13 12:20:05.557351271 +0000 UTC m=+8179.579881404" watchObservedRunningTime="2026-03-13 12:20:05.573240782 +0000 UTC m=+8179.595770925" Mar 13 12:20:06 crc kubenswrapper[4632]: I0313 12:20:06.552303 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556740-fmh5b" event={"ID":"0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c","Type":"ContainerDied","Data":"e0e701c935a2c4084fd4e093f0c21450f3afd1589228584f67fcd3cbe4d41395"} Mar 13 12:20:06 crc kubenswrapper[4632]: I0313 12:20:06.554383 4632 generic.go:334] "Generic (PLEG): container finished" podID="0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c" containerID="e0e701c935a2c4084fd4e093f0c21450f3afd1589228584f67fcd3cbe4d41395" exitCode=0 Mar 13 12:20:08 crc kubenswrapper[4632]: I0313 12:20:08.038508 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556740-fmh5b" Mar 13 12:20:08 crc kubenswrapper[4632]: I0313 12:20:08.227075 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh5z2\" (UniqueName: \"kubernetes.io/projected/0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c-kube-api-access-sh5z2\") pod \"0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c\" (UID: \"0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c\") " Mar 13 12:20:08 crc kubenswrapper[4632]: I0313 12:20:08.235698 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c-kube-api-access-sh5z2" (OuterVolumeSpecName: "kube-api-access-sh5z2") pod "0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c" (UID: "0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c"). InnerVolumeSpecName "kube-api-access-sh5z2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:20:08 crc kubenswrapper[4632]: I0313 12:20:08.329609 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh5z2\" (UniqueName: \"kubernetes.io/projected/0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c-kube-api-access-sh5z2\") on node \"crc\" DevicePath \"\"" Mar 13 12:20:08 crc kubenswrapper[4632]: I0313 12:20:08.580299 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556740-fmh5b" event={"ID":"0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c","Type":"ContainerDied","Data":"59c3307f4abaa31382716abc4886c1fdbb076e0a57ed93e0d5dc37cfa2b758c4"} Mar 13 12:20:08 crc kubenswrapper[4632]: I0313 12:20:08.580375 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556740-fmh5b" Mar 13 12:20:08 crc kubenswrapper[4632]: I0313 12:20:08.584290 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59c3307f4abaa31382716abc4886c1fdbb076e0a57ed93e0d5dc37cfa2b758c4" Mar 13 12:20:08 crc kubenswrapper[4632]: I0313 12:20:08.787317 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556734-bwv75"] Mar 13 12:20:08 crc kubenswrapper[4632]: I0313 12:20:08.796745 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556734-bwv75"] Mar 13 12:20:10 crc kubenswrapper[4632]: I0313 12:20:10.056459 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d0a5571-d345-44cf-ba1a-46b3ef68b1ae" path="/var/lib/kubelet/pods/5d0a5571-d345-44cf-ba1a-46b3ef68b1ae/volumes" Mar 13 12:20:25 crc kubenswrapper[4632]: I0313 12:20:25.976970 4632 scope.go:117] "RemoveContainer" containerID="6faf20a0273a34b58a9864d2f460cdaebfb4cad46108ca19c20cf494270e4fe6" Mar 13 12:21:08 crc kubenswrapper[4632]: I0313 12:21:08.553535 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n44pj"] Mar 13 12:21:08 crc kubenswrapper[4632]: E0313 12:21:08.554467 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c" containerName="oc" Mar 13 12:21:08 crc kubenswrapper[4632]: I0313 12:21:08.554482 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c" containerName="oc" Mar 13 12:21:08 crc kubenswrapper[4632]: I0313 12:21:08.554696 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c" containerName="oc" Mar 13 12:21:08 crc kubenswrapper[4632]: I0313 12:21:08.558088 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:08 crc kubenswrapper[4632]: I0313 12:21:08.593718 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n44pj"] Mar 13 12:21:08 crc kubenswrapper[4632]: I0313 12:21:08.636218 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n2rg\" (UniqueName: \"kubernetes.io/projected/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-kube-api-access-7n2rg\") pod \"community-operators-n44pj\" (UID: \"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c\") " pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:08 crc kubenswrapper[4632]: I0313 12:21:08.636318 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-catalog-content\") pod \"community-operators-n44pj\" (UID: \"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c\") " pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:08 crc kubenswrapper[4632]: I0313 12:21:08.636514 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-utilities\") pod \"community-operators-n44pj\" (UID: \"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c\") " pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:08 crc kubenswrapper[4632]: I0313 12:21:08.739066 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n2rg\" (UniqueName: \"kubernetes.io/projected/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-kube-api-access-7n2rg\") pod \"community-operators-n44pj\" (UID: \"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c\") " pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:08 crc kubenswrapper[4632]: I0313 12:21:08.739136 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-catalog-content\") pod \"community-operators-n44pj\" (UID: \"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c\") " pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:08 crc kubenswrapper[4632]: I0313 12:21:08.739184 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-utilities\") pod \"community-operators-n44pj\" (UID: \"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c\") " pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:08 crc kubenswrapper[4632]: I0313 12:21:08.740195 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-utilities\") pod \"community-operators-n44pj\" (UID: \"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c\") " pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:08 crc kubenswrapper[4632]: I0313 12:21:08.741068 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-catalog-content\") pod \"community-operators-n44pj\" (UID: \"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c\") " pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:08 crc kubenswrapper[4632]: I0313 12:21:08.770875 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7n2rg\" (UniqueName: \"kubernetes.io/projected/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-kube-api-access-7n2rg\") pod \"community-operators-n44pj\" (UID: \"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c\") " pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:08 crc kubenswrapper[4632]: I0313 12:21:08.881461 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:09 crc kubenswrapper[4632]: I0313 12:21:09.752992 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n44pj"] Mar 13 12:21:09 crc kubenswrapper[4632]: W0313 12:21:09.768794 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddaf8e5c5_e40a_40d4_b4ea_90347a55cf3c.slice/crio-3c2fbf96cda473030c977a4f1c925da83dc9d613ad78a8d681a07cc04ca99524 WatchSource:0}: Error finding container 3c2fbf96cda473030c977a4f1c925da83dc9d613ad78a8d681a07cc04ca99524: Status 404 returned error can't find the container with id 3c2fbf96cda473030c977a4f1c925da83dc9d613ad78a8d681a07cc04ca99524 Mar 13 12:21:10 crc kubenswrapper[4632]: I0313 12:21:10.257024 4632 generic.go:334] "Generic (PLEG): container finished" podID="daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" containerID="de03b8a6c2df4b8371e57e5a125b76cbf3601ae9167dab0f2c3ad95c58607eae" exitCode=0 Mar 13 12:21:10 crc kubenswrapper[4632]: I0313 12:21:10.257076 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n44pj" event={"ID":"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c","Type":"ContainerDied","Data":"de03b8a6c2df4b8371e57e5a125b76cbf3601ae9167dab0f2c3ad95c58607eae"} Mar 13 12:21:10 crc kubenswrapper[4632]: I0313 12:21:10.257111 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n44pj" event={"ID":"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c","Type":"ContainerStarted","Data":"3c2fbf96cda473030c977a4f1c925da83dc9d613ad78a8d681a07cc04ca99524"} Mar 13 12:21:11 crc kubenswrapper[4632]: I0313 12:21:11.269351 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n44pj" event={"ID":"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c","Type":"ContainerStarted","Data":"81e63ca60a0e0dd00edeac06af45faf13f8dc28e5ffab65ff127a569ef805ba8"} Mar 13 12:21:14 crc kubenswrapper[4632]: I0313 12:21:14.297712 4632 generic.go:334] "Generic (PLEG): container finished" podID="daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" containerID="81e63ca60a0e0dd00edeac06af45faf13f8dc28e5ffab65ff127a569ef805ba8" exitCode=0 Mar 13 12:21:14 crc kubenswrapper[4632]: I0313 12:21:14.297785 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n44pj" event={"ID":"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c","Type":"ContainerDied","Data":"81e63ca60a0e0dd00edeac06af45faf13f8dc28e5ffab65ff127a569ef805ba8"} Mar 13 12:21:15 crc kubenswrapper[4632]: I0313 12:21:15.310919 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n44pj" event={"ID":"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c","Type":"ContainerStarted","Data":"0d61eedbe365424460c70706e237ac566dd5083940c2f53811f5b3ff865d54b0"} Mar 13 12:21:15 crc kubenswrapper[4632]: I0313 12:21:15.348637 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n44pj" 
podStartSLOduration=2.862916245 podStartE2EDuration="7.34861085s" podCreationTimestamp="2026-03-13 12:21:08 +0000 UTC" firstStartedPulling="2026-03-13 12:21:10.259348322 +0000 UTC m=+8244.281878455" lastFinishedPulling="2026-03-13 12:21:14.745042927 +0000 UTC m=+8248.767573060" observedRunningTime="2026-03-13 12:21:15.338923122 +0000 UTC m=+8249.361453285" watchObservedRunningTime="2026-03-13 12:21:15.34861085 +0000 UTC m=+8249.371141013" Mar 13 12:21:18 crc kubenswrapper[4632]: I0313 12:21:18.881730 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:18 crc kubenswrapper[4632]: I0313 12:21:18.882564 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:19 crc kubenswrapper[4632]: I0313 12:21:19.932200 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n44pj" podUID="daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" containerName="registry-server" probeResult="failure" output=< Mar 13 12:21:19 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:21:19 crc kubenswrapper[4632]: > Mar 13 12:21:24 crc kubenswrapper[4632]: I0313 12:21:24.661222 4632 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.013797589s: [/var/lib/containers/storage/overlay/529e5121e17145dccbfe8df9c66facc7a0f7ba465e25d548cd766cbc085df8a5/diff /var/log/pods/openstack_cinder-scheduler-0_d2c1c19b-95a5-4db1-8e54-36fe83704b25/cinder-scheduler/1.log]; will not log again for this container unless duration exceeds 2s Mar 13 12:21:24 crc kubenswrapper[4632]: I0313 12:21:24.678684 4632 trace.go:236] Trace[240102934]: "Calculate volume metrics of run-httpd for pod openstack/ceilometer-0" (13-Mar-2026 12:21:22.650) (total time: 2006ms): Mar 13 12:21:24 crc kubenswrapper[4632]: Trace[240102934]: [2.006686126s] [2.006686126s] END Mar 13 12:21:24 crc kubenswrapper[4632]: I0313 12:21:24.685187 4632 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerName="galera" probeResult="failure" output="command timed out" Mar 13 12:21:24 crc kubenswrapper[4632]: I0313 12:21:24.718745 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="2cb2f546-c8c5-4ec9-aba8-d3782431de10" containerName="galera" probeResult="failure" output="command timed out" Mar 13 12:21:29 crc kubenswrapper[4632]: I0313 12:21:29.941333 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n44pj" podUID="daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" containerName="registry-server" probeResult="failure" output=< Mar 13 12:21:29 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:21:29 crc kubenswrapper[4632]: > Mar 13 12:21:38 crc kubenswrapper[4632]: I0313 12:21:38.952099 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:39 crc kubenswrapper[4632]: I0313 12:21:39.013268 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:39 crc kubenswrapper[4632]: I0313 12:21:39.778254 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n44pj"] Mar 13 12:21:40 crc kubenswrapper[4632]: I0313 
12:21:40.549473 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n44pj" podUID="daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" containerName="registry-server" containerID="cri-o://0d61eedbe365424460c70706e237ac566dd5083940c2f53811f5b3ff865d54b0" gracePeriod=2 Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.319837 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.437138 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-utilities\") pod \"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c\" (UID: \"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c\") " Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.437220 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n2rg\" (UniqueName: \"kubernetes.io/projected/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-kube-api-access-7n2rg\") pod \"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c\" (UID: \"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c\") " Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.437590 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-catalog-content\") pod \"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c\" (UID: \"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c\") " Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.437871 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-utilities" (OuterVolumeSpecName: "utilities") pod "daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" (UID: "daf8e5c5-e40a-40d4-b4ea-90347a55cf3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.438436 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.457236 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-kube-api-access-7n2rg" (OuterVolumeSpecName: "kube-api-access-7n2rg") pod "daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" (UID: "daf8e5c5-e40a-40d4-b4ea-90347a55cf3c"). InnerVolumeSpecName "kube-api-access-7n2rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.499985 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" (UID: "daf8e5c5-e40a-40d4-b4ea-90347a55cf3c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.540562 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n2rg\" (UniqueName: \"kubernetes.io/projected/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-kube-api-access-7n2rg\") on node \"crc\" DevicePath \"\"" Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.540622 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.562993 4632 generic.go:334] "Generic (PLEG): container finished" podID="daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" containerID="0d61eedbe365424460c70706e237ac566dd5083940c2f53811f5b3ff865d54b0" exitCode=0 Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.563056 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n44pj" Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.563065 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n44pj" event={"ID":"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c","Type":"ContainerDied","Data":"0d61eedbe365424460c70706e237ac566dd5083940c2f53811f5b3ff865d54b0"} Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.563107 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n44pj" event={"ID":"daf8e5c5-e40a-40d4-b4ea-90347a55cf3c","Type":"ContainerDied","Data":"3c2fbf96cda473030c977a4f1c925da83dc9d613ad78a8d681a07cc04ca99524"} Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.563162 4632 scope.go:117] "RemoveContainer" containerID="0d61eedbe365424460c70706e237ac566dd5083940c2f53811f5b3ff865d54b0" Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.609618 4632 scope.go:117] "RemoveContainer" containerID="81e63ca60a0e0dd00edeac06af45faf13f8dc28e5ffab65ff127a569ef805ba8" Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.614788 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n44pj"] Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.625391 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n44pj"] Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.640480 4632 scope.go:117] "RemoveContainer" containerID="de03b8a6c2df4b8371e57e5a125b76cbf3601ae9167dab0f2c3ad95c58607eae" Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.677718 4632 scope.go:117] "RemoveContainer" containerID="0d61eedbe365424460c70706e237ac566dd5083940c2f53811f5b3ff865d54b0" Mar 13 12:21:41 crc kubenswrapper[4632]: E0313 12:21:41.684024 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d61eedbe365424460c70706e237ac566dd5083940c2f53811f5b3ff865d54b0\": container with ID starting with 0d61eedbe365424460c70706e237ac566dd5083940c2f53811f5b3ff865d54b0 not found: ID does not exist" containerID="0d61eedbe365424460c70706e237ac566dd5083940c2f53811f5b3ff865d54b0" Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.684078 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d61eedbe365424460c70706e237ac566dd5083940c2f53811f5b3ff865d54b0"} err="failed to get container status 
\"0d61eedbe365424460c70706e237ac566dd5083940c2f53811f5b3ff865d54b0\": rpc error: code = NotFound desc = could not find container \"0d61eedbe365424460c70706e237ac566dd5083940c2f53811f5b3ff865d54b0\": container with ID starting with 0d61eedbe365424460c70706e237ac566dd5083940c2f53811f5b3ff865d54b0 not found: ID does not exist" Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.684111 4632 scope.go:117] "RemoveContainer" containerID="81e63ca60a0e0dd00edeac06af45faf13f8dc28e5ffab65ff127a569ef805ba8" Mar 13 12:21:41 crc kubenswrapper[4632]: E0313 12:21:41.684910 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81e63ca60a0e0dd00edeac06af45faf13f8dc28e5ffab65ff127a569ef805ba8\": container with ID starting with 81e63ca60a0e0dd00edeac06af45faf13f8dc28e5ffab65ff127a569ef805ba8 not found: ID does not exist" containerID="81e63ca60a0e0dd00edeac06af45faf13f8dc28e5ffab65ff127a569ef805ba8" Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.684978 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e63ca60a0e0dd00edeac06af45faf13f8dc28e5ffab65ff127a569ef805ba8"} err="failed to get container status \"81e63ca60a0e0dd00edeac06af45faf13f8dc28e5ffab65ff127a569ef805ba8\": rpc error: code = NotFound desc = could not find container \"81e63ca60a0e0dd00edeac06af45faf13f8dc28e5ffab65ff127a569ef805ba8\": container with ID starting with 81e63ca60a0e0dd00edeac06af45faf13f8dc28e5ffab65ff127a569ef805ba8 not found: ID does not exist" Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.685008 4632 scope.go:117] "RemoveContainer" containerID="de03b8a6c2df4b8371e57e5a125b76cbf3601ae9167dab0f2c3ad95c58607eae" Mar 13 12:21:41 crc kubenswrapper[4632]: E0313 12:21:41.685436 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de03b8a6c2df4b8371e57e5a125b76cbf3601ae9167dab0f2c3ad95c58607eae\": container with ID starting with de03b8a6c2df4b8371e57e5a125b76cbf3601ae9167dab0f2c3ad95c58607eae not found: ID does not exist" containerID="de03b8a6c2df4b8371e57e5a125b76cbf3601ae9167dab0f2c3ad95c58607eae" Mar 13 12:21:41 crc kubenswrapper[4632]: I0313 12:21:41.685457 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de03b8a6c2df4b8371e57e5a125b76cbf3601ae9167dab0f2c3ad95c58607eae"} err="failed to get container status \"de03b8a6c2df4b8371e57e5a125b76cbf3601ae9167dab0f2c3ad95c58607eae\": rpc error: code = NotFound desc = could not find container \"de03b8a6c2df4b8371e57e5a125b76cbf3601ae9167dab0f2c3ad95c58607eae\": container with ID starting with de03b8a6c2df4b8371e57e5a125b76cbf3601ae9167dab0f2c3ad95c58607eae not found: ID does not exist" Mar 13 12:21:42 crc kubenswrapper[4632]: I0313 12:21:42.069005 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" path="/var/lib/kubelet/pods/daf8e5c5-e40a-40d4-b4ea-90347a55cf3c/volumes" Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.584214 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hnpsr"] Mar 13 12:21:47 crc kubenswrapper[4632]: E0313 12:21:47.585634 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" containerName="extract-utilities" Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.585650 4632 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" containerName="extract-utilities" Mar 13 12:21:47 crc kubenswrapper[4632]: E0313 12:21:47.585665 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" containerName="extract-content" Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.585672 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" containerName="extract-content" Mar 13 12:21:47 crc kubenswrapper[4632]: E0313 12:21:47.585684 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" containerName="registry-server" Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.585691 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" containerName="registry-server" Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.585920 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="daf8e5c5-e40a-40d4-b4ea-90347a55cf3c" containerName="registry-server" Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.587270 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.591976 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xdn5\" (UniqueName: \"kubernetes.io/projected/e37141ae-e543-40fa-876c-e7b7d9e1598f-kube-api-access-6xdn5\") pod \"redhat-marketplace-hnpsr\" (UID: \"e37141ae-e543-40fa-876c-e7b7d9e1598f\") " pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.592035 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37141ae-e543-40fa-876c-e7b7d9e1598f-utilities\") pod \"redhat-marketplace-hnpsr\" (UID: \"e37141ae-e543-40fa-876c-e7b7d9e1598f\") " pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.592537 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37141ae-e543-40fa-876c-e7b7d9e1598f-catalog-content\") pod \"redhat-marketplace-hnpsr\" (UID: \"e37141ae-e543-40fa-876c-e7b7d9e1598f\") " pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.601624 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnpsr"] Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.704838 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xdn5\" (UniqueName: \"kubernetes.io/projected/e37141ae-e543-40fa-876c-e7b7d9e1598f-kube-api-access-6xdn5\") pod \"redhat-marketplace-hnpsr\" (UID: \"e37141ae-e543-40fa-876c-e7b7d9e1598f\") " pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.704930 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37141ae-e543-40fa-876c-e7b7d9e1598f-utilities\") pod \"redhat-marketplace-hnpsr\" (UID: \"e37141ae-e543-40fa-876c-e7b7d9e1598f\") " pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.705117 4632 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37141ae-e543-40fa-876c-e7b7d9e1598f-catalog-content\") pod \"redhat-marketplace-hnpsr\" (UID: \"e37141ae-e543-40fa-876c-e7b7d9e1598f\") " pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.706010 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37141ae-e543-40fa-876c-e7b7d9e1598f-utilities\") pod \"redhat-marketplace-hnpsr\" (UID: \"e37141ae-e543-40fa-876c-e7b7d9e1598f\") " pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.706093 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37141ae-e543-40fa-876c-e7b7d9e1598f-catalog-content\") pod \"redhat-marketplace-hnpsr\" (UID: \"e37141ae-e543-40fa-876c-e7b7d9e1598f\") " pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.726121 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xdn5\" (UniqueName: \"kubernetes.io/projected/e37141ae-e543-40fa-876c-e7b7d9e1598f-kube-api-access-6xdn5\") pod \"redhat-marketplace-hnpsr\" (UID: \"e37141ae-e543-40fa-876c-e7b7d9e1598f\") " pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:21:47 crc kubenswrapper[4632]: I0313 12:21:47.946798 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:21:48 crc kubenswrapper[4632]: I0313 12:21:48.656785 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnpsr"] Mar 13 12:21:49 crc kubenswrapper[4632]: I0313 12:21:49.663768 4632 generic.go:334] "Generic (PLEG): container finished" podID="e37141ae-e543-40fa-876c-e7b7d9e1598f" containerID="030c12dcf041fb69ab0b030b3ab62bdd3a22b3a40f28da17a2039141caaeedee" exitCode=0 Mar 13 12:21:49 crc kubenswrapper[4632]: I0313 12:21:49.664004 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnpsr" event={"ID":"e37141ae-e543-40fa-876c-e7b7d9e1598f","Type":"ContainerDied","Data":"030c12dcf041fb69ab0b030b3ab62bdd3a22b3a40f28da17a2039141caaeedee"} Mar 13 12:21:49 crc kubenswrapper[4632]: I0313 12:21:49.665180 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnpsr" event={"ID":"e37141ae-e543-40fa-876c-e7b7d9e1598f","Type":"ContainerStarted","Data":"9caf9004314593e78add6272bfed81d0b4f7fa3929f6184e4cb0ffa2bbc615fc"} Mar 13 12:21:50 crc kubenswrapper[4632]: I0313 12:21:50.675348 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnpsr" event={"ID":"e37141ae-e543-40fa-876c-e7b7d9e1598f","Type":"ContainerStarted","Data":"ead807d406032ccc6de750261a0d5e2b71a645cfaa7f853ed7fc97cd308b03bb"} Mar 13 12:21:52 crc kubenswrapper[4632]: I0313 12:21:52.695849 4632 generic.go:334] "Generic (PLEG): container finished" podID="e37141ae-e543-40fa-876c-e7b7d9e1598f" containerID="ead807d406032ccc6de750261a0d5e2b71a645cfaa7f853ed7fc97cd308b03bb" exitCode=0 Mar 13 12:21:52 crc kubenswrapper[4632]: I0313 12:21:52.696205 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnpsr" 
event={"ID":"e37141ae-e543-40fa-876c-e7b7d9e1598f","Type":"ContainerDied","Data":"ead807d406032ccc6de750261a0d5e2b71a645cfaa7f853ed7fc97cd308b03bb"} Mar 13 12:21:54 crc kubenswrapper[4632]: I0313 12:21:54.714417 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnpsr" event={"ID":"e37141ae-e543-40fa-876c-e7b7d9e1598f","Type":"ContainerStarted","Data":"59dd5b6704b2e6f99f2da9cb87a940f54fb4b4c36738389e043006b38d198693"} Mar 13 12:21:54 crc kubenswrapper[4632]: I0313 12:21:54.736328 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hnpsr" podStartSLOduration=3.85097033 podStartE2EDuration="7.73630813s" podCreationTimestamp="2026-03-13 12:21:47 +0000 UTC" firstStartedPulling="2026-03-13 12:21:49.666367416 +0000 UTC m=+8283.688897549" lastFinishedPulling="2026-03-13 12:21:53.551705216 +0000 UTC m=+8287.574235349" observedRunningTime="2026-03-13 12:21:54.733597304 +0000 UTC m=+8288.756127457" watchObservedRunningTime="2026-03-13 12:21:54.73630813 +0000 UTC m=+8288.758838263" Mar 13 12:21:57 crc kubenswrapper[4632]: I0313 12:21:57.947728 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:21:57 crc kubenswrapper[4632]: I0313 12:21:57.948117 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:21:58 crc kubenswrapper[4632]: I0313 12:21:58.995773 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hnpsr" podUID="e37141ae-e543-40fa-876c-e7b7d9e1598f" containerName="registry-server" probeResult="failure" output=< Mar 13 12:21:58 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:21:58 crc kubenswrapper[4632]: > Mar 13 12:22:00 crc kubenswrapper[4632]: I0313 12:22:00.243355 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556742-2lkrj"] Mar 13 12:22:00 crc kubenswrapper[4632]: I0313 12:22:00.245010 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556742-2lkrj" Mar 13 12:22:00 crc kubenswrapper[4632]: I0313 12:22:00.253392 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfptl\" (UniqueName: \"kubernetes.io/projected/a5121453-468a-432e-b110-fd0cd60ed92b-kube-api-access-vfptl\") pod \"auto-csr-approver-29556742-2lkrj\" (UID: \"a5121453-468a-432e-b110-fd0cd60ed92b\") " pod="openshift-infra/auto-csr-approver-29556742-2lkrj" Mar 13 12:22:00 crc kubenswrapper[4632]: I0313 12:22:00.256051 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556742-2lkrj"] Mar 13 12:22:00 crc kubenswrapper[4632]: I0313 12:22:00.265190 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:22:00 crc kubenswrapper[4632]: I0313 12:22:00.265288 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:22:00 crc kubenswrapper[4632]: I0313 12:22:00.267128 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:22:00 crc kubenswrapper[4632]: I0313 12:22:00.355246 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfptl\" (UniqueName: \"kubernetes.io/projected/a5121453-468a-432e-b110-fd0cd60ed92b-kube-api-access-vfptl\") pod \"auto-csr-approver-29556742-2lkrj\" (UID: \"a5121453-468a-432e-b110-fd0cd60ed92b\") " pod="openshift-infra/auto-csr-approver-29556742-2lkrj" Mar 13 12:22:00 crc kubenswrapper[4632]: I0313 12:22:00.386729 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfptl\" (UniqueName: \"kubernetes.io/projected/a5121453-468a-432e-b110-fd0cd60ed92b-kube-api-access-vfptl\") pod \"auto-csr-approver-29556742-2lkrj\" (UID: \"a5121453-468a-432e-b110-fd0cd60ed92b\") " pod="openshift-infra/auto-csr-approver-29556742-2lkrj" Mar 13 12:22:00 crc kubenswrapper[4632]: I0313 12:22:00.583433 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556742-2lkrj" Mar 13 12:22:01 crc kubenswrapper[4632]: I0313 12:22:01.175093 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556742-2lkrj"] Mar 13 12:22:01 crc kubenswrapper[4632]: I0313 12:22:01.774263 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556742-2lkrj" event={"ID":"a5121453-468a-432e-b110-fd0cd60ed92b","Type":"ContainerStarted","Data":"041768bd180226590294d48b8e490a5a4f923f76232d5da42f6d9ec2ac219f63"} Mar 13 12:22:03 crc kubenswrapper[4632]: I0313 12:22:03.798411 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556742-2lkrj" event={"ID":"a5121453-468a-432e-b110-fd0cd60ed92b","Type":"ContainerStarted","Data":"77b11f376c487493e748aed75424d32e4d98e9395efe94071abc3a7b13ebc06d"} Mar 13 12:22:03 crc kubenswrapper[4632]: I0313 12:22:03.824553 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556742-2lkrj" podStartSLOduration=2.764748743 podStartE2EDuration="3.824530915s" podCreationTimestamp="2026-03-13 12:22:00 +0000 UTC" firstStartedPulling="2026-03-13 12:22:01.182132759 +0000 UTC m=+8295.204662892" lastFinishedPulling="2026-03-13 12:22:02.241914931 +0000 UTC m=+8296.264445064" observedRunningTime="2026-03-13 12:22:03.813909195 +0000 UTC m=+8297.836439328" watchObservedRunningTime="2026-03-13 12:22:03.824530915 +0000 UTC m=+8297.847061058" Mar 13 12:22:04 crc kubenswrapper[4632]: I0313 12:22:04.808922 4632 generic.go:334] "Generic (PLEG): container finished" podID="a5121453-468a-432e-b110-fd0cd60ed92b" containerID="77b11f376c487493e748aed75424d32e4d98e9395efe94071abc3a7b13ebc06d" exitCode=0 Mar 13 12:22:04 crc kubenswrapper[4632]: I0313 12:22:04.808990 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556742-2lkrj" event={"ID":"a5121453-468a-432e-b110-fd0cd60ed92b","Type":"ContainerDied","Data":"77b11f376c487493e748aed75424d32e4d98e9395efe94071abc3a7b13ebc06d"} Mar 13 12:22:06 crc kubenswrapper[4632]: I0313 12:22:06.407921 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556742-2lkrj" Mar 13 12:22:06 crc kubenswrapper[4632]: I0313 12:22:06.576698 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfptl\" (UniqueName: \"kubernetes.io/projected/a5121453-468a-432e-b110-fd0cd60ed92b-kube-api-access-vfptl\") pod \"a5121453-468a-432e-b110-fd0cd60ed92b\" (UID: \"a5121453-468a-432e-b110-fd0cd60ed92b\") " Mar 13 12:22:06 crc kubenswrapper[4632]: I0313 12:22:06.588253 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5121453-468a-432e-b110-fd0cd60ed92b-kube-api-access-vfptl" (OuterVolumeSpecName: "kube-api-access-vfptl") pod "a5121453-468a-432e-b110-fd0cd60ed92b" (UID: "a5121453-468a-432e-b110-fd0cd60ed92b"). InnerVolumeSpecName "kube-api-access-vfptl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:22:06 crc kubenswrapper[4632]: I0313 12:22:06.679659 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfptl\" (UniqueName: \"kubernetes.io/projected/a5121453-468a-432e-b110-fd0cd60ed92b-kube-api-access-vfptl\") on node \"crc\" DevicePath \"\"" Mar 13 12:22:06 crc kubenswrapper[4632]: I0313 12:22:06.828046 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556742-2lkrj" event={"ID":"a5121453-468a-432e-b110-fd0cd60ed92b","Type":"ContainerDied","Data":"041768bd180226590294d48b8e490a5a4f923f76232d5da42f6d9ec2ac219f63"} Mar 13 12:22:06 crc kubenswrapper[4632]: I0313 12:22:06.828425 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556742-2lkrj" Mar 13 12:22:06 crc kubenswrapper[4632]: I0313 12:22:06.828095 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="041768bd180226590294d48b8e490a5a4f923f76232d5da42f6d9ec2ac219f63" Mar 13 12:22:06 crc kubenswrapper[4632]: I0313 12:22:06.946382 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556736-8cfm9"] Mar 13 12:22:06 crc kubenswrapper[4632]: I0313 12:22:06.955599 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556736-8cfm9"] Mar 13 12:22:08 crc kubenswrapper[4632]: I0313 12:22:08.015283 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:22:08 crc kubenswrapper[4632]: I0313 12:22:08.091658 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e" path="/var/lib/kubelet/pods/d3f5dea9-fbf6-48ea-89ef-cf2c15d1689e/volumes" Mar 13 12:22:08 crc kubenswrapper[4632]: I0313 12:22:08.094312 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:22:08 crc kubenswrapper[4632]: I0313 12:22:08.257017 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnpsr"] Mar 13 12:22:09 crc kubenswrapper[4632]: I0313 12:22:09.862373 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hnpsr" podUID="e37141ae-e543-40fa-876c-e7b7d9e1598f" containerName="registry-server" containerID="cri-o://59dd5b6704b2e6f99f2da9cb87a940f54fb4b4c36738389e043006b38d198693" gracePeriod=2 Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.423465 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.458982 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xdn5\" (UniqueName: \"kubernetes.io/projected/e37141ae-e543-40fa-876c-e7b7d9e1598f-kube-api-access-6xdn5\") pod \"e37141ae-e543-40fa-876c-e7b7d9e1598f\" (UID: \"e37141ae-e543-40fa-876c-e7b7d9e1598f\") " Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.459303 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37141ae-e543-40fa-876c-e7b7d9e1598f-utilities\") pod \"e37141ae-e543-40fa-876c-e7b7d9e1598f\" (UID: \"e37141ae-e543-40fa-876c-e7b7d9e1598f\") " Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.459337 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37141ae-e543-40fa-876c-e7b7d9e1598f-catalog-content\") pod \"e37141ae-e543-40fa-876c-e7b7d9e1598f\" (UID: \"e37141ae-e543-40fa-876c-e7b7d9e1598f\") " Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.460106 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e37141ae-e543-40fa-876c-e7b7d9e1598f-utilities" (OuterVolumeSpecName: "utilities") pod "e37141ae-e543-40fa-876c-e7b7d9e1598f" (UID: "e37141ae-e543-40fa-876c-e7b7d9e1598f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.460545 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.464152 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.467468 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e37141ae-e543-40fa-876c-e7b7d9e1598f-kube-api-access-6xdn5" (OuterVolumeSpecName: "kube-api-access-6xdn5") pod "e37141ae-e543-40fa-876c-e7b7d9e1598f" (UID: "e37141ae-e543-40fa-876c-e7b7d9e1598f"). InnerVolumeSpecName "kube-api-access-6xdn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.502508 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e37141ae-e543-40fa-876c-e7b7d9e1598f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e37141ae-e543-40fa-876c-e7b7d9e1598f" (UID: "e37141ae-e543-40fa-876c-e7b7d9e1598f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.562329 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xdn5\" (UniqueName: \"kubernetes.io/projected/e37141ae-e543-40fa-876c-e7b7d9e1598f-kube-api-access-6xdn5\") on node \"crc\" DevicePath \"\"" Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.562384 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37141ae-e543-40fa-876c-e7b7d9e1598f-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.562394 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37141ae-e543-40fa-876c-e7b7d9e1598f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.875474 4632 generic.go:334] "Generic (PLEG): container finished" podID="e37141ae-e543-40fa-876c-e7b7d9e1598f" containerID="59dd5b6704b2e6f99f2da9cb87a940f54fb4b4c36738389e043006b38d198693" exitCode=0 Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.875572 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnpsr" Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.875561 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnpsr" event={"ID":"e37141ae-e543-40fa-876c-e7b7d9e1598f","Type":"ContainerDied","Data":"59dd5b6704b2e6f99f2da9cb87a940f54fb4b4c36738389e043006b38d198693"} Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.875786 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnpsr" event={"ID":"e37141ae-e543-40fa-876c-e7b7d9e1598f","Type":"ContainerDied","Data":"9caf9004314593e78add6272bfed81d0b4f7fa3929f6184e4cb0ffa2bbc615fc"} Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.875819 4632 scope.go:117] "RemoveContainer" containerID="59dd5b6704b2e6f99f2da9cb87a940f54fb4b4c36738389e043006b38d198693" Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.930713 4632 scope.go:117] "RemoveContainer" containerID="ead807d406032ccc6de750261a0d5e2b71a645cfaa7f853ed7fc97cd308b03bb" Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.931779 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnpsr"] Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.959753 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnpsr"] Mar 13 12:22:10 crc kubenswrapper[4632]: I0313 12:22:10.977586 4632 scope.go:117] "RemoveContainer" containerID="030c12dcf041fb69ab0b030b3ab62bdd3a22b3a40f28da17a2039141caaeedee" Mar 13 12:22:11 crc kubenswrapper[4632]: I0313 12:22:11.030282 4632 scope.go:117] "RemoveContainer" containerID="59dd5b6704b2e6f99f2da9cb87a940f54fb4b4c36738389e043006b38d198693" Mar 13 12:22:11 crc kubenswrapper[4632]: E0313 12:22:11.030750 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59dd5b6704b2e6f99f2da9cb87a940f54fb4b4c36738389e043006b38d198693\": container with ID starting with 59dd5b6704b2e6f99f2da9cb87a940f54fb4b4c36738389e043006b38d198693 not found: ID does not exist" containerID="59dd5b6704b2e6f99f2da9cb87a940f54fb4b4c36738389e043006b38d198693" Mar 13 12:22:11 crc kubenswrapper[4632]: I0313 12:22:11.030789 4632 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59dd5b6704b2e6f99f2da9cb87a940f54fb4b4c36738389e043006b38d198693"} err="failed to get container status \"59dd5b6704b2e6f99f2da9cb87a940f54fb4b4c36738389e043006b38d198693\": rpc error: code = NotFound desc = could not find container \"59dd5b6704b2e6f99f2da9cb87a940f54fb4b4c36738389e043006b38d198693\": container with ID starting with 59dd5b6704b2e6f99f2da9cb87a940f54fb4b4c36738389e043006b38d198693 not found: ID does not exist" Mar 13 12:22:11 crc kubenswrapper[4632]: I0313 12:22:11.030816 4632 scope.go:117] "RemoveContainer" containerID="ead807d406032ccc6de750261a0d5e2b71a645cfaa7f853ed7fc97cd308b03bb" Mar 13 12:22:11 crc kubenswrapper[4632]: E0313 12:22:11.031085 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ead807d406032ccc6de750261a0d5e2b71a645cfaa7f853ed7fc97cd308b03bb\": container with ID starting with ead807d406032ccc6de750261a0d5e2b71a645cfaa7f853ed7fc97cd308b03bb not found: ID does not exist" containerID="ead807d406032ccc6de750261a0d5e2b71a645cfaa7f853ed7fc97cd308b03bb" Mar 13 12:22:11 crc kubenswrapper[4632]: I0313 12:22:11.031116 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ead807d406032ccc6de750261a0d5e2b71a645cfaa7f853ed7fc97cd308b03bb"} err="failed to get container status \"ead807d406032ccc6de750261a0d5e2b71a645cfaa7f853ed7fc97cd308b03bb\": rpc error: code = NotFound desc = could not find container \"ead807d406032ccc6de750261a0d5e2b71a645cfaa7f853ed7fc97cd308b03bb\": container with ID starting with ead807d406032ccc6de750261a0d5e2b71a645cfaa7f853ed7fc97cd308b03bb not found: ID does not exist" Mar 13 12:22:11 crc kubenswrapper[4632]: I0313 12:22:11.031134 4632 scope.go:117] "RemoveContainer" containerID="030c12dcf041fb69ab0b030b3ab62bdd3a22b3a40f28da17a2039141caaeedee" Mar 13 12:22:11 crc kubenswrapper[4632]: E0313 12:22:11.031574 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"030c12dcf041fb69ab0b030b3ab62bdd3a22b3a40f28da17a2039141caaeedee\": container with ID starting with 030c12dcf041fb69ab0b030b3ab62bdd3a22b3a40f28da17a2039141caaeedee not found: ID does not exist" containerID="030c12dcf041fb69ab0b030b3ab62bdd3a22b3a40f28da17a2039141caaeedee" Mar 13 12:22:11 crc kubenswrapper[4632]: I0313 12:22:11.031604 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"030c12dcf041fb69ab0b030b3ab62bdd3a22b3a40f28da17a2039141caaeedee"} err="failed to get container status \"030c12dcf041fb69ab0b030b3ab62bdd3a22b3a40f28da17a2039141caaeedee\": rpc error: code = NotFound desc = could not find container \"030c12dcf041fb69ab0b030b3ab62bdd3a22b3a40f28da17a2039141caaeedee\": container with ID starting with 030c12dcf041fb69ab0b030b3ab62bdd3a22b3a40f28da17a2039141caaeedee not found: ID does not exist" Mar 13 12:22:12 crc kubenswrapper[4632]: I0313 12:22:12.061190 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e37141ae-e543-40fa-876c-e7b7d9e1598f" path="/var/lib/kubelet/pods/e37141ae-e543-40fa-876c-e7b7d9e1598f/volumes" Mar 13 12:22:26 crc kubenswrapper[4632]: I0313 12:22:26.349766 4632 scope.go:117] "RemoveContainer" containerID="d94c02e7a4464d40b765382224dccb8b8b7d7ff087751a6cc11225fceac593a0" Mar 13 12:22:40 crc kubenswrapper[4632]: I0313 12:22:40.461438 4632 patch_prober.go:28] interesting 
pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:22:40 crc kubenswrapper[4632]: I0313 12:22:40.462849 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:22:46 crc kubenswrapper[4632]: I0313 12:22:46.747696 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6xp6q"] Mar 13 12:22:46 crc kubenswrapper[4632]: E0313 12:22:46.748495 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37141ae-e543-40fa-876c-e7b7d9e1598f" containerName="extract-content" Mar 13 12:22:46 crc kubenswrapper[4632]: I0313 12:22:46.748507 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37141ae-e543-40fa-876c-e7b7d9e1598f" containerName="extract-content" Mar 13 12:22:46 crc kubenswrapper[4632]: E0313 12:22:46.748535 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37141ae-e543-40fa-876c-e7b7d9e1598f" containerName="extract-utilities" Mar 13 12:22:46 crc kubenswrapper[4632]: I0313 12:22:46.748541 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37141ae-e543-40fa-876c-e7b7d9e1598f" containerName="extract-utilities" Mar 13 12:22:46 crc kubenswrapper[4632]: E0313 12:22:46.748564 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5121453-468a-432e-b110-fd0cd60ed92b" containerName="oc" Mar 13 12:22:46 crc kubenswrapper[4632]: I0313 12:22:46.748570 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5121453-468a-432e-b110-fd0cd60ed92b" containerName="oc" Mar 13 12:22:46 crc kubenswrapper[4632]: E0313 12:22:46.748584 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37141ae-e543-40fa-876c-e7b7d9e1598f" containerName="registry-server" Mar 13 12:22:46 crc kubenswrapper[4632]: I0313 12:22:46.748590 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37141ae-e543-40fa-876c-e7b7d9e1598f" containerName="registry-server" Mar 13 12:22:46 crc kubenswrapper[4632]: I0313 12:22:46.748752 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37141ae-e543-40fa-876c-e7b7d9e1598f" containerName="registry-server" Mar 13 12:22:46 crc kubenswrapper[4632]: I0313 12:22:46.748771 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5121453-468a-432e-b110-fd0cd60ed92b" containerName="oc" Mar 13 12:22:46 crc kubenswrapper[4632]: I0313 12:22:46.750099 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:22:46 crc kubenswrapper[4632]: I0313 12:22:46.766360 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xp6q"] Mar 13 12:22:46 crc kubenswrapper[4632]: I0313 12:22:46.906838 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/006513e3-67e6-4969-82fb-37e5ac8eaf4a-utilities\") pod \"certified-operators-6xp6q\" (UID: \"006513e3-67e6-4969-82fb-37e5ac8eaf4a\") " pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:22:46 crc kubenswrapper[4632]: I0313 12:22:46.906920 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/006513e3-67e6-4969-82fb-37e5ac8eaf4a-catalog-content\") pod \"certified-operators-6xp6q\" (UID: \"006513e3-67e6-4969-82fb-37e5ac8eaf4a\") " pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:22:46 crc kubenswrapper[4632]: I0313 12:22:46.907203 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv4gx\" (UniqueName: \"kubernetes.io/projected/006513e3-67e6-4969-82fb-37e5ac8eaf4a-kube-api-access-sv4gx\") pod \"certified-operators-6xp6q\" (UID: \"006513e3-67e6-4969-82fb-37e5ac8eaf4a\") " pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:22:47 crc kubenswrapper[4632]: I0313 12:22:47.008917 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/006513e3-67e6-4969-82fb-37e5ac8eaf4a-catalog-content\") pod \"certified-operators-6xp6q\" (UID: \"006513e3-67e6-4969-82fb-37e5ac8eaf4a\") " pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:22:47 crc kubenswrapper[4632]: I0313 12:22:47.009140 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv4gx\" (UniqueName: \"kubernetes.io/projected/006513e3-67e6-4969-82fb-37e5ac8eaf4a-kube-api-access-sv4gx\") pod \"certified-operators-6xp6q\" (UID: \"006513e3-67e6-4969-82fb-37e5ac8eaf4a\") " pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:22:47 crc kubenswrapper[4632]: I0313 12:22:47.009187 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/006513e3-67e6-4969-82fb-37e5ac8eaf4a-utilities\") pod \"certified-operators-6xp6q\" (UID: \"006513e3-67e6-4969-82fb-37e5ac8eaf4a\") " pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:22:47 crc kubenswrapper[4632]: I0313 12:22:47.009530 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/006513e3-67e6-4969-82fb-37e5ac8eaf4a-catalog-content\") pod \"certified-operators-6xp6q\" (UID: \"006513e3-67e6-4969-82fb-37e5ac8eaf4a\") " pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:22:47 crc kubenswrapper[4632]: I0313 12:22:47.009566 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/006513e3-67e6-4969-82fb-37e5ac8eaf4a-utilities\") pod \"certified-operators-6xp6q\" (UID: \"006513e3-67e6-4969-82fb-37e5ac8eaf4a\") " pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:22:47 crc kubenswrapper[4632]: I0313 12:22:47.035345 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sv4gx\" (UniqueName: \"kubernetes.io/projected/006513e3-67e6-4969-82fb-37e5ac8eaf4a-kube-api-access-sv4gx\") pod \"certified-operators-6xp6q\" (UID: \"006513e3-67e6-4969-82fb-37e5ac8eaf4a\") " pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:22:47 crc kubenswrapper[4632]: I0313 12:22:47.079818 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:22:47 crc kubenswrapper[4632]: I0313 12:22:47.674776 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xp6q"] Mar 13 12:22:48 crc kubenswrapper[4632]: I0313 12:22:48.295859 4632 generic.go:334] "Generic (PLEG): container finished" podID="006513e3-67e6-4969-82fb-37e5ac8eaf4a" containerID="5e9374890eb775e4c1bb97be5e3b2ff1cac17016a5c436abe4d78eea14dc7c46" exitCode=0 Mar 13 12:22:48 crc kubenswrapper[4632]: I0313 12:22:48.296128 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xp6q" event={"ID":"006513e3-67e6-4969-82fb-37e5ac8eaf4a","Type":"ContainerDied","Data":"5e9374890eb775e4c1bb97be5e3b2ff1cac17016a5c436abe4d78eea14dc7c46"} Mar 13 12:22:48 crc kubenswrapper[4632]: I0313 12:22:48.297094 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xp6q" event={"ID":"006513e3-67e6-4969-82fb-37e5ac8eaf4a","Type":"ContainerStarted","Data":"5567f226f31b002032305cc353d86ae94e1489dfc0d354345a4756c71df1ee5c"} Mar 13 12:22:50 crc kubenswrapper[4632]: I0313 12:22:50.316337 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xp6q" event={"ID":"006513e3-67e6-4969-82fb-37e5ac8eaf4a","Type":"ContainerStarted","Data":"45e1f9656cfbf9b22004031ee39b6063266926993e09e1b5fb34fc7a8ec2578a"} Mar 13 12:22:53 crc kubenswrapper[4632]: I0313 12:22:53.353533 4632 generic.go:334] "Generic (PLEG): container finished" podID="006513e3-67e6-4969-82fb-37e5ac8eaf4a" containerID="45e1f9656cfbf9b22004031ee39b6063266926993e09e1b5fb34fc7a8ec2578a" exitCode=0 Mar 13 12:22:53 crc kubenswrapper[4632]: I0313 12:22:53.353626 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xp6q" event={"ID":"006513e3-67e6-4969-82fb-37e5ac8eaf4a","Type":"ContainerDied","Data":"45e1f9656cfbf9b22004031ee39b6063266926993e09e1b5fb34fc7a8ec2578a"} Mar 13 12:22:54 crc kubenswrapper[4632]: I0313 12:22:54.375604 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xp6q" event={"ID":"006513e3-67e6-4969-82fb-37e5ac8eaf4a","Type":"ContainerStarted","Data":"99ba058ba824509ca07bb72d4c4f8e7b66dbe556f20ebb5fe9b5fbc3c4237556"} Mar 13 12:22:54 crc kubenswrapper[4632]: I0313 12:22:54.401302 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6xp6q" podStartSLOduration=2.931517953 podStartE2EDuration="8.401281921s" podCreationTimestamp="2026-03-13 12:22:46 +0000 UTC" firstStartedPulling="2026-03-13 12:22:48.298341294 +0000 UTC m=+8342.320871427" lastFinishedPulling="2026-03-13 12:22:53.768105262 +0000 UTC m=+8347.790635395" observedRunningTime="2026-03-13 12:22:54.39836616 +0000 UTC m=+8348.420896293" watchObservedRunningTime="2026-03-13 12:22:54.401281921 +0000 UTC m=+8348.423812054" Mar 13 12:22:57 crc kubenswrapper[4632]: I0313 12:22:57.080367 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:22:57 crc kubenswrapper[4632]: I0313 12:22:57.080806 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:22:58 crc kubenswrapper[4632]: I0313 12:22:58.139895 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6xp6q" podUID="006513e3-67e6-4969-82fb-37e5ac8eaf4a" containerName="registry-server" probeResult="failure" output=< Mar 13 12:22:58 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:22:58 crc kubenswrapper[4632]: > Mar 13 12:23:07 crc kubenswrapper[4632]: I0313 12:23:07.146154 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:23:07 crc kubenswrapper[4632]: I0313 12:23:07.206355 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:23:07 crc kubenswrapper[4632]: I0313 12:23:07.386349 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xp6q"] Mar 13 12:23:08 crc kubenswrapper[4632]: I0313 12:23:08.501274 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6xp6q" podUID="006513e3-67e6-4969-82fb-37e5ac8eaf4a" containerName="registry-server" containerID="cri-o://99ba058ba824509ca07bb72d4c4f8e7b66dbe556f20ebb5fe9b5fbc3c4237556" gracePeriod=2 Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.112212 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.267118 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv4gx\" (UniqueName: \"kubernetes.io/projected/006513e3-67e6-4969-82fb-37e5ac8eaf4a-kube-api-access-sv4gx\") pod \"006513e3-67e6-4969-82fb-37e5ac8eaf4a\" (UID: \"006513e3-67e6-4969-82fb-37e5ac8eaf4a\") " Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.267318 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/006513e3-67e6-4969-82fb-37e5ac8eaf4a-utilities\") pod \"006513e3-67e6-4969-82fb-37e5ac8eaf4a\" (UID: \"006513e3-67e6-4969-82fb-37e5ac8eaf4a\") " Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.267502 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/006513e3-67e6-4969-82fb-37e5ac8eaf4a-catalog-content\") pod \"006513e3-67e6-4969-82fb-37e5ac8eaf4a\" (UID: \"006513e3-67e6-4969-82fb-37e5ac8eaf4a\") " Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.268158 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/006513e3-67e6-4969-82fb-37e5ac8eaf4a-utilities" (OuterVolumeSpecName: "utilities") pod "006513e3-67e6-4969-82fb-37e5ac8eaf4a" (UID: "006513e3-67e6-4969-82fb-37e5ac8eaf4a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.276286 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/006513e3-67e6-4969-82fb-37e5ac8eaf4a-kube-api-access-sv4gx" (OuterVolumeSpecName: "kube-api-access-sv4gx") pod "006513e3-67e6-4969-82fb-37e5ac8eaf4a" (UID: "006513e3-67e6-4969-82fb-37e5ac8eaf4a"). InnerVolumeSpecName "kube-api-access-sv4gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.325961 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/006513e3-67e6-4969-82fb-37e5ac8eaf4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "006513e3-67e6-4969-82fb-37e5ac8eaf4a" (UID: "006513e3-67e6-4969-82fb-37e5ac8eaf4a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.370202 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/006513e3-67e6-4969-82fb-37e5ac8eaf4a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.370241 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv4gx\" (UniqueName: \"kubernetes.io/projected/006513e3-67e6-4969-82fb-37e5ac8eaf4a-kube-api-access-sv4gx\") on node \"crc\" DevicePath \"\"" Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.370253 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/006513e3-67e6-4969-82fb-37e5ac8eaf4a-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.511768 4632 generic.go:334] "Generic (PLEG): container finished" podID="006513e3-67e6-4969-82fb-37e5ac8eaf4a" containerID="99ba058ba824509ca07bb72d4c4f8e7b66dbe556f20ebb5fe9b5fbc3c4237556" exitCode=0 Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.511817 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xp6q" event={"ID":"006513e3-67e6-4969-82fb-37e5ac8eaf4a","Type":"ContainerDied","Data":"99ba058ba824509ca07bb72d4c4f8e7b66dbe556f20ebb5fe9b5fbc3c4237556"} Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.511845 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xp6q" event={"ID":"006513e3-67e6-4969-82fb-37e5ac8eaf4a","Type":"ContainerDied","Data":"5567f226f31b002032305cc353d86ae94e1489dfc0d354345a4756c71df1ee5c"} Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.511864 4632 scope.go:117] "RemoveContainer" containerID="99ba058ba824509ca07bb72d4c4f8e7b66dbe556f20ebb5fe9b5fbc3c4237556" Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.512023 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xp6q" Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.547846 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xp6q"] Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.550698 4632 scope.go:117] "RemoveContainer" containerID="45e1f9656cfbf9b22004031ee39b6063266926993e09e1b5fb34fc7a8ec2578a" Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.556164 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6xp6q"] Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.588230 4632 scope.go:117] "RemoveContainer" containerID="5e9374890eb775e4c1bb97be5e3b2ff1cac17016a5c436abe4d78eea14dc7c46" Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.642370 4632 scope.go:117] "RemoveContainer" containerID="99ba058ba824509ca07bb72d4c4f8e7b66dbe556f20ebb5fe9b5fbc3c4237556" Mar 13 12:23:09 crc kubenswrapper[4632]: E0313 12:23:09.642859 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99ba058ba824509ca07bb72d4c4f8e7b66dbe556f20ebb5fe9b5fbc3c4237556\": container with ID starting with 99ba058ba824509ca07bb72d4c4f8e7b66dbe556f20ebb5fe9b5fbc3c4237556 not found: ID does not exist" containerID="99ba058ba824509ca07bb72d4c4f8e7b66dbe556f20ebb5fe9b5fbc3c4237556" Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.642893 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99ba058ba824509ca07bb72d4c4f8e7b66dbe556f20ebb5fe9b5fbc3c4237556"} err="failed to get container status \"99ba058ba824509ca07bb72d4c4f8e7b66dbe556f20ebb5fe9b5fbc3c4237556\": rpc error: code = NotFound desc = could not find container \"99ba058ba824509ca07bb72d4c4f8e7b66dbe556f20ebb5fe9b5fbc3c4237556\": container with ID starting with 99ba058ba824509ca07bb72d4c4f8e7b66dbe556f20ebb5fe9b5fbc3c4237556 not found: ID does not exist" Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.642923 4632 scope.go:117] "RemoveContainer" containerID="45e1f9656cfbf9b22004031ee39b6063266926993e09e1b5fb34fc7a8ec2578a" Mar 13 12:23:09 crc kubenswrapper[4632]: E0313 12:23:09.643288 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45e1f9656cfbf9b22004031ee39b6063266926993e09e1b5fb34fc7a8ec2578a\": container with ID starting with 45e1f9656cfbf9b22004031ee39b6063266926993e09e1b5fb34fc7a8ec2578a not found: ID does not exist" containerID="45e1f9656cfbf9b22004031ee39b6063266926993e09e1b5fb34fc7a8ec2578a" Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.643333 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e1f9656cfbf9b22004031ee39b6063266926993e09e1b5fb34fc7a8ec2578a"} err="failed to get container status \"45e1f9656cfbf9b22004031ee39b6063266926993e09e1b5fb34fc7a8ec2578a\": rpc error: code = NotFound desc = could not find container \"45e1f9656cfbf9b22004031ee39b6063266926993e09e1b5fb34fc7a8ec2578a\": container with ID starting with 45e1f9656cfbf9b22004031ee39b6063266926993e09e1b5fb34fc7a8ec2578a not found: ID does not exist" Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.643366 4632 scope.go:117] "RemoveContainer" containerID="5e9374890eb775e4c1bb97be5e3b2ff1cac17016a5c436abe4d78eea14dc7c46" Mar 13 12:23:09 crc kubenswrapper[4632]: E0313 12:23:09.643788 4632 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5e9374890eb775e4c1bb97be5e3b2ff1cac17016a5c436abe4d78eea14dc7c46\": container with ID starting with 5e9374890eb775e4c1bb97be5e3b2ff1cac17016a5c436abe4d78eea14dc7c46 not found: ID does not exist" containerID="5e9374890eb775e4c1bb97be5e3b2ff1cac17016a5c436abe4d78eea14dc7c46" Mar 13 12:23:09 crc kubenswrapper[4632]: I0313 12:23:09.643875 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e9374890eb775e4c1bb97be5e3b2ff1cac17016a5c436abe4d78eea14dc7c46"} err="failed to get container status \"5e9374890eb775e4c1bb97be5e3b2ff1cac17016a5c436abe4d78eea14dc7c46\": rpc error: code = NotFound desc = could not find container \"5e9374890eb775e4c1bb97be5e3b2ff1cac17016a5c436abe4d78eea14dc7c46\": container with ID starting with 5e9374890eb775e4c1bb97be5e3b2ff1cac17016a5c436abe4d78eea14dc7c46 not found: ID does not exist" Mar 13 12:23:10 crc kubenswrapper[4632]: I0313 12:23:10.056552 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="006513e3-67e6-4969-82fb-37e5ac8eaf4a" path="/var/lib/kubelet/pods/006513e3-67e6-4969-82fb-37e5ac8eaf4a/volumes" Mar 13 12:23:10 crc kubenswrapper[4632]: I0313 12:23:10.461509 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:23:10 crc kubenswrapper[4632]: I0313 12:23:10.461882 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:23:10 crc kubenswrapper[4632]: I0313 12:23:10.462108 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 12:23:10 crc kubenswrapper[4632]: I0313 12:23:10.464780 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2f7d92ea8336c364393ccfd7369387047df3a4555b1b7f7be871c5ae3268440"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 12:23:10 crc kubenswrapper[4632]: I0313 12:23:10.464998 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://d2f7d92ea8336c364393ccfd7369387047df3a4555b1b7f7be871c5ae3268440" gracePeriod=600 Mar 13 12:23:11 crc kubenswrapper[4632]: I0313 12:23:11.533022 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="d2f7d92ea8336c364393ccfd7369387047df3a4555b1b7f7be871c5ae3268440" exitCode=0 Mar 13 12:23:11 crc kubenswrapper[4632]: I0313 12:23:11.533081 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" 
event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"d2f7d92ea8336c364393ccfd7369387047df3a4555b1b7f7be871c5ae3268440"} Mar 13 12:23:11 crc kubenswrapper[4632]: I0313 12:23:11.533567 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f"} Mar 13 12:23:11 crc kubenswrapper[4632]: I0313 12:23:11.533592 4632 scope.go:117] "RemoveContainer" containerID="5cc922706d30866ac208574ee6bcc0812dd5d20bcd356efd2bc6fcac169085a9" Mar 13 12:24:00 crc kubenswrapper[4632]: I0313 12:24:00.179553 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556744-fb92p"] Mar 13 12:24:00 crc kubenswrapper[4632]: E0313 12:24:00.180563 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006513e3-67e6-4969-82fb-37e5ac8eaf4a" containerName="extract-utilities" Mar 13 12:24:00 crc kubenswrapper[4632]: I0313 12:24:00.180580 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="006513e3-67e6-4969-82fb-37e5ac8eaf4a" containerName="extract-utilities" Mar 13 12:24:00 crc kubenswrapper[4632]: E0313 12:24:00.180614 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006513e3-67e6-4969-82fb-37e5ac8eaf4a" containerName="registry-server" Mar 13 12:24:00 crc kubenswrapper[4632]: I0313 12:24:00.180621 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="006513e3-67e6-4969-82fb-37e5ac8eaf4a" containerName="registry-server" Mar 13 12:24:00 crc kubenswrapper[4632]: E0313 12:24:00.180633 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006513e3-67e6-4969-82fb-37e5ac8eaf4a" containerName="extract-content" Mar 13 12:24:00 crc kubenswrapper[4632]: I0313 12:24:00.180641 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="006513e3-67e6-4969-82fb-37e5ac8eaf4a" containerName="extract-content" Mar 13 12:24:00 crc kubenswrapper[4632]: I0313 12:24:00.180860 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="006513e3-67e6-4969-82fb-37e5ac8eaf4a" containerName="registry-server" Mar 13 12:24:00 crc kubenswrapper[4632]: I0313 12:24:00.181575 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556744-fb92p" Mar 13 12:24:00 crc kubenswrapper[4632]: I0313 12:24:00.183814 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:24:00 crc kubenswrapper[4632]: I0313 12:24:00.190867 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:24:00 crc kubenswrapper[4632]: I0313 12:24:00.191225 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:24:00 crc kubenswrapper[4632]: I0313 12:24:00.199636 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556744-fb92p"] Mar 13 12:24:00 crc kubenswrapper[4632]: I0313 12:24:00.343385 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvhdx\" (UniqueName: \"kubernetes.io/projected/e1006ed9-194b-4d1b-91cf-7722ce335023-kube-api-access-jvhdx\") pod \"auto-csr-approver-29556744-fb92p\" (UID: \"e1006ed9-194b-4d1b-91cf-7722ce335023\") " pod="openshift-infra/auto-csr-approver-29556744-fb92p" Mar 13 12:24:00 crc kubenswrapper[4632]: I0313 12:24:00.445900 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvhdx\" (UniqueName: \"kubernetes.io/projected/e1006ed9-194b-4d1b-91cf-7722ce335023-kube-api-access-jvhdx\") pod \"auto-csr-approver-29556744-fb92p\" (UID: \"e1006ed9-194b-4d1b-91cf-7722ce335023\") " pod="openshift-infra/auto-csr-approver-29556744-fb92p" Mar 13 12:24:00 crc kubenswrapper[4632]: I0313 12:24:00.468804 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvhdx\" (UniqueName: \"kubernetes.io/projected/e1006ed9-194b-4d1b-91cf-7722ce335023-kube-api-access-jvhdx\") pod \"auto-csr-approver-29556744-fb92p\" (UID: \"e1006ed9-194b-4d1b-91cf-7722ce335023\") " pod="openshift-infra/auto-csr-approver-29556744-fb92p" Mar 13 12:24:00 crc kubenswrapper[4632]: I0313 12:24:00.510737 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556744-fb92p" Mar 13 12:24:01 crc kubenswrapper[4632]: I0313 12:24:01.028145 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556744-fb92p"] Mar 13 12:24:01 crc kubenswrapper[4632]: I0313 12:24:01.886382 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556744-fb92p" event={"ID":"e1006ed9-194b-4d1b-91cf-7722ce335023","Type":"ContainerStarted","Data":"6e17f362d9dcb71b1160cdd1d9f6c0be75d7ea8d28408713bea723564ffc14ee"} Mar 13 12:24:02 crc kubenswrapper[4632]: I0313 12:24:02.895807 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556744-fb92p" event={"ID":"e1006ed9-194b-4d1b-91cf-7722ce335023","Type":"ContainerStarted","Data":"8c147fe4b276fdf885236433df734b60b43a6fbbd4e1c4d2a7bec9fd5c3cc6e2"} Mar 13 12:24:02 crc kubenswrapper[4632]: I0313 12:24:02.918270 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556744-fb92p" podStartSLOduration=1.645995664 podStartE2EDuration="2.918250546s" podCreationTimestamp="2026-03-13 12:24:00 +0000 UTC" firstStartedPulling="2026-03-13 12:24:01.035024288 +0000 UTC m=+8415.057554421" lastFinishedPulling="2026-03-13 12:24:02.30727917 +0000 UTC m=+8416.329809303" observedRunningTime="2026-03-13 12:24:02.909398798 +0000 UTC m=+8416.931928931" watchObservedRunningTime="2026-03-13 12:24:02.918250546 +0000 UTC m=+8416.940780679" Mar 13 12:24:04 crc kubenswrapper[4632]: I0313 12:24:04.915513 4632 generic.go:334] "Generic (PLEG): container finished" podID="e1006ed9-194b-4d1b-91cf-7722ce335023" containerID="8c147fe4b276fdf885236433df734b60b43a6fbbd4e1c4d2a7bec9fd5c3cc6e2" exitCode=0 Mar 13 12:24:04 crc kubenswrapper[4632]: I0313 12:24:04.915574 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556744-fb92p" event={"ID":"e1006ed9-194b-4d1b-91cf-7722ce335023","Type":"ContainerDied","Data":"8c147fe4b276fdf885236433df734b60b43a6fbbd4e1c4d2a7bec9fd5c3cc6e2"} Mar 13 12:24:06 crc kubenswrapper[4632]: I0313 12:24:06.399010 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556744-fb92p" Mar 13 12:24:06 crc kubenswrapper[4632]: I0313 12:24:06.579889 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvhdx\" (UniqueName: \"kubernetes.io/projected/e1006ed9-194b-4d1b-91cf-7722ce335023-kube-api-access-jvhdx\") pod \"e1006ed9-194b-4d1b-91cf-7722ce335023\" (UID: \"e1006ed9-194b-4d1b-91cf-7722ce335023\") " Mar 13 12:24:06 crc kubenswrapper[4632]: I0313 12:24:06.585549 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1006ed9-194b-4d1b-91cf-7722ce335023-kube-api-access-jvhdx" (OuterVolumeSpecName: "kube-api-access-jvhdx") pod "e1006ed9-194b-4d1b-91cf-7722ce335023" (UID: "e1006ed9-194b-4d1b-91cf-7722ce335023"). InnerVolumeSpecName "kube-api-access-jvhdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:24:06 crc kubenswrapper[4632]: I0313 12:24:06.682567 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvhdx\" (UniqueName: \"kubernetes.io/projected/e1006ed9-194b-4d1b-91cf-7722ce335023-kube-api-access-jvhdx\") on node \"crc\" DevicePath \"\"" Mar 13 12:24:06 crc kubenswrapper[4632]: I0313 12:24:06.936760 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556744-fb92p" event={"ID":"e1006ed9-194b-4d1b-91cf-7722ce335023","Type":"ContainerDied","Data":"6e17f362d9dcb71b1160cdd1d9f6c0be75d7ea8d28408713bea723564ffc14ee"} Mar 13 12:24:06 crc kubenswrapper[4632]: I0313 12:24:06.936818 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e17f362d9dcb71b1160cdd1d9f6c0be75d7ea8d28408713bea723564ffc14ee" Mar 13 12:24:06 crc kubenswrapper[4632]: I0313 12:24:06.936823 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556744-fb92p" Mar 13 12:24:07 crc kubenswrapper[4632]: I0313 12:24:07.012582 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556738-ktc6z"] Mar 13 12:24:07 crc kubenswrapper[4632]: I0313 12:24:07.023244 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556738-ktc6z"] Mar 13 12:24:08 crc kubenswrapper[4632]: I0313 12:24:08.056488 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd947fd4-4e97-4720-98a3-d345ae5dd3fc" path="/var/lib/kubelet/pods/fd947fd4-4e97-4720-98a3-d345ae5dd3fc/volumes" Mar 13 12:24:26 crc kubenswrapper[4632]: I0313 12:24:26.603674 4632 scope.go:117] "RemoveContainer" containerID="fbfc844073b7954c305603f6ba9bca1ebae6e886287d4969b865a335340183e5" Mar 13 12:24:48 crc kubenswrapper[4632]: I0313 12:24:48.901067 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5hbsx"] Mar 13 12:24:48 crc kubenswrapper[4632]: E0313 12:24:48.901831 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1006ed9-194b-4d1b-91cf-7722ce335023" containerName="oc" Mar 13 12:24:48 crc kubenswrapper[4632]: I0313 12:24:48.901844 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1006ed9-194b-4d1b-91cf-7722ce335023" containerName="oc" Mar 13 12:24:48 crc kubenswrapper[4632]: I0313 12:24:48.903320 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1006ed9-194b-4d1b-91cf-7722ce335023" containerName="oc" Mar 13 12:24:48 crc kubenswrapper[4632]: I0313 12:24:48.905493 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5hbsx" Mar 13 12:24:48 crc kubenswrapper[4632]: I0313 12:24:48.924457 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5hbsx"] Mar 13 12:24:49 crc kubenswrapper[4632]: I0313 12:24:49.057451 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d46ba0-2f3e-4780-8645-3551c59cbd90-catalog-content\") pod \"redhat-operators-5hbsx\" (UID: \"28d46ba0-2f3e-4780-8645-3551c59cbd90\") " pod="openshift-marketplace/redhat-operators-5hbsx" Mar 13 12:24:49 crc kubenswrapper[4632]: I0313 12:24:49.057514 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d46ba0-2f3e-4780-8645-3551c59cbd90-utilities\") pod \"redhat-operators-5hbsx\" (UID: \"28d46ba0-2f3e-4780-8645-3551c59cbd90\") " pod="openshift-marketplace/redhat-operators-5hbsx" Mar 13 12:24:49 crc kubenswrapper[4632]: I0313 12:24:49.057559 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf7mj\" (UniqueName: \"kubernetes.io/projected/28d46ba0-2f3e-4780-8645-3551c59cbd90-kube-api-access-wf7mj\") pod \"redhat-operators-5hbsx\" (UID: \"28d46ba0-2f3e-4780-8645-3551c59cbd90\") " pod="openshift-marketplace/redhat-operators-5hbsx" Mar 13 12:24:49 crc kubenswrapper[4632]: I0313 12:24:49.159904 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d46ba0-2f3e-4780-8645-3551c59cbd90-catalog-content\") pod \"redhat-operators-5hbsx\" (UID: \"28d46ba0-2f3e-4780-8645-3551c59cbd90\") " pod="openshift-marketplace/redhat-operators-5hbsx" Mar 13 12:24:49 crc kubenswrapper[4632]: I0313 12:24:49.159985 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d46ba0-2f3e-4780-8645-3551c59cbd90-utilities\") pod \"redhat-operators-5hbsx\" (UID: \"28d46ba0-2f3e-4780-8645-3551c59cbd90\") " pod="openshift-marketplace/redhat-operators-5hbsx" Mar 13 12:24:49 crc kubenswrapper[4632]: I0313 12:24:49.160017 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf7mj\" (UniqueName: \"kubernetes.io/projected/28d46ba0-2f3e-4780-8645-3551c59cbd90-kube-api-access-wf7mj\") pod \"redhat-operators-5hbsx\" (UID: \"28d46ba0-2f3e-4780-8645-3551c59cbd90\") " pod="openshift-marketplace/redhat-operators-5hbsx" Mar 13 12:24:49 crc kubenswrapper[4632]: I0313 12:24:49.160104 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d46ba0-2f3e-4780-8645-3551c59cbd90-catalog-content\") pod \"redhat-operators-5hbsx\" (UID: \"28d46ba0-2f3e-4780-8645-3551c59cbd90\") " pod="openshift-marketplace/redhat-operators-5hbsx" Mar 13 12:24:49 crc kubenswrapper[4632]: I0313 12:24:49.160275 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d46ba0-2f3e-4780-8645-3551c59cbd90-utilities\") pod \"redhat-operators-5hbsx\" (UID: \"28d46ba0-2f3e-4780-8645-3551c59cbd90\") " pod="openshift-marketplace/redhat-operators-5hbsx" Mar 13 12:24:49 crc kubenswrapper[4632]: I0313 12:24:49.180028 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wf7mj\" (UniqueName: \"kubernetes.io/projected/28d46ba0-2f3e-4780-8645-3551c59cbd90-kube-api-access-wf7mj\") pod \"redhat-operators-5hbsx\" (UID: \"28d46ba0-2f3e-4780-8645-3551c59cbd90\") " pod="openshift-marketplace/redhat-operators-5hbsx" Mar 13 12:24:49 crc kubenswrapper[4632]: I0313 12:24:49.230553 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5hbsx" Mar 13 12:24:49 crc kubenswrapper[4632]: I0313 12:24:49.856099 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5hbsx"] Mar 13 12:24:50 crc kubenswrapper[4632]: I0313 12:24:50.343454 4632 generic.go:334] "Generic (PLEG): container finished" podID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerID="7be8436687bd0d966081b362829fe60e928b205102fb92b02070bb54e66a10a9" exitCode=0 Mar 13 12:24:50 crc kubenswrapper[4632]: I0313 12:24:50.343516 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hbsx" event={"ID":"28d46ba0-2f3e-4780-8645-3551c59cbd90","Type":"ContainerDied","Data":"7be8436687bd0d966081b362829fe60e928b205102fb92b02070bb54e66a10a9"} Mar 13 12:24:50 crc kubenswrapper[4632]: I0313 12:24:50.343750 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hbsx" event={"ID":"28d46ba0-2f3e-4780-8645-3551c59cbd90","Type":"ContainerStarted","Data":"6694309764a236c0cf4af5875ea9754ac8fe7e1c78280a3c51d6648b38d11efe"} Mar 13 12:24:52 crc kubenswrapper[4632]: I0313 12:24:52.372329 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hbsx" event={"ID":"28d46ba0-2f3e-4780-8645-3551c59cbd90","Type":"ContainerStarted","Data":"f2b0a8aae3cf593228d6090f4f4dcba61eee1aa4e51a3ec5b41ca13e73160ca4"} Mar 13 12:24:57 crc kubenswrapper[4632]: I0313 12:24:57.422172 4632 generic.go:334] "Generic (PLEG): container finished" podID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerID="f2b0a8aae3cf593228d6090f4f4dcba61eee1aa4e51a3ec5b41ca13e73160ca4" exitCode=0 Mar 13 12:24:57 crc kubenswrapper[4632]: I0313 12:24:57.422616 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hbsx" event={"ID":"28d46ba0-2f3e-4780-8645-3551c59cbd90","Type":"ContainerDied","Data":"f2b0a8aae3cf593228d6090f4f4dcba61eee1aa4e51a3ec5b41ca13e73160ca4"} Mar 13 12:24:58 crc kubenswrapper[4632]: I0313 12:24:58.440918 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hbsx" event={"ID":"28d46ba0-2f3e-4780-8645-3551c59cbd90","Type":"ContainerStarted","Data":"22b880bd9d36239a72deab6ba7c23eda91f4380144527c59c20491394e33da83"} Mar 13 12:24:58 crc kubenswrapper[4632]: I0313 12:24:58.473662 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5hbsx" podStartSLOduration=2.9906351239999998 podStartE2EDuration="10.473634307s" podCreationTimestamp="2026-03-13 12:24:48 +0000 UTC" firstStartedPulling="2026-03-13 12:24:50.346375482 +0000 UTC m=+8464.368905616" lastFinishedPulling="2026-03-13 12:24:57.829374666 +0000 UTC m=+8471.851904799" observedRunningTime="2026-03-13 12:24:58.467667891 +0000 UTC m=+8472.490198034" watchObservedRunningTime="2026-03-13 12:24:58.473634307 +0000 UTC m=+8472.496164440" Mar 13 12:24:59 crc kubenswrapper[4632]: I0313 12:24:59.230826 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-5hbsx" Mar 13 12:24:59 crc kubenswrapper[4632]: I0313 12:24:59.230890 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5hbsx" Mar 13 12:25:00 crc kubenswrapper[4632]: I0313 12:25:00.278956 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5hbsx" podUID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerName="registry-server" probeResult="failure" output=< Mar 13 12:25:00 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:25:00 crc kubenswrapper[4632]: > Mar 13 12:25:10 crc kubenswrapper[4632]: I0313 12:25:10.297108 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5hbsx" podUID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerName="registry-server" probeResult="failure" output=< Mar 13 12:25:10 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:25:10 crc kubenswrapper[4632]: > Mar 13 12:25:10 crc kubenswrapper[4632]: I0313 12:25:10.461272 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:25:10 crc kubenswrapper[4632]: I0313 12:25:10.461426 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:25:20 crc kubenswrapper[4632]: I0313 12:25:20.279960 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5hbsx" podUID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerName="registry-server" probeResult="failure" output=< Mar 13 12:25:20 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:25:20 crc kubenswrapper[4632]: > Mar 13 12:25:30 crc kubenswrapper[4632]: I0313 12:25:30.307710 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5hbsx" podUID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerName="registry-server" probeResult="failure" output=< Mar 13 12:25:30 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:25:30 crc kubenswrapper[4632]: > Mar 13 12:25:39 crc kubenswrapper[4632]: I0313 12:25:39.292113 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5hbsx" Mar 13 12:25:39 crc kubenswrapper[4632]: I0313 12:25:39.355158 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5hbsx" Mar 13 12:25:39 crc kubenswrapper[4632]: I0313 12:25:39.537565 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5hbsx"] Mar 13 12:25:40 crc kubenswrapper[4632]: I0313 12:25:40.461509 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 
Mar 13 12:25:20 crc kubenswrapper[4632]: I0313 12:25:20.279960 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5hbsx" podUID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerName="registry-server" probeResult="failure" output=<
Mar 13 12:25:20 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 12:25:20 crc kubenswrapper[4632]: >
Mar 13 12:25:30 crc kubenswrapper[4632]: I0313 12:25:30.307710 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5hbsx" podUID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerName="registry-server" probeResult="failure" output=<
Mar 13 12:25:30 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 12:25:30 crc kubenswrapper[4632]: >
Mar 13 12:25:39 crc kubenswrapper[4632]: I0313 12:25:39.292113 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5hbsx"
Mar 13 12:25:39 crc kubenswrapper[4632]: I0313 12:25:39.355158 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5hbsx"
Mar 13 12:25:39 crc kubenswrapper[4632]: I0313 12:25:39.537565 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5hbsx"]
Mar 13 12:25:40 crc kubenswrapper[4632]: I0313 12:25:40.461509 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 12:25:40 crc kubenswrapper[4632]: I0313 12:25:40.461807 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 12:25:40 crc kubenswrapper[4632]: I0313 12:25:40.865303 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5hbsx" podUID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerName="registry-server" containerID="cri-o://22b880bd9d36239a72deab6ba7c23eda91f4380144527c59c20491394e33da83" gracePeriod=2
Mar 13 12:25:41 crc kubenswrapper[4632]: I0313 12:25:41.840214 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5hbsx"
Mar 13 12:25:41 crc kubenswrapper[4632]: I0313 12:25:41.878961 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5hbsx"
Mar 13 12:25:41 crc kubenswrapper[4632]: I0313 12:25:41.879099 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hbsx" event={"ID":"28d46ba0-2f3e-4780-8645-3551c59cbd90","Type":"ContainerDied","Data":"22b880bd9d36239a72deab6ba7c23eda91f4380144527c59c20491394e33da83"}
Mar 13 12:25:41 crc kubenswrapper[4632]: I0313 12:25:41.879694 4632 generic.go:334] "Generic (PLEG): container finished" podID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerID="22b880bd9d36239a72deab6ba7c23eda91f4380144527c59c20491394e33da83" exitCode=0
Mar 13 12:25:41 crc kubenswrapper[4632]: I0313 12:25:41.879732 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hbsx" event={"ID":"28d46ba0-2f3e-4780-8645-3551c59cbd90","Type":"ContainerDied","Data":"6694309764a236c0cf4af5875ea9754ac8fe7e1c78280a3c51d6648b38d11efe"}
Mar 13 12:25:41 crc kubenswrapper[4632]: I0313 12:25:41.882128 4632 scope.go:117] "RemoveContainer" containerID="22b880bd9d36239a72deab6ba7c23eda91f4380144527c59c20491394e33da83"
Mar 13 12:25:41 crc kubenswrapper[4632]: I0313 12:25:41.927785 4632 scope.go:117] "RemoveContainer" containerID="f2b0a8aae3cf593228d6090f4f4dcba61eee1aa4e51a3ec5b41ca13e73160ca4"
Mar 13 12:25:41 crc kubenswrapper[4632]: I0313 12:25:41.981313 4632 scope.go:117] "RemoveContainer" containerID="7be8436687bd0d966081b362829fe60e928b205102fb92b02070bb54e66a10a9"
Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.026464 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf7mj\" (UniqueName: \"kubernetes.io/projected/28d46ba0-2f3e-4780-8645-3551c59cbd90-kube-api-access-wf7mj\") pod \"28d46ba0-2f3e-4780-8645-3551c59cbd90\" (UID: \"28d46ba0-2f3e-4780-8645-3551c59cbd90\") "
Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.026630 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d46ba0-2f3e-4780-8645-3551c59cbd90-catalog-content\") pod \"28d46ba0-2f3e-4780-8645-3551c59cbd90\" (UID: \"28d46ba0-2f3e-4780-8645-3551c59cbd90\") "
Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.026667 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d46ba0-2f3e-4780-8645-3551c59cbd90-utilities\") pod \"28d46ba0-2f3e-4780-8645-3551c59cbd90\" (UID: \"28d46ba0-2f3e-4780-8645-3551c59cbd90\") "
Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.032200 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28d46ba0-2f3e-4780-8645-3551c59cbd90-utilities" (OuterVolumeSpecName: "utilities") pod "28d46ba0-2f3e-4780-8645-3551c59cbd90" (UID: "28d46ba0-2f3e-4780-8645-3551c59cbd90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.037489 4632 scope.go:117] "RemoveContainer" containerID="22b880bd9d36239a72deab6ba7c23eda91f4380144527c59c20491394e33da83"
Mar 13 12:25:42 crc kubenswrapper[4632]: E0313 12:25:42.046710 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22b880bd9d36239a72deab6ba7c23eda91f4380144527c59c20491394e33da83\": container with ID starting with 22b880bd9d36239a72deab6ba7c23eda91f4380144527c59c20491394e33da83 not found: ID does not exist" containerID="22b880bd9d36239a72deab6ba7c23eda91f4380144527c59c20491394e33da83"
Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.048248 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22b880bd9d36239a72deab6ba7c23eda91f4380144527c59c20491394e33da83"} err="failed to get container status \"22b880bd9d36239a72deab6ba7c23eda91f4380144527c59c20491394e33da83\": rpc error: code = NotFound desc = could not find container \"22b880bd9d36239a72deab6ba7c23eda91f4380144527c59c20491394e33da83\": container with ID starting with 22b880bd9d36239a72deab6ba7c23eda91f4380144527c59c20491394e33da83 not found: ID does not exist"
Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.048288 4632 scope.go:117] "RemoveContainer" containerID="f2b0a8aae3cf593228d6090f4f4dcba61eee1aa4e51a3ec5b41ca13e73160ca4"
Mar 13 12:25:42 crc kubenswrapper[4632]: E0313 12:25:42.048883 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2b0a8aae3cf593228d6090f4f4dcba61eee1aa4e51a3ec5b41ca13e73160ca4\": container with ID starting with f2b0a8aae3cf593228d6090f4f4dcba61eee1aa4e51a3ec5b41ca13e73160ca4 not found: ID does not exist" containerID="f2b0a8aae3cf593228d6090f4f4dcba61eee1aa4e51a3ec5b41ca13e73160ca4"
Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.048922 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2b0a8aae3cf593228d6090f4f4dcba61eee1aa4e51a3ec5b41ca13e73160ca4"} err="failed to get container status \"f2b0a8aae3cf593228d6090f4f4dcba61eee1aa4e51a3ec5b41ca13e73160ca4\": rpc error: code = NotFound desc = could not find container \"f2b0a8aae3cf593228d6090f4f4dcba61eee1aa4e51a3ec5b41ca13e73160ca4\": container with ID starting with f2b0a8aae3cf593228d6090f4f4dcba61eee1aa4e51a3ec5b41ca13e73160ca4 not found: ID does not exist"
Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.049006 4632 scope.go:117] "RemoveContainer" containerID="7be8436687bd0d966081b362829fe60e928b205102fb92b02070bb54e66a10a9"
Mar 13 12:25:42 crc kubenswrapper[4632]: E0313 12:25:42.049525 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7be8436687bd0d966081b362829fe60e928b205102fb92b02070bb54e66a10a9\": container with ID starting with 7be8436687bd0d966081b362829fe60e928b205102fb92b02070bb54e66a10a9 not found: ID does not exist" containerID="7be8436687bd0d966081b362829fe60e928b205102fb92b02070bb54e66a10a9"
Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.049555 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7be8436687bd0d966081b362829fe60e928b205102fb92b02070bb54e66a10a9"} err="failed to get container status \"7be8436687bd0d966081b362829fe60e928b205102fb92b02070bb54e66a10a9\": rpc error: code = NotFound desc = could not find container \"7be8436687bd0d966081b362829fe60e928b205102fb92b02070bb54e66a10a9\": container with ID starting with 7be8436687bd0d966081b362829fe60e928b205102fb92b02070bb54e66a10a9 not found: ID does not exist"
Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.055967 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d46ba0-2f3e-4780-8645-3551c59cbd90-kube-api-access-wf7mj" (OuterVolumeSpecName: "kube-api-access-wf7mj") pod "28d46ba0-2f3e-4780-8645-3551c59cbd90" (UID: "28d46ba0-2f3e-4780-8645-3551c59cbd90"). InnerVolumeSpecName "kube-api-access-wf7mj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.129696 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf7mj\" (UniqueName: \"kubernetes.io/projected/28d46ba0-2f3e-4780-8645-3551c59cbd90-kube-api-access-wf7mj\") on node \"crc\" DevicePath \"\""
Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.129735 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d46ba0-2f3e-4780-8645-3551c59cbd90-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.274778 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28d46ba0-2f3e-4780-8645-3551c59cbd90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28d46ba0-2f3e-4780-8645-3551c59cbd90" (UID: "28d46ba0-2f3e-4780-8645-3551c59cbd90"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.334130 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d46ba0-2f3e-4780-8645-3551c59cbd90-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.526895 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5hbsx"] Mar 13 12:25:42 crc kubenswrapper[4632]: I0313 12:25:42.538251 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5hbsx"] Mar 13 12:25:44 crc kubenswrapper[4632]: I0313 12:25:44.056781 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28d46ba0-2f3e-4780-8645-3551c59cbd90" path="/var/lib/kubelet/pods/28d46ba0-2f3e-4780-8645-3551c59cbd90/volumes" Mar 13 12:26:00 crc kubenswrapper[4632]: I0313 12:26:00.169221 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556746-jtv7q"] Mar 13 12:26:00 crc kubenswrapper[4632]: E0313 12:26:00.172754 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerName="registry-server" Mar 13 12:26:00 crc kubenswrapper[4632]: I0313 12:26:00.172810 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerName="registry-server" Mar 13 12:26:00 crc kubenswrapper[4632]: E0313 12:26:00.172846 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerName="extract-content" Mar 13 12:26:00 crc kubenswrapper[4632]: I0313 12:26:00.172859 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerName="extract-content" Mar 13 12:26:00 crc kubenswrapper[4632]: E0313 12:26:00.172920 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerName="extract-utilities" Mar 13 12:26:00 crc kubenswrapper[4632]: I0313 12:26:00.172936 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerName="extract-utilities" Mar 13 12:26:00 crc kubenswrapper[4632]: I0313 12:26:00.174248 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d46ba0-2f3e-4780-8645-3551c59cbd90" containerName="registry-server" Mar 13 12:26:00 crc kubenswrapper[4632]: I0313 12:26:00.181348 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556746-jtv7q" Mar 13 12:26:00 crc kubenswrapper[4632]: I0313 12:26:00.190867 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556746-jtv7q"] Mar 13 12:26:00 crc kubenswrapper[4632]: I0313 12:26:00.200769 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:26:00 crc kubenswrapper[4632]: I0313 12:26:00.200796 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:26:00 crc kubenswrapper[4632]: I0313 12:26:00.200775 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:26:00 crc kubenswrapper[4632]: I0313 12:26:00.301082 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2gs2\" (UniqueName: \"kubernetes.io/projected/d88882b5-a11f-4606-8e0d-59471c7feccb-kube-api-access-p2gs2\") pod \"auto-csr-approver-29556746-jtv7q\" (UID: \"d88882b5-a11f-4606-8e0d-59471c7feccb\") " pod="openshift-infra/auto-csr-approver-29556746-jtv7q" Mar 13 12:26:00 crc kubenswrapper[4632]: I0313 12:26:00.402790 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2gs2\" (UniqueName: \"kubernetes.io/projected/d88882b5-a11f-4606-8e0d-59471c7feccb-kube-api-access-p2gs2\") pod \"auto-csr-approver-29556746-jtv7q\" (UID: \"d88882b5-a11f-4606-8e0d-59471c7feccb\") " pod="openshift-infra/auto-csr-approver-29556746-jtv7q" Mar 13 12:26:00 crc kubenswrapper[4632]: I0313 12:26:00.426591 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2gs2\" (UniqueName: \"kubernetes.io/projected/d88882b5-a11f-4606-8e0d-59471c7feccb-kube-api-access-p2gs2\") pod \"auto-csr-approver-29556746-jtv7q\" (UID: \"d88882b5-a11f-4606-8e0d-59471c7feccb\") " pod="openshift-infra/auto-csr-approver-29556746-jtv7q" Mar 13 12:26:00 crc kubenswrapper[4632]: I0313 12:26:00.512445 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556746-jtv7q" Mar 13 12:26:01 crc kubenswrapper[4632]: W0313 12:26:01.021747 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd88882b5_a11f_4606_8e0d_59471c7feccb.slice/crio-8624fa59086784980c1fd5b3b7f5d596dd40965d2322849d5725dab0f4736adc WatchSource:0}: Error finding container 8624fa59086784980c1fd5b3b7f5d596dd40965d2322849d5725dab0f4736adc: Status 404 returned error can't find the container with id 8624fa59086784980c1fd5b3b7f5d596dd40965d2322849d5725dab0f4736adc Mar 13 12:26:01 crc kubenswrapper[4632]: I0313 12:26:01.029624 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 12:26:01 crc kubenswrapper[4632]: I0313 12:26:01.029827 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556746-jtv7q"] Mar 13 12:26:01 crc kubenswrapper[4632]: I0313 12:26:01.103399 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556746-jtv7q" event={"ID":"d88882b5-a11f-4606-8e0d-59471c7feccb","Type":"ContainerStarted","Data":"8624fa59086784980c1fd5b3b7f5d596dd40965d2322849d5725dab0f4736adc"} Mar 13 12:26:03 crc kubenswrapper[4632]: I0313 12:26:03.126234 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556746-jtv7q" event={"ID":"d88882b5-a11f-4606-8e0d-59471c7feccb","Type":"ContainerStarted","Data":"35ea86f2c8d1955a868a4d03ab725c4feb194878cc720326b8eb0c50ed5ce3c5"} Mar 13 12:26:03 crc kubenswrapper[4632]: I0313 12:26:03.148159 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556746-jtv7q" podStartSLOduration=2.244168615 podStartE2EDuration="3.148140896s" podCreationTimestamp="2026-03-13 12:26:00 +0000 UTC" firstStartedPulling="2026-03-13 12:26:01.024620344 +0000 UTC m=+8535.047150477" lastFinishedPulling="2026-03-13 12:26:01.928592625 +0000 UTC m=+8535.951122758" observedRunningTime="2026-03-13 12:26:03.139024862 +0000 UTC m=+8537.161555025" watchObservedRunningTime="2026-03-13 12:26:03.148140896 +0000 UTC m=+8537.170671019" Mar 13 12:26:04 crc kubenswrapper[4632]: I0313 12:26:04.137154 4632 generic.go:334] "Generic (PLEG): container finished" podID="d88882b5-a11f-4606-8e0d-59471c7feccb" containerID="35ea86f2c8d1955a868a4d03ab725c4feb194878cc720326b8eb0c50ed5ce3c5" exitCode=0 Mar 13 12:26:04 crc kubenswrapper[4632]: I0313 12:26:04.137231 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556746-jtv7q" event={"ID":"d88882b5-a11f-4606-8e0d-59471c7feccb","Type":"ContainerDied","Data":"35ea86f2c8d1955a868a4d03ab725c4feb194878cc720326b8eb0c50ed5ce3c5"} Mar 13 12:26:05 crc kubenswrapper[4632]: I0313 12:26:05.541304 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556746-jtv7q" Mar 13 12:26:05 crc kubenswrapper[4632]: I0313 12:26:05.620142 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2gs2\" (UniqueName: \"kubernetes.io/projected/d88882b5-a11f-4606-8e0d-59471c7feccb-kube-api-access-p2gs2\") pod \"d88882b5-a11f-4606-8e0d-59471c7feccb\" (UID: \"d88882b5-a11f-4606-8e0d-59471c7feccb\") " Mar 13 12:26:05 crc kubenswrapper[4632]: I0313 12:26:05.626362 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d88882b5-a11f-4606-8e0d-59471c7feccb-kube-api-access-p2gs2" (OuterVolumeSpecName: "kube-api-access-p2gs2") pod "d88882b5-a11f-4606-8e0d-59471c7feccb" (UID: "d88882b5-a11f-4606-8e0d-59471c7feccb"). InnerVolumeSpecName "kube-api-access-p2gs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:26:05 crc kubenswrapper[4632]: I0313 12:26:05.723044 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2gs2\" (UniqueName: \"kubernetes.io/projected/d88882b5-a11f-4606-8e0d-59471c7feccb-kube-api-access-p2gs2\") on node \"crc\" DevicePath \"\"" Mar 13 12:26:06 crc kubenswrapper[4632]: I0313 12:26:06.158083 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556746-jtv7q" event={"ID":"d88882b5-a11f-4606-8e0d-59471c7feccb","Type":"ContainerDied","Data":"8624fa59086784980c1fd5b3b7f5d596dd40965d2322849d5725dab0f4736adc"} Mar 13 12:26:06 crc kubenswrapper[4632]: I0313 12:26:06.158131 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556746-jtv7q" Mar 13 12:26:06 crc kubenswrapper[4632]: I0313 12:26:06.158163 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8624fa59086784980c1fd5b3b7f5d596dd40965d2322849d5725dab0f4736adc" Mar 13 12:26:06 crc kubenswrapper[4632]: I0313 12:26:06.242382 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556740-fmh5b"] Mar 13 12:26:06 crc kubenswrapper[4632]: I0313 12:26:06.254193 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556740-fmh5b"] Mar 13 12:26:08 crc kubenswrapper[4632]: I0313 12:26:08.061703 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c" path="/var/lib/kubelet/pods/0f86aa5e-9cfc-458f-ae11-71e5e4dcfe9c/volumes" Mar 13 12:26:10 crc kubenswrapper[4632]: I0313 12:26:10.461581 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:26:10 crc kubenswrapper[4632]: I0313 12:26:10.461687 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:26:10 crc kubenswrapper[4632]: I0313 12:26:10.461753 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 12:26:10 crc kubenswrapper[4632]: I0313 12:26:10.471698 4632 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 12:26:10 crc kubenswrapper[4632]: I0313 12:26:10.471866 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" gracePeriod=600 Mar 13 12:26:10 crc kubenswrapper[4632]: E0313 12:26:10.661535 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:26:11 crc kubenswrapper[4632]: I0313 12:26:11.217860 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" exitCode=0 Mar 13 12:26:11 crc kubenswrapper[4632]: I0313 12:26:11.217907 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f"} Mar 13 12:26:11 crc kubenswrapper[4632]: I0313 12:26:11.217974 4632 scope.go:117] "RemoveContainer" containerID="d2f7d92ea8336c364393ccfd7369387047df3a4555b1b7f7be871c5ae3268440" Mar 13 12:26:11 crc kubenswrapper[4632]: I0313 12:26:11.218601 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:26:11 crc kubenswrapper[4632]: E0313 12:26:11.218953 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:26:25 crc kubenswrapper[4632]: I0313 12:26:25.044922 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:26:25 crc kubenswrapper[4632]: E0313 12:26:25.045719 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:26:26 crc kubenswrapper[4632]: I0313 12:26:26.769682 4632 scope.go:117] "RemoveContainer" containerID="e0e701c935a2c4084fd4e093f0c21450f3afd1589228584f67fcd3cbe4d41395" Mar 13 
12:26:37 crc kubenswrapper[4632]: I0313 12:26:37.044447 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:26:37 crc kubenswrapper[4632]: E0313 12:26:37.045489 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:26:50 crc kubenswrapper[4632]: I0313 12:26:50.045411 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:26:50 crc kubenswrapper[4632]: E0313 12:26:50.046156 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:27:02 crc kubenswrapper[4632]: I0313 12:27:02.044222 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:27:02 crc kubenswrapper[4632]: E0313 12:27:02.045134 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:27:13 crc kubenswrapper[4632]: I0313 12:27:13.045480 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:27:13 crc kubenswrapper[4632]: E0313 12:27:13.046411 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:27:25 crc kubenswrapper[4632]: I0313 12:27:25.044218 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:27:25 crc kubenswrapper[4632]: E0313 12:27:25.044969 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:27:37 crc kubenswrapper[4632]: I0313 12:27:37.045055 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:27:37 crc 
kubenswrapper[4632]: E0313 12:27:37.046382 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:27:51 crc kubenswrapper[4632]: I0313 12:27:51.044316 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:27:51 crc kubenswrapper[4632]: E0313 12:27:51.045257 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:28:00 crc kubenswrapper[4632]: I0313 12:28:00.151580 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556748-tr74t"] Mar 13 12:28:00 crc kubenswrapper[4632]: E0313 12:28:00.152465 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88882b5-a11f-4606-8e0d-59471c7feccb" containerName="oc" Mar 13 12:28:00 crc kubenswrapper[4632]: I0313 12:28:00.152479 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88882b5-a11f-4606-8e0d-59471c7feccb" containerName="oc" Mar 13 12:28:00 crc kubenswrapper[4632]: I0313 12:28:00.152755 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88882b5-a11f-4606-8e0d-59471c7feccb" containerName="oc" Mar 13 12:28:00 crc kubenswrapper[4632]: I0313 12:28:00.153373 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556748-tr74t" Mar 13 12:28:00 crc kubenswrapper[4632]: I0313 12:28:00.157610 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:28:00 crc kubenswrapper[4632]: I0313 12:28:00.157882 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:28:00 crc kubenswrapper[4632]: I0313 12:28:00.158486 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:28:00 crc kubenswrapper[4632]: I0313 12:28:00.163234 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556748-tr74t"] Mar 13 12:28:00 crc kubenswrapper[4632]: I0313 12:28:00.300133 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjbpg\" (UniqueName: \"kubernetes.io/projected/ba59e3a2-3d83-4f9d-8633-788ba1bf518c-kube-api-access-mjbpg\") pod \"auto-csr-approver-29556748-tr74t\" (UID: \"ba59e3a2-3d83-4f9d-8633-788ba1bf518c\") " pod="openshift-infra/auto-csr-approver-29556748-tr74t" Mar 13 12:28:00 crc kubenswrapper[4632]: I0313 12:28:00.401753 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjbpg\" (UniqueName: \"kubernetes.io/projected/ba59e3a2-3d83-4f9d-8633-788ba1bf518c-kube-api-access-mjbpg\") pod \"auto-csr-approver-29556748-tr74t\" (UID: \"ba59e3a2-3d83-4f9d-8633-788ba1bf518c\") " pod="openshift-infra/auto-csr-approver-29556748-tr74t" Mar 13 12:28:00 crc kubenswrapper[4632]: I0313 12:28:00.419432 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjbpg\" (UniqueName: \"kubernetes.io/projected/ba59e3a2-3d83-4f9d-8633-788ba1bf518c-kube-api-access-mjbpg\") pod \"auto-csr-approver-29556748-tr74t\" (UID: \"ba59e3a2-3d83-4f9d-8633-788ba1bf518c\") " pod="openshift-infra/auto-csr-approver-29556748-tr74t" Mar 13 12:28:00 crc kubenswrapper[4632]: I0313 12:28:00.481315 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556748-tr74t" Mar 13 12:28:01 crc kubenswrapper[4632]: I0313 12:28:01.021175 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556748-tr74t"] Mar 13 12:28:01 crc kubenswrapper[4632]: I0313 12:28:01.289819 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556748-tr74t" event={"ID":"ba59e3a2-3d83-4f9d-8633-788ba1bf518c","Type":"ContainerStarted","Data":"156d224a7e86839fcd1fe5b72dad72deacf75274b9199281fc604f2724238ed2"} Mar 13 12:28:02 crc kubenswrapper[4632]: I0313 12:28:02.300019 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556748-tr74t" event={"ID":"ba59e3a2-3d83-4f9d-8633-788ba1bf518c","Type":"ContainerStarted","Data":"d4da70cb5943a7b88f9744961515085bb09badfa367e58e3aee43668a0864bc3"} Mar 13 12:28:02 crc kubenswrapper[4632]: I0313 12:28:02.316574 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556748-tr74t" podStartSLOduration=1.488017138 podStartE2EDuration="2.316551869s" podCreationTimestamp="2026-03-13 12:28:00 +0000 UTC" firstStartedPulling="2026-03-13 12:28:01.000335578 +0000 UTC m=+8655.022865721" lastFinishedPulling="2026-03-13 12:28:01.828870319 +0000 UTC m=+8655.851400452" observedRunningTime="2026-03-13 12:28:02.313801092 +0000 UTC m=+8656.336331225" watchObservedRunningTime="2026-03-13 12:28:02.316551869 +0000 UTC m=+8656.339082022" Mar 13 12:28:03 crc kubenswrapper[4632]: I0313 12:28:03.321596 4632 generic.go:334] "Generic (PLEG): container finished" podID="ba59e3a2-3d83-4f9d-8633-788ba1bf518c" containerID="d4da70cb5943a7b88f9744961515085bb09badfa367e58e3aee43668a0864bc3" exitCode=0 Mar 13 12:28:03 crc kubenswrapper[4632]: I0313 12:28:03.321911 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556748-tr74t" event={"ID":"ba59e3a2-3d83-4f9d-8633-788ba1bf518c","Type":"ContainerDied","Data":"d4da70cb5943a7b88f9744961515085bb09badfa367e58e3aee43668a0864bc3"} Mar 13 12:28:04 crc kubenswrapper[4632]: I0313 12:28:04.713184 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556748-tr74t" Mar 13 12:28:04 crc kubenswrapper[4632]: I0313 12:28:04.907147 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjbpg\" (UniqueName: \"kubernetes.io/projected/ba59e3a2-3d83-4f9d-8633-788ba1bf518c-kube-api-access-mjbpg\") pod \"ba59e3a2-3d83-4f9d-8633-788ba1bf518c\" (UID: \"ba59e3a2-3d83-4f9d-8633-788ba1bf518c\") " Mar 13 12:28:04 crc kubenswrapper[4632]: I0313 12:28:04.917591 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba59e3a2-3d83-4f9d-8633-788ba1bf518c-kube-api-access-mjbpg" (OuterVolumeSpecName: "kube-api-access-mjbpg") pod "ba59e3a2-3d83-4f9d-8633-788ba1bf518c" (UID: "ba59e3a2-3d83-4f9d-8633-788ba1bf518c"). InnerVolumeSpecName "kube-api-access-mjbpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:28:05 crc kubenswrapper[4632]: I0313 12:28:05.010022 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjbpg\" (UniqueName: \"kubernetes.io/projected/ba59e3a2-3d83-4f9d-8633-788ba1bf518c-kube-api-access-mjbpg\") on node \"crc\" DevicePath \"\"" Mar 13 12:28:05 crc kubenswrapper[4632]: I0313 12:28:05.044587 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:28:05 crc kubenswrapper[4632]: E0313 12:28:05.045102 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:28:05 crc kubenswrapper[4632]: I0313 12:28:05.343969 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556748-tr74t" event={"ID":"ba59e3a2-3d83-4f9d-8633-788ba1bf518c","Type":"ContainerDied","Data":"156d224a7e86839fcd1fe5b72dad72deacf75274b9199281fc604f2724238ed2"} Mar 13 12:28:05 crc kubenswrapper[4632]: I0313 12:28:05.344015 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="156d224a7e86839fcd1fe5b72dad72deacf75274b9199281fc604f2724238ed2" Mar 13 12:28:05 crc kubenswrapper[4632]: I0313 12:28:05.344081 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556748-tr74t" Mar 13 12:28:05 crc kubenswrapper[4632]: I0313 12:28:05.400138 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556742-2lkrj"] Mar 13 12:28:05 crc kubenswrapper[4632]: I0313 12:28:05.407806 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556742-2lkrj"] Mar 13 12:28:06 crc kubenswrapper[4632]: I0313 12:28:06.059967 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5121453-468a-432e-b110-fd0cd60ed92b" path="/var/lib/kubelet/pods/a5121453-468a-432e-b110-fd0cd60ed92b/volumes" Mar 13 12:28:20 crc kubenswrapper[4632]: I0313 12:28:20.044169 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:28:20 crc kubenswrapper[4632]: E0313 12:28:20.044841 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:28:26 crc kubenswrapper[4632]: I0313 12:28:26.919972 4632 scope.go:117] "RemoveContainer" containerID="77b11f376c487493e748aed75424d32e4d98e9395efe94071abc3a7b13ebc06d" Mar 13 12:28:34 crc kubenswrapper[4632]: I0313 12:28:34.044927 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:28:34 crc kubenswrapper[4632]: E0313 12:28:34.045630 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:28:47 crc kubenswrapper[4632]: I0313 12:28:47.044461 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:28:47 crc kubenswrapper[4632]: E0313 12:28:47.046129 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:29:00 crc kubenswrapper[4632]: I0313 12:29:00.044595 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:29:00 crc kubenswrapper[4632]: E0313 12:29:00.045512 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:29:12 crc kubenswrapper[4632]: I0313 12:29:12.045559 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:29:12 crc kubenswrapper[4632]: E0313 12:29:12.046827 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:29:27 crc kubenswrapper[4632]: I0313 12:29:27.044434 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:29:27 crc kubenswrapper[4632]: E0313 12:29:27.045418 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:29:38 crc kubenswrapper[4632]: I0313 12:29:38.059107 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:29:38 crc kubenswrapper[4632]: E0313 12:29:38.059985 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:29:51 crc kubenswrapper[4632]: I0313 12:29:51.045156 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:29:51 crc kubenswrapper[4632]: E0313 12:29:51.046091 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.163636 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556750-8fj49"] Mar 13 12:30:00 crc kubenswrapper[4632]: E0313 12:30:00.164593 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba59e3a2-3d83-4f9d-8633-788ba1bf518c" containerName="oc" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.164608 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba59e3a2-3d83-4f9d-8633-788ba1bf518c" containerName="oc" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.164804 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba59e3a2-3d83-4f9d-8633-788ba1bf518c" containerName="oc" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.165410 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556750-8fj49" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.171199 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.171293 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.171402 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.188847 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556750-8fj49"] Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.253763 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4"] Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.255884 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.258240 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.258495 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.274717 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v56sh\" (UniqueName: \"kubernetes.io/projected/8cc083b6-fb70-478e-9824-d9eb3cb1fe5b-kube-api-access-v56sh\") pod \"auto-csr-approver-29556750-8fj49\" (UID: \"8cc083b6-fb70-478e-9824-d9eb3cb1fe5b\") " pod="openshift-infra/auto-csr-approver-29556750-8fj49" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.277453 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4"] Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.376822 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7632e985-c049-4f40-b7e1-06337842cc06-secret-volume\") pod \"collect-profiles-29556750-66vw4\" (UID: \"7632e985-c049-4f40-b7e1-06337842cc06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.376894 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7632e985-c049-4f40-b7e1-06337842cc06-config-volume\") pod \"collect-profiles-29556750-66vw4\" (UID: \"7632e985-c049-4f40-b7e1-06337842cc06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.376927 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgq8z\" (UniqueName: \"kubernetes.io/projected/7632e985-c049-4f40-b7e1-06337842cc06-kube-api-access-jgq8z\") pod \"collect-profiles-29556750-66vw4\" (UID: \"7632e985-c049-4f40-b7e1-06337842cc06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.377357 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v56sh\" (UniqueName: \"kubernetes.io/projected/8cc083b6-fb70-478e-9824-d9eb3cb1fe5b-kube-api-access-v56sh\") pod \"auto-csr-approver-29556750-8fj49\" (UID: \"8cc083b6-fb70-478e-9824-d9eb3cb1fe5b\") " pod="openshift-infra/auto-csr-approver-29556750-8fj49" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.404706 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v56sh\" (UniqueName: \"kubernetes.io/projected/8cc083b6-fb70-478e-9824-d9eb3cb1fe5b-kube-api-access-v56sh\") pod \"auto-csr-approver-29556750-8fj49\" (UID: \"8cc083b6-fb70-478e-9824-d9eb3cb1fe5b\") " pod="openshift-infra/auto-csr-approver-29556750-8fj49" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.521562 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556750-8fj49" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.522481 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7632e985-c049-4f40-b7e1-06337842cc06-secret-volume\") pod \"collect-profiles-29556750-66vw4\" (UID: \"7632e985-c049-4f40-b7e1-06337842cc06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.522553 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7632e985-c049-4f40-b7e1-06337842cc06-config-volume\") pod \"collect-profiles-29556750-66vw4\" (UID: \"7632e985-c049-4f40-b7e1-06337842cc06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.522577 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgq8z\" (UniqueName: \"kubernetes.io/projected/7632e985-c049-4f40-b7e1-06337842cc06-kube-api-access-jgq8z\") pod \"collect-profiles-29556750-66vw4\" (UID: \"7632e985-c049-4f40-b7e1-06337842cc06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.523661 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7632e985-c049-4f40-b7e1-06337842cc06-config-volume\") pod \"collect-profiles-29556750-66vw4\" (UID: \"7632e985-c049-4f40-b7e1-06337842cc06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.528267 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7632e985-c049-4f40-b7e1-06337842cc06-secret-volume\") pod \"collect-profiles-29556750-66vw4\" (UID: \"7632e985-c049-4f40-b7e1-06337842cc06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.545567 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgq8z\" (UniqueName: \"kubernetes.io/projected/7632e985-c049-4f40-b7e1-06337842cc06-kube-api-access-jgq8z\") pod \"collect-profiles-29556750-66vw4\" (UID: \"7632e985-c049-4f40-b7e1-06337842cc06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" Mar 13 12:30:00 crc kubenswrapper[4632]: I0313 12:30:00.577061 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" Mar 13 12:30:01 crc kubenswrapper[4632]: I0313 12:30:01.110047 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556750-8fj49"] Mar 13 12:30:01 crc kubenswrapper[4632]: I0313 12:30:01.205175 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4"] Mar 13 12:30:01 crc kubenswrapper[4632]: I0313 12:30:01.458175 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556750-8fj49" event={"ID":"8cc083b6-fb70-478e-9824-d9eb3cb1fe5b","Type":"ContainerStarted","Data":"fa176c143f5e8cdba415104680bdf7b7395fcbe38fc69dda69b086b4bfe88541"} Mar 13 12:30:01 crc kubenswrapper[4632]: I0313 12:30:01.460878 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" event={"ID":"7632e985-c049-4f40-b7e1-06337842cc06","Type":"ContainerStarted","Data":"67ca3bd2d41705600db916cd2979b0933bfd8e0736f1cba722fe7532e8a4c7c5"} Mar 13 12:30:01 crc kubenswrapper[4632]: I0313 12:30:01.460977 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" event={"ID":"7632e985-c049-4f40-b7e1-06337842cc06","Type":"ContainerStarted","Data":"f33f1afad4f51eb46c04f717f16a08b1bc00e6bd98ef5e322de77dafe10fd1f4"} Mar 13 12:30:02 crc kubenswrapper[4632]: I0313 12:30:02.474194 4632 generic.go:334] "Generic (PLEG): container finished" podID="7632e985-c049-4f40-b7e1-06337842cc06" containerID="67ca3bd2d41705600db916cd2979b0933bfd8e0736f1cba722fe7532e8a4c7c5" exitCode=0 Mar 13 12:30:02 crc kubenswrapper[4632]: I0313 12:30:02.474372 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" event={"ID":"7632e985-c049-4f40-b7e1-06337842cc06","Type":"ContainerDied","Data":"67ca3bd2d41705600db916cd2979b0933bfd8e0736f1cba722fe7532e8a4c7c5"} Mar 13 12:30:03 crc kubenswrapper[4632]: I0313 12:30:03.045423 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:30:03 crc kubenswrapper[4632]: E0313 12:30:03.046084 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:30:03 crc kubenswrapper[4632]: I0313 12:30:03.491661 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556750-8fj49" event={"ID":"8cc083b6-fb70-478e-9824-d9eb3cb1fe5b","Type":"ContainerStarted","Data":"50ec5eed6591caef46ce66e044fc885293f40a008476cefa9221d3ccb1262877"} Mar 13 12:30:03 crc kubenswrapper[4632]: I0313 12:30:03.520572 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556750-8fj49" podStartSLOduration=1.733201816 podStartE2EDuration="3.520549823s" podCreationTimestamp="2026-03-13 12:30:00 +0000 UTC" firstStartedPulling="2026-03-13 12:30:01.120093869 +0000 UTC m=+8775.142624002" lastFinishedPulling="2026-03-13 12:30:02.907441876 +0000 UTC m=+8776.929972009" 
Mar 13 12:30:03 crc kubenswrapper[4632]: I0313 12:30:03.882412 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" Mar 13 12:30:04 crc kubenswrapper[4632]: I0313 12:30:04.048051 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7632e985-c049-4f40-b7e1-06337842cc06-secret-volume\") pod \"7632e985-c049-4f40-b7e1-06337842cc06\" (UID: \"7632e985-c049-4f40-b7e1-06337842cc06\") " Mar 13 12:30:04 crc kubenswrapper[4632]: I0313 12:30:04.048292 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7632e985-c049-4f40-b7e1-06337842cc06-config-volume\") pod \"7632e985-c049-4f40-b7e1-06337842cc06\" (UID: \"7632e985-c049-4f40-b7e1-06337842cc06\") " Mar 13 12:30:04 crc kubenswrapper[4632]: I0313 12:30:04.048746 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7632e985-c049-4f40-b7e1-06337842cc06-config-volume" (OuterVolumeSpecName: "config-volume") pod "7632e985-c049-4f40-b7e1-06337842cc06" (UID: "7632e985-c049-4f40-b7e1-06337842cc06"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:30:04 crc kubenswrapper[4632]: I0313 12:30:04.049112 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgq8z\" (UniqueName: \"kubernetes.io/projected/7632e985-c049-4f40-b7e1-06337842cc06-kube-api-access-jgq8z\") pod \"7632e985-c049-4f40-b7e1-06337842cc06\" (UID: \"7632e985-c049-4f40-b7e1-06337842cc06\") " Mar 13 12:30:04 crc kubenswrapper[4632]: I0313 12:30:04.049727 4632 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7632e985-c049-4f40-b7e1-06337842cc06-config-volume\") on node \"crc\" DevicePath \"\"" Mar 13 12:30:04 crc kubenswrapper[4632]: I0313 12:30:04.055010 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7632e985-c049-4f40-b7e1-06337842cc06-kube-api-access-jgq8z" (OuterVolumeSpecName: "kube-api-access-jgq8z") pod "7632e985-c049-4f40-b7e1-06337842cc06" (UID: "7632e985-c049-4f40-b7e1-06337842cc06"). InnerVolumeSpecName "kube-api-access-jgq8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:30:04 crc kubenswrapper[4632]: I0313 12:30:04.061084 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7632e985-c049-4f40-b7e1-06337842cc06-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7632e985-c049-4f40-b7e1-06337842cc06" (UID: "7632e985-c049-4f40-b7e1-06337842cc06"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:30:04 crc kubenswrapper[4632]: I0313 12:30:04.151163 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgq8z\" (UniqueName: \"kubernetes.io/projected/7632e985-c049-4f40-b7e1-06337842cc06-kube-api-access-jgq8z\") on node \"crc\" DevicePath \"\"" Mar 13 12:30:04 crc kubenswrapper[4632]: I0313 12:30:04.151206 4632 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7632e985-c049-4f40-b7e1-06337842cc06-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 13 12:30:04 crc kubenswrapper[4632]: I0313 12:30:04.504519 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" Mar 13 12:30:04 crc kubenswrapper[4632]: I0313 12:30:04.505019 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556750-66vw4" event={"ID":"7632e985-c049-4f40-b7e1-06337842cc06","Type":"ContainerDied","Data":"f33f1afad4f51eb46c04f717f16a08b1bc00e6bd98ef5e322de77dafe10fd1f4"} Mar 13 12:30:04 crc kubenswrapper[4632]: I0313 12:30:04.505058 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f33f1afad4f51eb46c04f717f16a08b1bc00e6bd98ef5e322de77dafe10fd1f4" Mar 13 12:30:04 crc kubenswrapper[4632]: I0313 12:30:04.578464 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv"] Mar 13 12:30:04 crc kubenswrapper[4632]: I0313 12:30:04.592101 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556705-cdjnv"] Mar 13 12:30:05 crc kubenswrapper[4632]: I0313 12:30:05.511767 4632 generic.go:334] "Generic (PLEG): container finished" podID="8cc083b6-fb70-478e-9824-d9eb3cb1fe5b" containerID="50ec5eed6591caef46ce66e044fc885293f40a008476cefa9221d3ccb1262877" exitCode=0 Mar 13 12:30:05 crc kubenswrapper[4632]: I0313 12:30:05.511809 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556750-8fj49" event={"ID":"8cc083b6-fb70-478e-9824-d9eb3cb1fe5b","Type":"ContainerDied","Data":"50ec5eed6591caef46ce66e044fc885293f40a008476cefa9221d3ccb1262877"} Mar 13 12:30:06 crc kubenswrapper[4632]: I0313 12:30:06.070501 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7" path="/var/lib/kubelet/pods/964e7b0f-4dfa-43e3-9ed5-a9c176c8cfc7/volumes" Mar 13 12:30:06 crc kubenswrapper[4632]: I0313 12:30:06.935031 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556750-8fj49" Mar 13 12:30:07 crc kubenswrapper[4632]: I0313 12:30:07.008925 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v56sh\" (UniqueName: \"kubernetes.io/projected/8cc083b6-fb70-478e-9824-d9eb3cb1fe5b-kube-api-access-v56sh\") pod \"8cc083b6-fb70-478e-9824-d9eb3cb1fe5b\" (UID: \"8cc083b6-fb70-478e-9824-d9eb3cb1fe5b\") " Mar 13 12:30:07 crc kubenswrapper[4632]: I0313 12:30:07.015724 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cc083b6-fb70-478e-9824-d9eb3cb1fe5b-kube-api-access-v56sh" (OuterVolumeSpecName: "kube-api-access-v56sh") pod "8cc083b6-fb70-478e-9824-d9eb3cb1fe5b" (UID: "8cc083b6-fb70-478e-9824-d9eb3cb1fe5b"). 
InnerVolumeSpecName "kube-api-access-v56sh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:30:07 crc kubenswrapper[4632]: I0313 12:30:07.111656 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v56sh\" (UniqueName: \"kubernetes.io/projected/8cc083b6-fb70-478e-9824-d9eb3cb1fe5b-kube-api-access-v56sh\") on node \"crc\" DevicePath \"\"" Mar 13 12:30:07 crc kubenswrapper[4632]: I0313 12:30:07.533850 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556750-8fj49" event={"ID":"8cc083b6-fb70-478e-9824-d9eb3cb1fe5b","Type":"ContainerDied","Data":"fa176c143f5e8cdba415104680bdf7b7395fcbe38fc69dda69b086b4bfe88541"} Mar 13 12:30:07 crc kubenswrapper[4632]: I0313 12:30:07.534148 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556750-8fj49" Mar 13 12:30:07 crc kubenswrapper[4632]: I0313 12:30:07.534154 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa176c143f5e8cdba415104680bdf7b7395fcbe38fc69dda69b086b4bfe88541" Mar 13 12:30:07 crc kubenswrapper[4632]: I0313 12:30:07.603824 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556744-fb92p"] Mar 13 12:30:07 crc kubenswrapper[4632]: I0313 12:30:07.665281 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556744-fb92p"] Mar 13 12:30:08 crc kubenswrapper[4632]: I0313 12:30:08.058269 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1006ed9-194b-4d1b-91cf-7722ce335023" path="/var/lib/kubelet/pods/e1006ed9-194b-4d1b-91cf-7722ce335023/volumes" Mar 13 12:30:17 crc kubenswrapper[4632]: I0313 12:30:17.045163 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:30:17 crc kubenswrapper[4632]: E0313 12:30:17.046237 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:30:27 crc kubenswrapper[4632]: I0313 12:30:27.037342 4632 scope.go:117] "RemoveContainer" containerID="8c147fe4b276fdf885236433df734b60b43a6fbbd4e1c4d2a7bec9fd5c3cc6e2" Mar 13 12:30:27 crc kubenswrapper[4632]: I0313 12:30:27.162952 4632 scope.go:117] "RemoveContainer" containerID="a50d24de30277dacbb16bc71e07335e3c84d2cedb12dfb6c3d660775ff2f0c54" Mar 13 12:30:28 crc kubenswrapper[4632]: I0313 12:30:28.046997 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:30:28 crc kubenswrapper[4632]: E0313 12:30:28.047667 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:30:42 crc kubenswrapper[4632]: I0313 12:30:42.045062 4632 scope.go:117] "RemoveContainer" 
containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:30:42 crc kubenswrapper[4632]: E0313 12:30:42.045901 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:30:54 crc kubenswrapper[4632]: I0313 12:30:54.044335 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:30:54 crc kubenswrapper[4632]: E0313 12:30:54.046330 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:31:07 crc kubenswrapper[4632]: I0313 12:31:07.044200 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:31:07 crc kubenswrapper[4632]: E0313 12:31:07.045077 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:31:18 crc kubenswrapper[4632]: I0313 12:31:18.053672 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:31:18 crc kubenswrapper[4632]: I0313 12:31:18.396536 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"ada2bd3447f81dbcb3c7c10ab1a84d7a61b81476a09d5bccd655ef21929539af"} Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.158964 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g9xk2"] Mar 13 12:31:25 crc kubenswrapper[4632]: E0313 12:31:25.160332 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cc083b6-fb70-478e-9824-d9eb3cb1fe5b" containerName="oc" Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.160352 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cc083b6-fb70-478e-9824-d9eb3cb1fe5b" containerName="oc" Mar 13 12:31:25 crc kubenswrapper[4632]: E0313 12:31:25.160366 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7632e985-c049-4f40-b7e1-06337842cc06" containerName="collect-profiles" Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.160374 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="7632e985-c049-4f40-b7e1-06337842cc06" containerName="collect-profiles" Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.160575 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="7632e985-c049-4f40-b7e1-06337842cc06" 
containerName="collect-profiles" Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.160599 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cc083b6-fb70-478e-9824-d9eb3cb1fe5b" containerName="oc" Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.162474 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.231186 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g9xk2"] Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.250526 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/890fbbb1-da06-4cd8-80ad-3606cf60429c-utilities\") pod \"community-operators-g9xk2\" (UID: \"890fbbb1-da06-4cd8-80ad-3606cf60429c\") " pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.250625 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/890fbbb1-da06-4cd8-80ad-3606cf60429c-catalog-content\") pod \"community-operators-g9xk2\" (UID: \"890fbbb1-da06-4cd8-80ad-3606cf60429c\") " pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.250661 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6qsw\" (UniqueName: \"kubernetes.io/projected/890fbbb1-da06-4cd8-80ad-3606cf60429c-kube-api-access-b6qsw\") pod \"community-operators-g9xk2\" (UID: \"890fbbb1-da06-4cd8-80ad-3606cf60429c\") " pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.352304 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/890fbbb1-da06-4cd8-80ad-3606cf60429c-utilities\") pod \"community-operators-g9xk2\" (UID: \"890fbbb1-da06-4cd8-80ad-3606cf60429c\") " pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.352363 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/890fbbb1-da06-4cd8-80ad-3606cf60429c-catalog-content\") pod \"community-operators-g9xk2\" (UID: \"890fbbb1-da06-4cd8-80ad-3606cf60429c\") " pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.352392 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6qsw\" (UniqueName: \"kubernetes.io/projected/890fbbb1-da06-4cd8-80ad-3606cf60429c-kube-api-access-b6qsw\") pod \"community-operators-g9xk2\" (UID: \"890fbbb1-da06-4cd8-80ad-3606cf60429c\") " pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.353560 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/890fbbb1-da06-4cd8-80ad-3606cf60429c-utilities\") pod \"community-operators-g9xk2\" (UID: \"890fbbb1-da06-4cd8-80ad-3606cf60429c\") " pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.353575 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/890fbbb1-da06-4cd8-80ad-3606cf60429c-catalog-content\") pod \"community-operators-g9xk2\" (UID: \"890fbbb1-da06-4cd8-80ad-3606cf60429c\") " pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.379869 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6qsw\" (UniqueName: \"kubernetes.io/projected/890fbbb1-da06-4cd8-80ad-3606cf60429c-kube-api-access-b6qsw\") pod \"community-operators-g9xk2\" (UID: \"890fbbb1-da06-4cd8-80ad-3606cf60429c\") " pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:25 crc kubenswrapper[4632]: I0313 12:31:25.484384 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:26 crc kubenswrapper[4632]: I0313 12:31:26.615724 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g9xk2"] Mar 13 12:31:26 crc kubenswrapper[4632]: W0313 12:31:26.637725 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod890fbbb1_da06_4cd8_80ad_3606cf60429c.slice/crio-91406894900e25486440e9ccbbbcf37ffdbcd032fb6ed3c80c9a9df23adbf89d WatchSource:0}: Error finding container 91406894900e25486440e9ccbbbcf37ffdbcd032fb6ed3c80c9a9df23adbf89d: Status 404 returned error can't find the container with id 91406894900e25486440e9ccbbbcf37ffdbcd032fb6ed3c80c9a9df23adbf89d Mar 13 12:31:27 crc kubenswrapper[4632]: I0313 12:31:27.482928 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9xk2" event={"ID":"890fbbb1-da06-4cd8-80ad-3606cf60429c","Type":"ContainerDied","Data":"41ca3fc9586c0ae46fcfec562f2e7230a5fd9855fcb4904bbbb958362880638c"} Mar 13 12:31:27 crc kubenswrapper[4632]: I0313 12:31:27.483149 4632 generic.go:334] "Generic (PLEG): container finished" podID="890fbbb1-da06-4cd8-80ad-3606cf60429c" containerID="41ca3fc9586c0ae46fcfec562f2e7230a5fd9855fcb4904bbbb958362880638c" exitCode=0 Mar 13 12:31:27 crc kubenswrapper[4632]: I0313 12:31:27.483760 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9xk2" event={"ID":"890fbbb1-da06-4cd8-80ad-3606cf60429c","Type":"ContainerStarted","Data":"91406894900e25486440e9ccbbbcf37ffdbcd032fb6ed3c80c9a9df23adbf89d"} Mar 13 12:31:27 crc kubenswrapper[4632]: I0313 12:31:27.489024 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 12:31:28 crc kubenswrapper[4632]: I0313 12:31:28.494306 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9xk2" event={"ID":"890fbbb1-da06-4cd8-80ad-3606cf60429c","Type":"ContainerStarted","Data":"0aa7a5a74d17454fafe7a3ea9a1ca0a93f0db659d516166286bcd7d7d51f347f"} Mar 13 12:31:31 crc kubenswrapper[4632]: I0313 12:31:31.533263 4632 generic.go:334] "Generic (PLEG): container finished" podID="890fbbb1-da06-4cd8-80ad-3606cf60429c" containerID="0aa7a5a74d17454fafe7a3ea9a1ca0a93f0db659d516166286bcd7d7d51f347f" exitCode=0 Mar 13 12:31:31 crc kubenswrapper[4632]: I0313 12:31:31.533329 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9xk2" event={"ID":"890fbbb1-da06-4cd8-80ad-3606cf60429c","Type":"ContainerDied","Data":"0aa7a5a74d17454fafe7a3ea9a1ca0a93f0db659d516166286bcd7d7d51f347f"} Mar 13 12:31:32 crc 
kubenswrapper[4632]: I0313 12:31:32.546441 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9xk2" event={"ID":"890fbbb1-da06-4cd8-80ad-3606cf60429c","Type":"ContainerStarted","Data":"8f38e62b583cfd0853cbd527e55c37912c0be15194f7d16e790f2fedb8e0673e"} Mar 13 12:31:32 crc kubenswrapper[4632]: I0313 12:31:32.571908 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g9xk2" podStartSLOduration=3.08852628 podStartE2EDuration="7.568733761s" podCreationTimestamp="2026-03-13 12:31:25 +0000 UTC" firstStartedPulling="2026-03-13 12:31:27.485824049 +0000 UTC m=+8861.508354192" lastFinishedPulling="2026-03-13 12:31:31.96603154 +0000 UTC m=+8865.988561673" observedRunningTime="2026-03-13 12:31:32.566321762 +0000 UTC m=+8866.588851925" watchObservedRunningTime="2026-03-13 12:31:32.568733761 +0000 UTC m=+8866.591263914" Mar 13 12:31:35 crc kubenswrapper[4632]: I0313 12:31:35.486075 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:35 crc kubenswrapper[4632]: I0313 12:31:35.486423 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:36 crc kubenswrapper[4632]: I0313 12:31:36.542502 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-g9xk2" podUID="890fbbb1-da06-4cd8-80ad-3606cf60429c" containerName="registry-server" probeResult="failure" output=< Mar 13 12:31:36 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:31:36 crc kubenswrapper[4632]: > Mar 13 12:31:45 crc kubenswrapper[4632]: I0313 12:31:45.563825 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:45 crc kubenswrapper[4632]: I0313 12:31:45.635117 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:45 crc kubenswrapper[4632]: I0313 12:31:45.812265 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g9xk2"] Mar 13 12:31:46 crc kubenswrapper[4632]: I0313 12:31:46.686060 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g9xk2" podUID="890fbbb1-da06-4cd8-80ad-3606cf60429c" containerName="registry-server" containerID="cri-o://8f38e62b583cfd0853cbd527e55c37912c0be15194f7d16e790f2fedb8e0673e" gracePeriod=2 Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.534650 4632 util.go:48] "No ready sandbox for pod can be found. 
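Need to start a new one" pod="openshift-marketplace/community-operators-g9xk2"

The startup-probe failure above is worth a note: its output, timeout: failed to connect service ":50051" within 1s, is the format printed by a gRPC health-check prober aimed at the registry-server's port 50051 (this looks like grpc_health_probe output; that is an inference from the message format, not something the log states). At the transport level the failing step amounts to a dial with a one-second deadline, sketched here in Go; the host and port are taken from the probe output, and a real gRPC probe would go on to issue a Health/Check RPC:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Transport-level equivalent of the failing startup probe: try to reach
    // the registry-server's gRPC port within 1s. While the catalog is still
    // being extracted, nothing listens on the port yet, so this times out.
    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:50051", time.Second)
        if err != nil {
            fmt.Printf("timeout: failed to connect service %q within 1s (%v)\n", ":50051", err)
            return
        }
        defer conn.Close()
        fmt.Println("connected; a gRPC health probe would now issue Health/Check")
    }

The probe flips to "started" and the readiness probe to "ready" at 12:31:45 once the registry has finished loading; the DELETE and "Killing container" entries that follow appear to be the marketplace catalog pod being rotated out rather than another probe failure.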
Need to start a new one" pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.595468 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6qsw\" (UniqueName: \"kubernetes.io/projected/890fbbb1-da06-4cd8-80ad-3606cf60429c-kube-api-access-b6qsw\") pod \"890fbbb1-da06-4cd8-80ad-3606cf60429c\" (UID: \"890fbbb1-da06-4cd8-80ad-3606cf60429c\") " Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.595746 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/890fbbb1-da06-4cd8-80ad-3606cf60429c-utilities\") pod \"890fbbb1-da06-4cd8-80ad-3606cf60429c\" (UID: \"890fbbb1-da06-4cd8-80ad-3606cf60429c\") " Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.595796 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/890fbbb1-da06-4cd8-80ad-3606cf60429c-catalog-content\") pod \"890fbbb1-da06-4cd8-80ad-3606cf60429c\" (UID: \"890fbbb1-da06-4cd8-80ad-3606cf60429c\") " Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.597656 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/890fbbb1-da06-4cd8-80ad-3606cf60429c-utilities" (OuterVolumeSpecName: "utilities") pod "890fbbb1-da06-4cd8-80ad-3606cf60429c" (UID: "890fbbb1-da06-4cd8-80ad-3606cf60429c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.616257 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/890fbbb1-da06-4cd8-80ad-3606cf60429c-kube-api-access-b6qsw" (OuterVolumeSpecName: "kube-api-access-b6qsw") pod "890fbbb1-da06-4cd8-80ad-3606cf60429c" (UID: "890fbbb1-da06-4cd8-80ad-3606cf60429c"). InnerVolumeSpecName "kube-api-access-b6qsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.663325 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/890fbbb1-da06-4cd8-80ad-3606cf60429c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "890fbbb1-da06-4cd8-80ad-3606cf60429c" (UID: "890fbbb1-da06-4cd8-80ad-3606cf60429c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.698710 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6qsw\" (UniqueName: \"kubernetes.io/projected/890fbbb1-da06-4cd8-80ad-3606cf60429c-kube-api-access-b6qsw\") on node \"crc\" DevicePath \"\"" Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.698755 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/890fbbb1-da06-4cd8-80ad-3606cf60429c-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.698768 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/890fbbb1-da06-4cd8-80ad-3606cf60429c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.702381 4632 generic.go:334] "Generic (PLEG): container finished" podID="890fbbb1-da06-4cd8-80ad-3606cf60429c" containerID="8f38e62b583cfd0853cbd527e55c37912c0be15194f7d16e790f2fedb8e0673e" exitCode=0 Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.702430 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9xk2" Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.702436 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9xk2" event={"ID":"890fbbb1-da06-4cd8-80ad-3606cf60429c","Type":"ContainerDied","Data":"8f38e62b583cfd0853cbd527e55c37912c0be15194f7d16e790f2fedb8e0673e"} Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.702473 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9xk2" event={"ID":"890fbbb1-da06-4cd8-80ad-3606cf60429c","Type":"ContainerDied","Data":"91406894900e25486440e9ccbbbcf37ffdbcd032fb6ed3c80c9a9df23adbf89d"} Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.702495 4632 scope.go:117] "RemoveContainer" containerID="8f38e62b583cfd0853cbd527e55c37912c0be15194f7d16e790f2fedb8e0673e" Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.734885 4632 scope.go:117] "RemoveContainer" containerID="0aa7a5a74d17454fafe7a3ea9a1ca0a93f0db659d516166286bcd7d7d51f347f" Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.756269 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g9xk2"] Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.759132 4632 scope.go:117] "RemoveContainer" containerID="41ca3fc9586c0ae46fcfec562f2e7230a5fd9855fcb4904bbbb958362880638c" Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.765074 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g9xk2"] Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.813632 4632 scope.go:117] "RemoveContainer" containerID="8f38e62b583cfd0853cbd527e55c37912c0be15194f7d16e790f2fedb8e0673e" Mar 13 12:31:47 crc kubenswrapper[4632]: E0313 12:31:47.816920 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f38e62b583cfd0853cbd527e55c37912c0be15194f7d16e790f2fedb8e0673e\": container with ID starting with 8f38e62b583cfd0853cbd527e55c37912c0be15194f7d16e790f2fedb8e0673e not found: ID does not exist" containerID="8f38e62b583cfd0853cbd527e55c37912c0be15194f7d16e790f2fedb8e0673e" Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.817968 
4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f38e62b583cfd0853cbd527e55c37912c0be15194f7d16e790f2fedb8e0673e"} err="failed to get container status \"8f38e62b583cfd0853cbd527e55c37912c0be15194f7d16e790f2fedb8e0673e\": rpc error: code = NotFound desc = could not find container \"8f38e62b583cfd0853cbd527e55c37912c0be15194f7d16e790f2fedb8e0673e\": container with ID starting with 8f38e62b583cfd0853cbd527e55c37912c0be15194f7d16e790f2fedb8e0673e not found: ID does not exist" Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.818007 4632 scope.go:117] "RemoveContainer" containerID="0aa7a5a74d17454fafe7a3ea9a1ca0a93f0db659d516166286bcd7d7d51f347f" Mar 13 12:31:47 crc kubenswrapper[4632]: E0313 12:31:47.818467 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aa7a5a74d17454fafe7a3ea9a1ca0a93f0db659d516166286bcd7d7d51f347f\": container with ID starting with 0aa7a5a74d17454fafe7a3ea9a1ca0a93f0db659d516166286bcd7d7d51f347f not found: ID does not exist" containerID="0aa7a5a74d17454fafe7a3ea9a1ca0a93f0db659d516166286bcd7d7d51f347f" Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.818507 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aa7a5a74d17454fafe7a3ea9a1ca0a93f0db659d516166286bcd7d7d51f347f"} err="failed to get container status \"0aa7a5a74d17454fafe7a3ea9a1ca0a93f0db659d516166286bcd7d7d51f347f\": rpc error: code = NotFound desc = could not find container \"0aa7a5a74d17454fafe7a3ea9a1ca0a93f0db659d516166286bcd7d7d51f347f\": container with ID starting with 0aa7a5a74d17454fafe7a3ea9a1ca0a93f0db659d516166286bcd7d7d51f347f not found: ID does not exist" Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.819120 4632 scope.go:117] "RemoveContainer" containerID="41ca3fc9586c0ae46fcfec562f2e7230a5fd9855fcb4904bbbb958362880638c" Mar 13 12:31:47 crc kubenswrapper[4632]: E0313 12:31:47.819479 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ca3fc9586c0ae46fcfec562f2e7230a5fd9855fcb4904bbbb958362880638c\": container with ID starting with 41ca3fc9586c0ae46fcfec562f2e7230a5fd9855fcb4904bbbb958362880638c not found: ID does not exist" containerID="41ca3fc9586c0ae46fcfec562f2e7230a5fd9855fcb4904bbbb958362880638c" Mar 13 12:31:47 crc kubenswrapper[4632]: I0313 12:31:47.819528 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ca3fc9586c0ae46fcfec562f2e7230a5fd9855fcb4904bbbb958362880638c"} err="failed to get container status \"41ca3fc9586c0ae46fcfec562f2e7230a5fd9855fcb4904bbbb958362880638c\": rpc error: code = NotFound desc = could not find container \"41ca3fc9586c0ae46fcfec562f2e7230a5fd9855fcb4904bbbb958362880638c\": container with ID starting with 41ca3fc9586c0ae46fcfec562f2e7230a5fd9855fcb4904bbbb958362880638c not found: ID does not exist" Mar 13 12:31:48 crc kubenswrapper[4632]: I0313 12:31:48.055788 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="890fbbb1-da06-4cd8-80ad-3606cf60429c" path="/var/lib/kubelet/pods/890fbbb1-da06-4cd8-80ad-3606cf60429c/volumes" Mar 13 12:32:00 crc kubenswrapper[4632]: I0313 12:32:00.201233 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556752-48xgt"] Mar 13 12:32:00 crc kubenswrapper[4632]: E0313 12:32:00.202198 4632 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="890fbbb1-da06-4cd8-80ad-3606cf60429c" containerName="extract-utilities" Mar 13 12:32:00 crc kubenswrapper[4632]: I0313 12:32:00.202214 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="890fbbb1-da06-4cd8-80ad-3606cf60429c" containerName="extract-utilities" Mar 13 12:32:00 crc kubenswrapper[4632]: E0313 12:32:00.202226 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="890fbbb1-da06-4cd8-80ad-3606cf60429c" containerName="registry-server" Mar 13 12:32:00 crc kubenswrapper[4632]: I0313 12:32:00.202233 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="890fbbb1-da06-4cd8-80ad-3606cf60429c" containerName="registry-server" Mar 13 12:32:00 crc kubenswrapper[4632]: E0313 12:32:00.202248 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="890fbbb1-da06-4cd8-80ad-3606cf60429c" containerName="extract-content" Mar 13 12:32:00 crc kubenswrapper[4632]: I0313 12:32:00.202254 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="890fbbb1-da06-4cd8-80ad-3606cf60429c" containerName="extract-content" Mar 13 12:32:00 crc kubenswrapper[4632]: I0313 12:32:00.202464 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="890fbbb1-da06-4cd8-80ad-3606cf60429c" containerName="registry-server" Mar 13 12:32:00 crc kubenswrapper[4632]: I0313 12:32:00.204671 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556752-48xgt" Mar 13 12:32:00 crc kubenswrapper[4632]: I0313 12:32:00.214306 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556752-48xgt"] Mar 13 12:32:00 crc kubenswrapper[4632]: I0313 12:32:00.219693 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:32:00 crc kubenswrapper[4632]: I0313 12:32:00.247462 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:32:00 crc kubenswrapper[4632]: I0313 12:32:00.248268 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:32:00 crc kubenswrapper[4632]: I0313 12:32:00.258308 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmpl8\" (UniqueName: \"kubernetes.io/projected/c311ec54-27ae-4082-bd23-4df180976b2f-kube-api-access-rmpl8\") pod \"auto-csr-approver-29556752-48xgt\" (UID: \"c311ec54-27ae-4082-bd23-4df180976b2f\") " pod="openshift-infra/auto-csr-approver-29556752-48xgt" Mar 13 12:32:00 crc kubenswrapper[4632]: I0313 12:32:00.360035 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmpl8\" (UniqueName: \"kubernetes.io/projected/c311ec54-27ae-4082-bd23-4df180976b2f-kube-api-access-rmpl8\") pod \"auto-csr-approver-29556752-48xgt\" (UID: \"c311ec54-27ae-4082-bd23-4df180976b2f\") " pod="openshift-infra/auto-csr-approver-29556752-48xgt" Mar 13 12:32:00 crc kubenswrapper[4632]: I0313 12:32:00.380594 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmpl8\" (UniqueName: \"kubernetes.io/projected/c311ec54-27ae-4082-bd23-4df180976b2f-kube-api-access-rmpl8\") pod \"auto-csr-approver-29556752-48xgt\" (UID: \"c311ec54-27ae-4082-bd23-4df180976b2f\") " pod="openshift-infra/auto-csr-approver-29556752-48xgt" Mar 13 12:32:00 crc kubenswrapper[4632]: I0313 12:32:00.558563 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556752-48xgt" Mar 13 12:32:01 crc kubenswrapper[4632]: I0313 12:32:01.154488 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556752-48xgt"] Mar 13 12:32:01 crc kubenswrapper[4632]: I0313 12:32:01.841111 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556752-48xgt" event={"ID":"c311ec54-27ae-4082-bd23-4df180976b2f","Type":"ContainerStarted","Data":"fea7a8a20327e74d886dc273e5419e151a924579f2951bf3a57f9b9c8d553f18"} Mar 13 12:32:03 crc kubenswrapper[4632]: I0313 12:32:03.861356 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556752-48xgt" event={"ID":"c311ec54-27ae-4082-bd23-4df180976b2f","Type":"ContainerStarted","Data":"997fb6aac287ee23705f38733ad6b8cf02cea468d3978ae35a30d75ea0dfec0f"} Mar 13 12:32:03 crc kubenswrapper[4632]: I0313 12:32:03.882557 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556752-48xgt" podStartSLOduration=3.057365928 podStartE2EDuration="3.882533705s" podCreationTimestamp="2026-03-13 12:32:00 +0000 UTC" firstStartedPulling="2026-03-13 12:32:01.174693814 +0000 UTC m=+8895.197223947" lastFinishedPulling="2026-03-13 12:32:01.999861591 +0000 UTC m=+8896.022391724" observedRunningTime="2026-03-13 12:32:03.877582034 +0000 UTC m=+8897.900112167" watchObservedRunningTime="2026-03-13 12:32:03.882533705 +0000 UTC m=+8897.905063858" Mar 13 12:32:04 crc kubenswrapper[4632]: I0313 12:32:04.878363 4632 generic.go:334] "Generic (PLEG): container finished" podID="c311ec54-27ae-4082-bd23-4df180976b2f" containerID="997fb6aac287ee23705f38733ad6b8cf02cea468d3978ae35a30d75ea0dfec0f" exitCode=0 Mar 13 12:32:04 crc kubenswrapper[4632]: I0313 12:32:04.878442 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556752-48xgt" event={"ID":"c311ec54-27ae-4082-bd23-4df180976b2f","Type":"ContainerDied","Data":"997fb6aac287ee23705f38733ad6b8cf02cea468d3978ae35a30d75ea0dfec0f"} Mar 13 12:32:06 crc kubenswrapper[4632]: I0313 12:32:06.303958 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556752-48xgt" Mar 13 12:32:06 crc kubenswrapper[4632]: I0313 12:32:06.383451 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmpl8\" (UniqueName: \"kubernetes.io/projected/c311ec54-27ae-4082-bd23-4df180976b2f-kube-api-access-rmpl8\") pod \"c311ec54-27ae-4082-bd23-4df180976b2f\" (UID: \"c311ec54-27ae-4082-bd23-4df180976b2f\") " Mar 13 12:32:06 crc kubenswrapper[4632]: I0313 12:32:06.399404 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c311ec54-27ae-4082-bd23-4df180976b2f-kube-api-access-rmpl8" (OuterVolumeSpecName: "kube-api-access-rmpl8") pod "c311ec54-27ae-4082-bd23-4df180976b2f" (UID: "c311ec54-27ae-4082-bd23-4df180976b2f"). InnerVolumeSpecName "kube-api-access-rmpl8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:32:06 crc kubenswrapper[4632]: I0313 12:32:06.487562 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmpl8\" (UniqueName: \"kubernetes.io/projected/c311ec54-27ae-4082-bd23-4df180976b2f-kube-api-access-rmpl8\") on node \"crc\" DevicePath \"\"" Mar 13 12:32:06 crc kubenswrapper[4632]: I0313 12:32:06.899006 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556752-48xgt" event={"ID":"c311ec54-27ae-4082-bd23-4df180976b2f","Type":"ContainerDied","Data":"fea7a8a20327e74d886dc273e5419e151a924579f2951bf3a57f9b9c8d553f18"} Mar 13 12:32:06 crc kubenswrapper[4632]: I0313 12:32:06.899053 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fea7a8a20327e74d886dc273e5419e151a924579f2951bf3a57f9b9c8d553f18" Mar 13 12:32:06 crc kubenswrapper[4632]: I0313 12:32:06.899067 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556752-48xgt" Mar 13 12:32:06 crc kubenswrapper[4632]: I0313 12:32:06.959349 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556746-jtv7q"] Mar 13 12:32:06 crc kubenswrapper[4632]: I0313 12:32:06.968284 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556746-jtv7q"] Mar 13 12:32:08 crc kubenswrapper[4632]: I0313 12:32:08.066743 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d88882b5-a11f-4606-8e0d-59471c7feccb" path="/var/lib/kubelet/pods/d88882b5-a11f-4606-8e0d-59471c7feccb/volumes" Mar 13 12:32:27 crc kubenswrapper[4632]: I0313 12:32:27.348814 4632 scope.go:117] "RemoveContainer" containerID="35ea86f2c8d1955a868a4d03ab725c4feb194878cc720326b8eb0c50ed5ce3c5" Mar 13 12:33:40 crc kubenswrapper[4632]: I0313 12:33:40.461336 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:33:40 crc kubenswrapper[4632]: I0313 12:33:40.461960 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:34:00 crc kubenswrapper[4632]: I0313 12:34:00.155201 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556754-kcjmj"] Mar 13 12:34:00 crc kubenswrapper[4632]: E0313 12:34:00.156479 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c311ec54-27ae-4082-bd23-4df180976b2f" containerName="oc" Mar 13 12:34:00 crc kubenswrapper[4632]: I0313 12:34:00.156504 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="c311ec54-27ae-4082-bd23-4df180976b2f" containerName="oc" Mar 13 12:34:00 crc kubenswrapper[4632]: I0313 12:34:00.156826 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="c311ec54-27ae-4082-bd23-4df180976b2f" containerName="oc" Mar 13 12:34:00 crc kubenswrapper[4632]: I0313 12:34:00.157678 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556754-kcjmj" Mar 13 12:34:00 crc kubenswrapper[4632]: I0313 12:34:00.165600 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:34:00 crc kubenswrapper[4632]: I0313 12:34:00.165935 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:34:00 crc kubenswrapper[4632]: I0313 12:34:00.169877 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:34:00 crc kubenswrapper[4632]: I0313 12:34:00.174592 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556754-kcjmj"] Mar 13 12:34:00 crc kubenswrapper[4632]: I0313 12:34:00.262632 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2v6b\" (UniqueName: \"kubernetes.io/projected/ccc70d73-b58b-4a2c-9bce-dc27405c5710-kube-api-access-r2v6b\") pod \"auto-csr-approver-29556754-kcjmj\" (UID: \"ccc70d73-b58b-4a2c-9bce-dc27405c5710\") " pod="openshift-infra/auto-csr-approver-29556754-kcjmj" Mar 13 12:34:00 crc kubenswrapper[4632]: I0313 12:34:00.365201 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2v6b\" (UniqueName: \"kubernetes.io/projected/ccc70d73-b58b-4a2c-9bce-dc27405c5710-kube-api-access-r2v6b\") pod \"auto-csr-approver-29556754-kcjmj\" (UID: \"ccc70d73-b58b-4a2c-9bce-dc27405c5710\") " pod="openshift-infra/auto-csr-approver-29556754-kcjmj" Mar 13 12:34:00 crc kubenswrapper[4632]: I0313 12:34:00.394548 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2v6b\" (UniqueName: \"kubernetes.io/projected/ccc70d73-b58b-4a2c-9bce-dc27405c5710-kube-api-access-r2v6b\") pod \"auto-csr-approver-29556754-kcjmj\" (UID: \"ccc70d73-b58b-4a2c-9bce-dc27405c5710\") " pod="openshift-infra/auto-csr-approver-29556754-kcjmj" Mar 13 12:34:00 crc kubenswrapper[4632]: I0313 12:34:00.481015 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556754-kcjmj" Mar 13 12:34:01 crc kubenswrapper[4632]: I0313 12:34:01.165454 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556754-kcjmj"] Mar 13 12:34:01 crc kubenswrapper[4632]: I0313 12:34:01.442726 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556754-kcjmj" event={"ID":"ccc70d73-b58b-4a2c-9bce-dc27405c5710","Type":"ContainerStarted","Data":"8c18300927c4a79068fc1d8cdd84afa4a73b1ab68a0bfed9583785fb44288336"} Mar 13 12:34:03 crc kubenswrapper[4632]: I0313 12:34:03.463087 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556754-kcjmj" event={"ID":"ccc70d73-b58b-4a2c-9bce-dc27405c5710","Type":"ContainerStarted","Data":"33c9be3390a29151e585ddbf79d6ef390b1a094d7878d4d4c96b9b0bb39d369c"} Mar 13 12:34:03 crc kubenswrapper[4632]: I0313 12:34:03.486256 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556754-kcjmj" podStartSLOduration=2.397711432 podStartE2EDuration="3.486232128s" podCreationTimestamp="2026-03-13 12:34:00 +0000 UTC" firstStartedPulling="2026-03-13 12:34:01.178238624 +0000 UTC m=+9015.200768747" lastFinishedPulling="2026-03-13 12:34:02.26675931 +0000 UTC m=+9016.289289443" observedRunningTime="2026-03-13 12:34:03.484200878 +0000 UTC m=+9017.506731021" watchObservedRunningTime="2026-03-13 12:34:03.486232128 +0000 UTC m=+9017.508762261" Mar 13 12:34:04 crc kubenswrapper[4632]: I0313 12:34:04.472643 4632 generic.go:334] "Generic (PLEG): container finished" podID="ccc70d73-b58b-4a2c-9bce-dc27405c5710" containerID="33c9be3390a29151e585ddbf79d6ef390b1a094d7878d4d4c96b9b0bb39d369c" exitCode=0 Mar 13 12:34:04 crc kubenswrapper[4632]: I0313 12:34:04.472845 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556754-kcjmj" event={"ID":"ccc70d73-b58b-4a2c-9bce-dc27405c5710","Type":"ContainerDied","Data":"33c9be3390a29151e585ddbf79d6ef390b1a094d7878d4d4c96b9b0bb39d369c"} Mar 13 12:34:06 crc kubenswrapper[4632]: I0313 12:34:06.005352 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556754-kcjmj" Mar 13 12:34:06 crc kubenswrapper[4632]: I0313 12:34:06.173431 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2v6b\" (UniqueName: \"kubernetes.io/projected/ccc70d73-b58b-4a2c-9bce-dc27405c5710-kube-api-access-r2v6b\") pod \"ccc70d73-b58b-4a2c-9bce-dc27405c5710\" (UID: \"ccc70d73-b58b-4a2c-9bce-dc27405c5710\") " Mar 13 12:34:06 crc kubenswrapper[4632]: I0313 12:34:06.193445 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc70d73-b58b-4a2c-9bce-dc27405c5710-kube-api-access-r2v6b" (OuterVolumeSpecName: "kube-api-access-r2v6b") pod "ccc70d73-b58b-4a2c-9bce-dc27405c5710" (UID: "ccc70d73-b58b-4a2c-9bce-dc27405c5710"). InnerVolumeSpecName "kube-api-access-r2v6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:34:06 crc kubenswrapper[4632]: I0313 12:34:06.276493 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2v6b\" (UniqueName: \"kubernetes.io/projected/ccc70d73-b58b-4a2c-9bce-dc27405c5710-kube-api-access-r2v6b\") on node \"crc\" DevicePath \"\"" Mar 13 12:34:06 crc kubenswrapper[4632]: I0313 12:34:06.500899 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556754-kcjmj" event={"ID":"ccc70d73-b58b-4a2c-9bce-dc27405c5710","Type":"ContainerDied","Data":"8c18300927c4a79068fc1d8cdd84afa4a73b1ab68a0bfed9583785fb44288336"} Mar 13 12:34:06 crc kubenswrapper[4632]: I0313 12:34:06.502179 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c18300927c4a79068fc1d8cdd84afa4a73b1ab68a0bfed9583785fb44288336" Mar 13 12:34:06 crc kubenswrapper[4632]: I0313 12:34:06.501170 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556754-kcjmj" Mar 13 12:34:06 crc kubenswrapper[4632]: I0313 12:34:06.571493 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556748-tr74t"] Mar 13 12:34:06 crc kubenswrapper[4632]: I0313 12:34:06.579468 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556748-tr74t"] Mar 13 12:34:08 crc kubenswrapper[4632]: I0313 12:34:08.063395 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba59e3a2-3d83-4f9d-8633-788ba1bf518c" path="/var/lib/kubelet/pods/ba59e3a2-3d83-4f9d-8633-788ba1bf518c/volumes" Mar 13 12:34:10 crc kubenswrapper[4632]: I0313 12:34:10.461218 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:34:10 crc kubenswrapper[4632]: I0313 12:34:10.461618 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:34:21 crc kubenswrapper[4632]: I0313 12:34:21.312454 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tvzvt"] Mar 13 12:34:21 crc kubenswrapper[4632]: E0313 12:34:21.313567 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc70d73-b58b-4a2c-9bce-dc27405c5710" containerName="oc" Mar 13 12:34:21 crc kubenswrapper[4632]: I0313 12:34:21.313586 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc70d73-b58b-4a2c-9bce-dc27405c5710" containerName="oc" Mar 13 12:34:21 crc kubenswrapper[4632]: I0313 12:34:21.313858 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc70d73-b58b-4a2c-9bce-dc27405c5710" containerName="oc" Mar 13 12:34:21 crc kubenswrapper[4632]: I0313 12:34:21.317834 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:21 crc kubenswrapper[4632]: I0313 12:34:21.338930 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvzvt"] Mar 13 12:34:21 crc kubenswrapper[4632]: I0313 12:34:21.503532 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tch8\" (UniqueName: \"kubernetes.io/projected/aefd0984-736f-4724-96e9-bba46baff210-kube-api-access-8tch8\") pod \"redhat-marketplace-tvzvt\" (UID: \"aefd0984-736f-4724-96e9-bba46baff210\") " pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:21 crc kubenswrapper[4632]: I0313 12:34:21.503639 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aefd0984-736f-4724-96e9-bba46baff210-utilities\") pod \"redhat-marketplace-tvzvt\" (UID: \"aefd0984-736f-4724-96e9-bba46baff210\") " pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:21 crc kubenswrapper[4632]: I0313 12:34:21.503840 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aefd0984-736f-4724-96e9-bba46baff210-catalog-content\") pod \"redhat-marketplace-tvzvt\" (UID: \"aefd0984-736f-4724-96e9-bba46baff210\") " pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:21 crc kubenswrapper[4632]: I0313 12:34:21.605800 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aefd0984-736f-4724-96e9-bba46baff210-catalog-content\") pod \"redhat-marketplace-tvzvt\" (UID: \"aefd0984-736f-4724-96e9-bba46baff210\") " pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:21 crc kubenswrapper[4632]: I0313 12:34:21.606001 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tch8\" (UniqueName: \"kubernetes.io/projected/aefd0984-736f-4724-96e9-bba46baff210-kube-api-access-8tch8\") pod \"redhat-marketplace-tvzvt\" (UID: \"aefd0984-736f-4724-96e9-bba46baff210\") " pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:21 crc kubenswrapper[4632]: I0313 12:34:21.606025 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aefd0984-736f-4724-96e9-bba46baff210-utilities\") pod \"redhat-marketplace-tvzvt\" (UID: \"aefd0984-736f-4724-96e9-bba46baff210\") " pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:21 crc kubenswrapper[4632]: I0313 12:34:21.606290 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aefd0984-736f-4724-96e9-bba46baff210-catalog-content\") pod \"redhat-marketplace-tvzvt\" (UID: \"aefd0984-736f-4724-96e9-bba46baff210\") " pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:21 crc kubenswrapper[4632]: I0313 12:34:21.606322 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aefd0984-736f-4724-96e9-bba46baff210-utilities\") pod \"redhat-marketplace-tvzvt\" (UID: \"aefd0984-736f-4724-96e9-bba46baff210\") " pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:21 crc kubenswrapper[4632]: I0313 12:34:21.629689 4632 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8tch8\" (UniqueName: \"kubernetes.io/projected/aefd0984-736f-4724-96e9-bba46baff210-kube-api-access-8tch8\") pod \"redhat-marketplace-tvzvt\" (UID: \"aefd0984-736f-4724-96e9-bba46baff210\") " pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:21 crc kubenswrapper[4632]: I0313 12:34:21.642381 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:22 crc kubenswrapper[4632]: I0313 12:34:22.384773 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvzvt"] Mar 13 12:34:22 crc kubenswrapper[4632]: I0313 12:34:22.756643 4632 generic.go:334] "Generic (PLEG): container finished" podID="aefd0984-736f-4724-96e9-bba46baff210" containerID="6459338d22358b0519f807dc4afc129c5e2ca57f7e8e5369a57abd6209dfd36f" exitCode=0 Mar 13 12:34:22 crc kubenswrapper[4632]: I0313 12:34:22.756714 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvzvt" event={"ID":"aefd0984-736f-4724-96e9-bba46baff210","Type":"ContainerDied","Data":"6459338d22358b0519f807dc4afc129c5e2ca57f7e8e5369a57abd6209dfd36f"} Mar 13 12:34:22 crc kubenswrapper[4632]: I0313 12:34:22.757007 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvzvt" event={"ID":"aefd0984-736f-4724-96e9-bba46baff210","Type":"ContainerStarted","Data":"3864ef386dc747b0066132dd4d6ee7a1758a98ab4c866a3de7ae196d02e95eaa"} Mar 13 12:34:23 crc kubenswrapper[4632]: I0313 12:34:23.781625 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvzvt" event={"ID":"aefd0984-736f-4724-96e9-bba46baff210","Type":"ContainerStarted","Data":"5100412017a2a84219c5731b435e390960254cc640066455806e36494b500748"} Mar 13 12:34:24 crc kubenswrapper[4632]: I0313 12:34:24.518099 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4crmg"] Mar 13 12:34:24 crc kubenswrapper[4632]: I0313 12:34:24.520605 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:24 crc kubenswrapper[4632]: I0313 12:34:24.556559 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4crmg"] Mar 13 12:34:24 crc kubenswrapper[4632]: I0313 12:34:24.717006 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxvbg\" (UniqueName: \"kubernetes.io/projected/99ae46aa-bead-481b-9416-e5a1a8be8196-kube-api-access-bxvbg\") pod \"certified-operators-4crmg\" (UID: \"99ae46aa-bead-481b-9416-e5a1a8be8196\") " pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:24 crc kubenswrapper[4632]: I0313 12:34:24.717137 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ae46aa-bead-481b-9416-e5a1a8be8196-utilities\") pod \"certified-operators-4crmg\" (UID: \"99ae46aa-bead-481b-9416-e5a1a8be8196\") " pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:24 crc kubenswrapper[4632]: I0313 12:34:24.717160 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ae46aa-bead-481b-9416-e5a1a8be8196-catalog-content\") pod \"certified-operators-4crmg\" (UID: \"99ae46aa-bead-481b-9416-e5a1a8be8196\") " pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:24 crc kubenswrapper[4632]: I0313 12:34:24.823435 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxvbg\" (UniqueName: \"kubernetes.io/projected/99ae46aa-bead-481b-9416-e5a1a8be8196-kube-api-access-bxvbg\") pod \"certified-operators-4crmg\" (UID: \"99ae46aa-bead-481b-9416-e5a1a8be8196\") " pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:24 crc kubenswrapper[4632]: I0313 12:34:24.824273 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ae46aa-bead-481b-9416-e5a1a8be8196-utilities\") pod \"certified-operators-4crmg\" (UID: \"99ae46aa-bead-481b-9416-e5a1a8be8196\") " pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:24 crc kubenswrapper[4632]: I0313 12:34:24.824313 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ae46aa-bead-481b-9416-e5a1a8be8196-catalog-content\") pod \"certified-operators-4crmg\" (UID: \"99ae46aa-bead-481b-9416-e5a1a8be8196\") " pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:24 crc kubenswrapper[4632]: I0313 12:34:24.882413 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ae46aa-bead-481b-9416-e5a1a8be8196-utilities\") pod \"certified-operators-4crmg\" (UID: \"99ae46aa-bead-481b-9416-e5a1a8be8196\") " pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:24 crc kubenswrapper[4632]: I0313 12:34:24.909363 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ae46aa-bead-481b-9416-e5a1a8be8196-catalog-content\") pod \"certified-operators-4crmg\" (UID: \"99ae46aa-bead-481b-9416-e5a1a8be8196\") " pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:24 crc kubenswrapper[4632]: I0313 12:34:24.918010 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bxvbg\" (UniqueName: \"kubernetes.io/projected/99ae46aa-bead-481b-9416-e5a1a8be8196-kube-api-access-bxvbg\") pod \"certified-operators-4crmg\" (UID: \"99ae46aa-bead-481b-9416-e5a1a8be8196\") " pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:25 crc kubenswrapper[4632]: I0313 12:34:25.187220 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:25 crc kubenswrapper[4632]: I0313 12:34:25.814893 4632 generic.go:334] "Generic (PLEG): container finished" podID="aefd0984-736f-4724-96e9-bba46baff210" containerID="5100412017a2a84219c5731b435e390960254cc640066455806e36494b500748" exitCode=0 Mar 13 12:34:25 crc kubenswrapper[4632]: I0313 12:34:25.815207 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvzvt" event={"ID":"aefd0984-736f-4724-96e9-bba46baff210","Type":"ContainerDied","Data":"5100412017a2a84219c5731b435e390960254cc640066455806e36494b500748"} Mar 13 12:34:25 crc kubenswrapper[4632]: I0313 12:34:25.926396 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4crmg"] Mar 13 12:34:26 crc kubenswrapper[4632]: I0313 12:34:26.827534 4632 generic.go:334] "Generic (PLEG): container finished" podID="99ae46aa-bead-481b-9416-e5a1a8be8196" containerID="0f0837940cf88d21f3843c5930040e437ca0684ad084295f166ebe778c422cd2" exitCode=0 Mar 13 12:34:26 crc kubenswrapper[4632]: I0313 12:34:26.827730 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4crmg" event={"ID":"99ae46aa-bead-481b-9416-e5a1a8be8196","Type":"ContainerDied","Data":"0f0837940cf88d21f3843c5930040e437ca0684ad084295f166ebe778c422cd2"} Mar 13 12:34:26 crc kubenswrapper[4632]: I0313 12:34:26.828293 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4crmg" event={"ID":"99ae46aa-bead-481b-9416-e5a1a8be8196","Type":"ContainerStarted","Data":"306a0cb5021e67e31e2f58ee7bbf44297adb063f5cf868144b001c706cab0607"} Mar 13 12:34:26 crc kubenswrapper[4632]: I0313 12:34:26.832101 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvzvt" event={"ID":"aefd0984-736f-4724-96e9-bba46baff210","Type":"ContainerStarted","Data":"30ac8815243eb74ef150559b7957bdc7b42f74dcdd603007dfad8eb2fd4ee057"} Mar 13 12:34:26 crc kubenswrapper[4632]: I0313 12:34:26.928371 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tvzvt" podStartSLOduration=2.388036593 podStartE2EDuration="5.928351872s" podCreationTimestamp="2026-03-13 12:34:21 +0000 UTC" firstStartedPulling="2026-03-13 12:34:22.758598916 +0000 UTC m=+9036.781129049" lastFinishedPulling="2026-03-13 12:34:26.298914195 +0000 UTC m=+9040.321444328" observedRunningTime="2026-03-13 12:34:26.92579664 +0000 UTC m=+9040.948326773" watchObservedRunningTime="2026-03-13 12:34:26.928351872 +0000 UTC m=+9040.950882005" Mar 13 12:34:27 crc kubenswrapper[4632]: I0313 12:34:27.640677 4632 scope.go:117] "RemoveContainer" containerID="d4da70cb5943a7b88f9744961515085bb09badfa367e58e3aee43668a0864bc3" Mar 13 12:34:28 crc kubenswrapper[4632]: I0313 12:34:28.874081 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4crmg" 
event={"ID":"99ae46aa-bead-481b-9416-e5a1a8be8196","Type":"ContainerStarted","Data":"6dc2cddd4965efa9fc2c49fa5ca691ecbf53012540a04504d6c227a2347e30f4"} Mar 13 12:34:30 crc kubenswrapper[4632]: I0313 12:34:30.894890 4632 generic.go:334] "Generic (PLEG): container finished" podID="99ae46aa-bead-481b-9416-e5a1a8be8196" containerID="6dc2cddd4965efa9fc2c49fa5ca691ecbf53012540a04504d6c227a2347e30f4" exitCode=0 Mar 13 12:34:30 crc kubenswrapper[4632]: I0313 12:34:30.895054 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4crmg" event={"ID":"99ae46aa-bead-481b-9416-e5a1a8be8196","Type":"ContainerDied","Data":"6dc2cddd4965efa9fc2c49fa5ca691ecbf53012540a04504d6c227a2347e30f4"} Mar 13 12:34:31 crc kubenswrapper[4632]: I0313 12:34:31.642964 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:31 crc kubenswrapper[4632]: I0313 12:34:31.643389 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:31 crc kubenswrapper[4632]: I0313 12:34:31.905470 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4crmg" event={"ID":"99ae46aa-bead-481b-9416-e5a1a8be8196","Type":"ContainerStarted","Data":"6e7ba1abc2a88a5e2aebe5114716bbab411d8cc4124cd295d7b37f0ec9fc95e9"} Mar 13 12:34:31 crc kubenswrapper[4632]: I0313 12:34:31.929916 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4crmg" podStartSLOduration=3.233951158 podStartE2EDuration="7.929895809s" podCreationTimestamp="2026-03-13 12:34:24 +0000 UTC" firstStartedPulling="2026-03-13 12:34:26.829318764 +0000 UTC m=+9040.851848897" lastFinishedPulling="2026-03-13 12:34:31.525263415 +0000 UTC m=+9045.547793548" observedRunningTime="2026-03-13 12:34:31.924387964 +0000 UTC m=+9045.946918097" watchObservedRunningTime="2026-03-13 12:34:31.929895809 +0000 UTC m=+9045.952425932" Mar 13 12:34:32 crc kubenswrapper[4632]: I0313 12:34:32.714837 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-tvzvt" podUID="aefd0984-736f-4724-96e9-bba46baff210" containerName="registry-server" probeResult="failure" output=< Mar 13 12:34:32 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:34:32 crc kubenswrapper[4632]: > Mar 13 12:34:35 crc kubenswrapper[4632]: I0313 12:34:35.187433 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:35 crc kubenswrapper[4632]: I0313 12:34:35.187848 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:36 crc kubenswrapper[4632]: I0313 12:34:36.242548 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4crmg" podUID="99ae46aa-bead-481b-9416-e5a1a8be8196" containerName="registry-server" probeResult="failure" output=< Mar 13 12:34:36 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:34:36 crc kubenswrapper[4632]: > Mar 13 12:34:40 crc kubenswrapper[4632]: I0313 12:34:40.461075 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:34:40 crc kubenswrapper[4632]: I0313 12:34:40.462223 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:34:40 crc kubenswrapper[4632]: I0313 12:34:40.462280 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 12:34:40 crc kubenswrapper[4632]: I0313 12:34:40.465285 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ada2bd3447f81dbcb3c7c10ab1a84d7a61b81476a09d5bccd655ef21929539af"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 12:34:40 crc kubenswrapper[4632]: I0313 12:34:40.466195 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://ada2bd3447f81dbcb3c7c10ab1a84d7a61b81476a09d5bccd655ef21929539af" gracePeriod=600 Mar 13 12:34:41 crc kubenswrapper[4632]: I0313 12:34:41.010844 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="ada2bd3447f81dbcb3c7c10ab1a84d7a61b81476a09d5bccd655ef21929539af" exitCode=0 Mar 13 12:34:41 crc kubenswrapper[4632]: I0313 12:34:41.010912 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"ada2bd3447f81dbcb3c7c10ab1a84d7a61b81476a09d5bccd655ef21929539af"} Mar 13 12:34:41 crc kubenswrapper[4632]: I0313 12:34:41.011233 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9"} Mar 13 12:34:41 crc kubenswrapper[4632]: I0313 12:34:41.011258 4632 scope.go:117] "RemoveContainer" containerID="8741648fe2a67d9da8cf15c1e305dc4ff749d2dd595a90ef09b36ac1d0767d1f" Mar 13 12:34:41 crc kubenswrapper[4632]: I0313 12:34:41.710357 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:41 crc kubenswrapper[4632]: I0313 12:34:41.770636 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:44 crc kubenswrapper[4632]: I0313 12:34:44.605609 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvzvt"] Mar 13 12:34:44 crc kubenswrapper[4632]: I0313 12:34:44.607127 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tvzvt" podUID="aefd0984-736f-4724-96e9-bba46baff210" containerName="registry-server" containerID="cri-o://30ac8815243eb74ef150559b7957bdc7b42f74dcdd603007dfad8eb2fd4ee057" 
gracePeriod=2 Mar 13 12:34:45 crc kubenswrapper[4632]: I0313 12:34:45.075738 4632 generic.go:334] "Generic (PLEG): container finished" podID="aefd0984-736f-4724-96e9-bba46baff210" containerID="30ac8815243eb74ef150559b7957bdc7b42f74dcdd603007dfad8eb2fd4ee057" exitCode=0 Mar 13 12:34:45 crc kubenswrapper[4632]: I0313 12:34:45.075831 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvzvt" event={"ID":"aefd0984-736f-4724-96e9-bba46baff210","Type":"ContainerDied","Data":"30ac8815243eb74ef150559b7957bdc7b42f74dcdd603007dfad8eb2fd4ee057"} Mar 13 12:34:45 crc kubenswrapper[4632]: I0313 12:34:45.838434 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:45 crc kubenswrapper[4632]: I0313 12:34:45.898597 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aefd0984-736f-4724-96e9-bba46baff210-utilities\") pod \"aefd0984-736f-4724-96e9-bba46baff210\" (UID: \"aefd0984-736f-4724-96e9-bba46baff210\") " Mar 13 12:34:45 crc kubenswrapper[4632]: I0313 12:34:45.898664 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tch8\" (UniqueName: \"kubernetes.io/projected/aefd0984-736f-4724-96e9-bba46baff210-kube-api-access-8tch8\") pod \"aefd0984-736f-4724-96e9-bba46baff210\" (UID: \"aefd0984-736f-4724-96e9-bba46baff210\") " Mar 13 12:34:45 crc kubenswrapper[4632]: I0313 12:34:45.898747 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aefd0984-736f-4724-96e9-bba46baff210-catalog-content\") pod \"aefd0984-736f-4724-96e9-bba46baff210\" (UID: \"aefd0984-736f-4724-96e9-bba46baff210\") " Mar 13 12:34:45 crc kubenswrapper[4632]: I0313 12:34:45.900238 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aefd0984-736f-4724-96e9-bba46baff210-utilities" (OuterVolumeSpecName: "utilities") pod "aefd0984-736f-4724-96e9-bba46baff210" (UID: "aefd0984-736f-4724-96e9-bba46baff210"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:34:45 crc kubenswrapper[4632]: I0313 12:34:45.911710 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aefd0984-736f-4724-96e9-bba46baff210-kube-api-access-8tch8" (OuterVolumeSpecName: "kube-api-access-8tch8") pod "aefd0984-736f-4724-96e9-bba46baff210" (UID: "aefd0984-736f-4724-96e9-bba46baff210"). InnerVolumeSpecName "kube-api-access-8tch8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:34:45 crc kubenswrapper[4632]: I0313 12:34:45.920296 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aefd0984-736f-4724-96e9-bba46baff210-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aefd0984-736f-4724-96e9-bba46baff210" (UID: "aefd0984-736f-4724-96e9-bba46baff210"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:34:46 crc kubenswrapper[4632]: I0313 12:34:46.001050 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aefd0984-736f-4724-96e9-bba46baff210-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 12:34:46 crc kubenswrapper[4632]: I0313 12:34:46.001317 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tch8\" (UniqueName: \"kubernetes.io/projected/aefd0984-736f-4724-96e9-bba46baff210-kube-api-access-8tch8\") on node \"crc\" DevicePath \"\"" Mar 13 12:34:46 crc kubenswrapper[4632]: I0313 12:34:46.001379 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aefd0984-736f-4724-96e9-bba46baff210-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 12:34:46 crc kubenswrapper[4632]: I0313 12:34:46.087770 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvzvt" event={"ID":"aefd0984-736f-4724-96e9-bba46baff210","Type":"ContainerDied","Data":"3864ef386dc747b0066132dd4d6ee7a1758a98ab4c866a3de7ae196d02e95eaa"} Mar 13 12:34:46 crc kubenswrapper[4632]: I0313 12:34:46.088104 4632 scope.go:117] "RemoveContainer" containerID="30ac8815243eb74ef150559b7957bdc7b42f74dcdd603007dfad8eb2fd4ee057" Mar 13 12:34:46 crc kubenswrapper[4632]: I0313 12:34:46.088226 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvzvt" Mar 13 12:34:46 crc kubenswrapper[4632]: I0313 12:34:46.148438 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvzvt"] Mar 13 12:34:46 crc kubenswrapper[4632]: I0313 12:34:46.154069 4632 scope.go:117] "RemoveContainer" containerID="5100412017a2a84219c5731b435e390960254cc640066455806e36494b500748" Mar 13 12:34:46 crc kubenswrapper[4632]: I0313 12:34:46.158995 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvzvt"] Mar 13 12:34:46 crc kubenswrapper[4632]: I0313 12:34:46.188236 4632 scope.go:117] "RemoveContainer" containerID="6459338d22358b0519f807dc4afc129c5e2ca57f7e8e5369a57abd6209dfd36f" Mar 13 12:34:46 crc kubenswrapper[4632]: I0313 12:34:46.238188 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4crmg" podUID="99ae46aa-bead-481b-9416-e5a1a8be8196" containerName="registry-server" probeResult="failure" output=< Mar 13 12:34:46 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:34:46 crc kubenswrapper[4632]: > Mar 13 12:34:48 crc kubenswrapper[4632]: I0313 12:34:48.057960 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aefd0984-736f-4724-96e9-bba46baff210" path="/var/lib/kubelet/pods/aefd0984-736f-4724-96e9-bba46baff210/volumes" Mar 13 12:34:55 crc kubenswrapper[4632]: I0313 12:34:55.264278 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:55 crc kubenswrapper[4632]: I0313 12:34:55.334212 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:55 crc kubenswrapper[4632]: I0313 12:34:55.702341 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4crmg"] Mar 13 12:34:57 crc kubenswrapper[4632]: I0313 12:34:57.192032 
4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4crmg" podUID="99ae46aa-bead-481b-9416-e5a1a8be8196" containerName="registry-server" containerID="cri-o://6e7ba1abc2a88a5e2aebe5114716bbab411d8cc4124cd295d7b37f0ec9fc95e9" gracePeriod=2 Mar 13 12:34:57 crc kubenswrapper[4632]: I0313 12:34:57.743895 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:57 crc kubenswrapper[4632]: I0313 12:34:57.936310 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ae46aa-bead-481b-9416-e5a1a8be8196-utilities\") pod \"99ae46aa-bead-481b-9416-e5a1a8be8196\" (UID: \"99ae46aa-bead-481b-9416-e5a1a8be8196\") " Mar 13 12:34:57 crc kubenswrapper[4632]: I0313 12:34:57.936455 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ae46aa-bead-481b-9416-e5a1a8be8196-catalog-content\") pod \"99ae46aa-bead-481b-9416-e5a1a8be8196\" (UID: \"99ae46aa-bead-481b-9416-e5a1a8be8196\") " Mar 13 12:34:57 crc kubenswrapper[4632]: I0313 12:34:57.936554 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxvbg\" (UniqueName: \"kubernetes.io/projected/99ae46aa-bead-481b-9416-e5a1a8be8196-kube-api-access-bxvbg\") pod \"99ae46aa-bead-481b-9416-e5a1a8be8196\" (UID: \"99ae46aa-bead-481b-9416-e5a1a8be8196\") " Mar 13 12:34:57 crc kubenswrapper[4632]: I0313 12:34:57.937600 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99ae46aa-bead-481b-9416-e5a1a8be8196-utilities" (OuterVolumeSpecName: "utilities") pod "99ae46aa-bead-481b-9416-e5a1a8be8196" (UID: "99ae46aa-bead-481b-9416-e5a1a8be8196"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:34:57 crc kubenswrapper[4632]: I0313 12:34:57.964105 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ae46aa-bead-481b-9416-e5a1a8be8196-kube-api-access-bxvbg" (OuterVolumeSpecName: "kube-api-access-bxvbg") pod "99ae46aa-bead-481b-9416-e5a1a8be8196" (UID: "99ae46aa-bead-481b-9416-e5a1a8be8196"). InnerVolumeSpecName "kube-api-access-bxvbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.018989 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99ae46aa-bead-481b-9416-e5a1a8be8196-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99ae46aa-bead-481b-9416-e5a1a8be8196" (UID: "99ae46aa-bead-481b-9416-e5a1a8be8196"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.043369 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxvbg\" (UniqueName: \"kubernetes.io/projected/99ae46aa-bead-481b-9416-e5a1a8be8196-kube-api-access-bxvbg\") on node \"crc\" DevicePath \"\"" Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.043693 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ae46aa-bead-481b-9416-e5a1a8be8196-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.043703 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ae46aa-bead-481b-9416-e5a1a8be8196-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 12:34:58 crc kubenswrapper[4632]: E0313 12:34:58.153326 4632 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99ae46aa_bead_481b_9416_e5a1a8be8196.slice\": RecentStats: unable to find data in memory cache]" Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.203996 4632 generic.go:334] "Generic (PLEG): container finished" podID="99ae46aa-bead-481b-9416-e5a1a8be8196" containerID="6e7ba1abc2a88a5e2aebe5114716bbab411d8cc4124cd295d7b37f0ec9fc95e9" exitCode=0 Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.204039 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4crmg" event={"ID":"99ae46aa-bead-481b-9416-e5a1a8be8196","Type":"ContainerDied","Data":"6e7ba1abc2a88a5e2aebe5114716bbab411d8cc4124cd295d7b37f0ec9fc95e9"} Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.204068 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4crmg" event={"ID":"99ae46aa-bead-481b-9416-e5a1a8be8196","Type":"ContainerDied","Data":"306a0cb5021e67e31e2f58ee7bbf44297adb063f5cf868144b001c706cab0607"} Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.204088 4632 scope.go:117] "RemoveContainer" containerID="6e7ba1abc2a88a5e2aebe5114716bbab411d8cc4124cd295d7b37f0ec9fc95e9" Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.204086 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4crmg" Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.233597 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4crmg"] Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.243621 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4crmg"] Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.246397 4632 scope.go:117] "RemoveContainer" containerID="6dc2cddd4965efa9fc2c49fa5ca691ecbf53012540a04504d6c227a2347e30f4" Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.269434 4632 scope.go:117] "RemoveContainer" containerID="0f0837940cf88d21f3843c5930040e437ca0684ad084295f166ebe778c422cd2" Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.313210 4632 scope.go:117] "RemoveContainer" containerID="6e7ba1abc2a88a5e2aebe5114716bbab411d8cc4124cd295d7b37f0ec9fc95e9" Mar 13 12:34:58 crc kubenswrapper[4632]: E0313 12:34:58.324117 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e7ba1abc2a88a5e2aebe5114716bbab411d8cc4124cd295d7b37f0ec9fc95e9\": container with ID starting with 6e7ba1abc2a88a5e2aebe5114716bbab411d8cc4124cd295d7b37f0ec9fc95e9 not found: ID does not exist" containerID="6e7ba1abc2a88a5e2aebe5114716bbab411d8cc4124cd295d7b37f0ec9fc95e9" Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.324190 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e7ba1abc2a88a5e2aebe5114716bbab411d8cc4124cd295d7b37f0ec9fc95e9"} err="failed to get container status \"6e7ba1abc2a88a5e2aebe5114716bbab411d8cc4124cd295d7b37f0ec9fc95e9\": rpc error: code = NotFound desc = could not find container \"6e7ba1abc2a88a5e2aebe5114716bbab411d8cc4124cd295d7b37f0ec9fc95e9\": container with ID starting with 6e7ba1abc2a88a5e2aebe5114716bbab411d8cc4124cd295d7b37f0ec9fc95e9 not found: ID does not exist" Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.324232 4632 scope.go:117] "RemoveContainer" containerID="6dc2cddd4965efa9fc2c49fa5ca691ecbf53012540a04504d6c227a2347e30f4" Mar 13 12:34:58 crc kubenswrapper[4632]: E0313 12:34:58.325046 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dc2cddd4965efa9fc2c49fa5ca691ecbf53012540a04504d6c227a2347e30f4\": container with ID starting with 6dc2cddd4965efa9fc2c49fa5ca691ecbf53012540a04504d6c227a2347e30f4 not found: ID does not exist" containerID="6dc2cddd4965efa9fc2c49fa5ca691ecbf53012540a04504d6c227a2347e30f4" Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.325108 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dc2cddd4965efa9fc2c49fa5ca691ecbf53012540a04504d6c227a2347e30f4"} err="failed to get container status \"6dc2cddd4965efa9fc2c49fa5ca691ecbf53012540a04504d6c227a2347e30f4\": rpc error: code = NotFound desc = could not find container \"6dc2cddd4965efa9fc2c49fa5ca691ecbf53012540a04504d6c227a2347e30f4\": container with ID starting with 6dc2cddd4965efa9fc2c49fa5ca691ecbf53012540a04504d6c227a2347e30f4 not found: ID does not exist" Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.325164 4632 scope.go:117] "RemoveContainer" containerID="0f0837940cf88d21f3843c5930040e437ca0684ad084295f166ebe778c422cd2" Mar 13 12:34:58 crc kubenswrapper[4632]: E0313 12:34:58.325804 4632 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0f0837940cf88d21f3843c5930040e437ca0684ad084295f166ebe778c422cd2\": container with ID starting with 0f0837940cf88d21f3843c5930040e437ca0684ad084295f166ebe778c422cd2 not found: ID does not exist" containerID="0f0837940cf88d21f3843c5930040e437ca0684ad084295f166ebe778c422cd2" Mar 13 12:34:58 crc kubenswrapper[4632]: I0313 12:34:58.325869 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0837940cf88d21f3843c5930040e437ca0684ad084295f166ebe778c422cd2"} err="failed to get container status \"0f0837940cf88d21f3843c5930040e437ca0684ad084295f166ebe778c422cd2\": rpc error: code = NotFound desc = could not find container \"0f0837940cf88d21f3843c5930040e437ca0684ad084295f166ebe778c422cd2\": container with ID starting with 0f0837940cf88d21f3843c5930040e437ca0684ad084295f166ebe778c422cd2 not found: ID does not exist" Mar 13 12:35:00 crc kubenswrapper[4632]: I0313 12:35:00.057708 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99ae46aa-bead-481b-9416-e5a1a8be8196" path="/var/lib/kubelet/pods/99ae46aa-bead-481b-9416-e5a1a8be8196/volumes" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.273738 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556756-4d96k"] Mar 13 12:36:00 crc kubenswrapper[4632]: E0313 12:36:00.293386 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ae46aa-bead-481b-9416-e5a1a8be8196" containerName="registry-server" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.293857 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ae46aa-bead-481b-9416-e5a1a8be8196" containerName="registry-server" Mar 13 12:36:00 crc kubenswrapper[4632]: E0313 12:36:00.293958 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefd0984-736f-4724-96e9-bba46baff210" containerName="registry-server" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.293969 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefd0984-736f-4724-96e9-bba46baff210" containerName="registry-server" Mar 13 12:36:00 crc kubenswrapper[4632]: E0313 12:36:00.293988 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefd0984-736f-4724-96e9-bba46baff210" containerName="extract-utilities" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.293998 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefd0984-736f-4724-96e9-bba46baff210" containerName="extract-utilities" Mar 13 12:36:00 crc kubenswrapper[4632]: E0313 12:36:00.294035 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefd0984-736f-4724-96e9-bba46baff210" containerName="extract-content" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.294044 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefd0984-736f-4724-96e9-bba46baff210" containerName="extract-content" Mar 13 12:36:00 crc kubenswrapper[4632]: E0313 12:36:00.294063 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ae46aa-bead-481b-9416-e5a1a8be8196" containerName="extract-content" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.294072 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ae46aa-bead-481b-9416-e5a1a8be8196" containerName="extract-content" Mar 13 12:36:00 crc kubenswrapper[4632]: E0313 12:36:00.294096 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ae46aa-bead-481b-9416-e5a1a8be8196" 
containerName="extract-utilities" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.294105 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ae46aa-bead-481b-9416-e5a1a8be8196" containerName="extract-utilities" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.294434 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ae46aa-bead-481b-9416-e5a1a8be8196" containerName="registry-server" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.294463 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="aefd0984-736f-4724-96e9-bba46baff210" containerName="registry-server" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.296203 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556756-4d96k" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.302241 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556756-4d96k"] Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.328306 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.328315 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.328406 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.423402 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5gzh\" (UniqueName: \"kubernetes.io/projected/4bab5d92-9e39-4d06-98ae-8b9b50d50214-kube-api-access-f5gzh\") pod \"auto-csr-approver-29556756-4d96k\" (UID: \"4bab5d92-9e39-4d06-98ae-8b9b50d50214\") " pod="openshift-infra/auto-csr-approver-29556756-4d96k" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.525294 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5gzh\" (UniqueName: \"kubernetes.io/projected/4bab5d92-9e39-4d06-98ae-8b9b50d50214-kube-api-access-f5gzh\") pod \"auto-csr-approver-29556756-4d96k\" (UID: \"4bab5d92-9e39-4d06-98ae-8b9b50d50214\") " pod="openshift-infra/auto-csr-approver-29556756-4d96k" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.559805 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5gzh\" (UniqueName: \"kubernetes.io/projected/4bab5d92-9e39-4d06-98ae-8b9b50d50214-kube-api-access-f5gzh\") pod \"auto-csr-approver-29556756-4d96k\" (UID: \"4bab5d92-9e39-4d06-98ae-8b9b50d50214\") " pod="openshift-infra/auto-csr-approver-29556756-4d96k" Mar 13 12:36:00 crc kubenswrapper[4632]: I0313 12:36:00.619915 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556756-4d96k" Mar 13 12:36:01 crc kubenswrapper[4632]: I0313 12:36:01.443101 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556756-4d96k"] Mar 13 12:36:02 crc kubenswrapper[4632]: I0313 12:36:02.006033 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556756-4d96k" event={"ID":"4bab5d92-9e39-4d06-98ae-8b9b50d50214","Type":"ContainerStarted","Data":"8b8fe1afbf64559cb08d91b985e36d5327c22b066ce15bacd4281c3bcd4a5d21"} Mar 13 12:36:04 crc kubenswrapper[4632]: I0313 12:36:04.027505 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556756-4d96k" event={"ID":"4bab5d92-9e39-4d06-98ae-8b9b50d50214","Type":"ContainerStarted","Data":"a8e65f824fd306a694713a170d4b213522c9b5fbd2a9bb06608f463371bdb733"} Mar 13 12:36:04 crc kubenswrapper[4632]: I0313 12:36:04.047496 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556756-4d96k" podStartSLOduration=3.133595994 podStartE2EDuration="4.047476523s" podCreationTimestamp="2026-03-13 12:36:00 +0000 UTC" firstStartedPulling="2026-03-13 12:36:01.464251471 +0000 UTC m=+9135.486781604" lastFinishedPulling="2026-03-13 12:36:02.378132 +0000 UTC m=+9136.400662133" observedRunningTime="2026-03-13 12:36:04.04285304 +0000 UTC m=+9138.065383173" watchObservedRunningTime="2026-03-13 12:36:04.047476523 +0000 UTC m=+9138.070006656" Mar 13 12:36:07 crc kubenswrapper[4632]: I0313 12:36:07.066402 4632 generic.go:334] "Generic (PLEG): container finished" podID="4bab5d92-9e39-4d06-98ae-8b9b50d50214" containerID="a8e65f824fd306a694713a170d4b213522c9b5fbd2a9bb06608f463371bdb733" exitCode=0 Mar 13 12:36:07 crc kubenswrapper[4632]: I0313 12:36:07.066592 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556756-4d96k" event={"ID":"4bab5d92-9e39-4d06-98ae-8b9b50d50214","Type":"ContainerDied","Data":"a8e65f824fd306a694713a170d4b213522c9b5fbd2a9bb06608f463371bdb733"} Mar 13 12:36:08 crc kubenswrapper[4632]: I0313 12:36:08.673547 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556756-4d96k" Mar 13 12:36:08 crc kubenswrapper[4632]: I0313 12:36:08.794457 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5gzh\" (UniqueName: \"kubernetes.io/projected/4bab5d92-9e39-4d06-98ae-8b9b50d50214-kube-api-access-f5gzh\") pod \"4bab5d92-9e39-4d06-98ae-8b9b50d50214\" (UID: \"4bab5d92-9e39-4d06-98ae-8b9b50d50214\") " Mar 13 12:36:08 crc kubenswrapper[4632]: I0313 12:36:08.806208 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bab5d92-9e39-4d06-98ae-8b9b50d50214-kube-api-access-f5gzh" (OuterVolumeSpecName: "kube-api-access-f5gzh") pod "4bab5d92-9e39-4d06-98ae-8b9b50d50214" (UID: "4bab5d92-9e39-4d06-98ae-8b9b50d50214"). InnerVolumeSpecName "kube-api-access-f5gzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:36:08 crc kubenswrapper[4632]: I0313 12:36:08.896694 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5gzh\" (UniqueName: \"kubernetes.io/projected/4bab5d92-9e39-4d06-98ae-8b9b50d50214-kube-api-access-f5gzh\") on node \"crc\" DevicePath \"\"" Mar 13 12:36:09 crc kubenswrapper[4632]: I0313 12:36:09.092723 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556756-4d96k" event={"ID":"4bab5d92-9e39-4d06-98ae-8b9b50d50214","Type":"ContainerDied","Data":"8b8fe1afbf64559cb08d91b985e36d5327c22b066ce15bacd4281c3bcd4a5d21"} Mar 13 12:36:09 crc kubenswrapper[4632]: I0313 12:36:09.092767 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b8fe1afbf64559cb08d91b985e36d5327c22b066ce15bacd4281c3bcd4a5d21" Mar 13 12:36:09 crc kubenswrapper[4632]: I0313 12:36:09.092837 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556756-4d96k" Mar 13 12:36:09 crc kubenswrapper[4632]: I0313 12:36:09.255467 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556750-8fj49"] Mar 13 12:36:09 crc kubenswrapper[4632]: I0313 12:36:09.263522 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556750-8fj49"] Mar 13 12:36:10 crc kubenswrapper[4632]: I0313 12:36:10.058866 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cc083b6-fb70-478e-9824-d9eb3cb1fe5b" path="/var/lib/kubelet/pods/8cc083b6-fb70-478e-9824-d9eb3cb1fe5b/volumes" Mar 13 12:36:16 crc kubenswrapper[4632]: I0313 12:36:16.993889 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v86rj"] Mar 13 12:36:16 crc kubenswrapper[4632]: E0313 12:36:16.994768 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bab5d92-9e39-4d06-98ae-8b9b50d50214" containerName="oc" Mar 13 12:36:16 crc kubenswrapper[4632]: I0313 12:36:16.994784 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bab5d92-9e39-4d06-98ae-8b9b50d50214" containerName="oc" Mar 13 12:36:16 crc kubenswrapper[4632]: I0313 12:36:16.995020 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bab5d92-9e39-4d06-98ae-8b9b50d50214" containerName="oc" Mar 13 12:36:16 crc kubenswrapper[4632]: I0313 12:36:16.998046 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v86rj" Mar 13 12:36:17 crc kubenswrapper[4632]: I0313 12:36:17.024573 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v86rj"] Mar 13 12:36:17 crc kubenswrapper[4632]: I0313 12:36:17.057735 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwltc\" (UniqueName: \"kubernetes.io/projected/ac09d633-ce71-480d-bc5e-d9be1d416b03-kube-api-access-zwltc\") pod \"redhat-operators-v86rj\" (UID: \"ac09d633-ce71-480d-bc5e-d9be1d416b03\") " pod="openshift-marketplace/redhat-operators-v86rj" Mar 13 12:36:17 crc kubenswrapper[4632]: I0313 12:36:17.057848 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac09d633-ce71-480d-bc5e-d9be1d416b03-catalog-content\") pod \"redhat-operators-v86rj\" (UID: \"ac09d633-ce71-480d-bc5e-d9be1d416b03\") " pod="openshift-marketplace/redhat-operators-v86rj" Mar 13 12:36:17 crc kubenswrapper[4632]: I0313 12:36:17.057998 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac09d633-ce71-480d-bc5e-d9be1d416b03-utilities\") pod \"redhat-operators-v86rj\" (UID: \"ac09d633-ce71-480d-bc5e-d9be1d416b03\") " pod="openshift-marketplace/redhat-operators-v86rj" Mar 13 12:36:17 crc kubenswrapper[4632]: I0313 12:36:17.159611 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac09d633-ce71-480d-bc5e-d9be1d416b03-utilities\") pod \"redhat-operators-v86rj\" (UID: \"ac09d633-ce71-480d-bc5e-d9be1d416b03\") " pod="openshift-marketplace/redhat-operators-v86rj" Mar 13 12:36:17 crc kubenswrapper[4632]: I0313 12:36:17.159746 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwltc\" (UniqueName: \"kubernetes.io/projected/ac09d633-ce71-480d-bc5e-d9be1d416b03-kube-api-access-zwltc\") pod \"redhat-operators-v86rj\" (UID: \"ac09d633-ce71-480d-bc5e-d9be1d416b03\") " pod="openshift-marketplace/redhat-operators-v86rj" Mar 13 12:36:17 crc kubenswrapper[4632]: I0313 12:36:17.159853 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac09d633-ce71-480d-bc5e-d9be1d416b03-catalog-content\") pod \"redhat-operators-v86rj\" (UID: \"ac09d633-ce71-480d-bc5e-d9be1d416b03\") " pod="openshift-marketplace/redhat-operators-v86rj" Mar 13 12:36:17 crc kubenswrapper[4632]: I0313 12:36:17.160323 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac09d633-ce71-480d-bc5e-d9be1d416b03-utilities\") pod \"redhat-operators-v86rj\" (UID: \"ac09d633-ce71-480d-bc5e-d9be1d416b03\") " pod="openshift-marketplace/redhat-operators-v86rj" Mar 13 12:36:17 crc kubenswrapper[4632]: I0313 12:36:17.160635 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac09d633-ce71-480d-bc5e-d9be1d416b03-catalog-content\") pod \"redhat-operators-v86rj\" (UID: \"ac09d633-ce71-480d-bc5e-d9be1d416b03\") " pod="openshift-marketplace/redhat-operators-v86rj" Mar 13 12:36:17 crc kubenswrapper[4632]: I0313 12:36:17.191879 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zwltc\" (UniqueName: \"kubernetes.io/projected/ac09d633-ce71-480d-bc5e-d9be1d416b03-kube-api-access-zwltc\") pod \"redhat-operators-v86rj\" (UID: \"ac09d633-ce71-480d-bc5e-d9be1d416b03\") " pod="openshift-marketplace/redhat-operators-v86rj" Mar 13 12:36:17 crc kubenswrapper[4632]: I0313 12:36:17.325426 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v86rj" Mar 13 12:36:17 crc kubenswrapper[4632]: I0313 12:36:17.847838 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v86rj"] Mar 13 12:36:18 crc kubenswrapper[4632]: I0313 12:36:18.175613 4632 generic.go:334] "Generic (PLEG): container finished" podID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerID="29d8b3a65318b6e6951210b601e841a218c4f0ffd2cbfdd6b7e54cb15bb8b2a2" exitCode=0 Mar 13 12:36:18 crc kubenswrapper[4632]: I0313 12:36:18.175789 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v86rj" event={"ID":"ac09d633-ce71-480d-bc5e-d9be1d416b03","Type":"ContainerDied","Data":"29d8b3a65318b6e6951210b601e841a218c4f0ffd2cbfdd6b7e54cb15bb8b2a2"} Mar 13 12:36:18 crc kubenswrapper[4632]: I0313 12:36:18.175853 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v86rj" event={"ID":"ac09d633-ce71-480d-bc5e-d9be1d416b03","Type":"ContainerStarted","Data":"4cf7e4e180fdee7c7813c8d090f0a4ec39f241c14816db7a866702741112c8ae"} Mar 13 12:36:19 crc kubenswrapper[4632]: I0313 12:36:19.191696 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v86rj" event={"ID":"ac09d633-ce71-480d-bc5e-d9be1d416b03","Type":"ContainerStarted","Data":"1187b21667ee3759adaeaf1e0fd0b393da1afd6f412928f91725d748894e2904"} Mar 13 12:36:26 crc kubenswrapper[4632]: I0313 12:36:26.269884 4632 generic.go:334] "Generic (PLEG): container finished" podID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerID="1187b21667ee3759adaeaf1e0fd0b393da1afd6f412928f91725d748894e2904" exitCode=0 Mar 13 12:36:26 crc kubenswrapper[4632]: I0313 12:36:26.270001 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v86rj" event={"ID":"ac09d633-ce71-480d-bc5e-d9be1d416b03","Type":"ContainerDied","Data":"1187b21667ee3759adaeaf1e0fd0b393da1afd6f412928f91725d748894e2904"} Mar 13 12:36:27 crc kubenswrapper[4632]: I0313 12:36:27.282340 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v86rj" event={"ID":"ac09d633-ce71-480d-bc5e-d9be1d416b03","Type":"ContainerStarted","Data":"21822b608332f376e9e1af2cdb22e1aa5756c02a18555a5fbfad899b33d82cd4"} Mar 13 12:36:27 crc kubenswrapper[4632]: I0313 12:36:27.310425 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v86rj" podStartSLOduration=2.813055071 podStartE2EDuration="11.310401162s" podCreationTimestamp="2026-03-13 12:36:16 +0000 UTC" firstStartedPulling="2026-03-13 12:36:18.17722794 +0000 UTC m=+9152.199758083" lastFinishedPulling="2026-03-13 12:36:26.674574041 +0000 UTC m=+9160.697104174" observedRunningTime="2026-03-13 12:36:27.30009038 +0000 UTC m=+9161.322620523" watchObservedRunningTime="2026-03-13 12:36:27.310401162 +0000 UTC m=+9161.332931325" Mar 13 12:36:27 crc kubenswrapper[4632]: I0313 12:36:27.326103 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v86rj" Mar 
13 12:36:27 crc kubenswrapper[4632]: I0313 12:36:27.326164 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v86rj" Mar 13 12:36:27 crc kubenswrapper[4632]: I0313 12:36:27.843429 4632 scope.go:117] "RemoveContainer" containerID="50ec5eed6591caef46ce66e044fc885293f40a008476cefa9221d3ccb1262877" Mar 13 12:36:28 crc kubenswrapper[4632]: I0313 12:36:28.379695 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v86rj" podUID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerName="registry-server" probeResult="failure" output=< Mar 13 12:36:28 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:36:28 crc kubenswrapper[4632]: > Mar 13 12:36:38 crc kubenswrapper[4632]: I0313 12:36:38.388928 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v86rj" podUID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerName="registry-server" probeResult="failure" output=< Mar 13 12:36:38 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:36:38 crc kubenswrapper[4632]: > Mar 13 12:36:40 crc kubenswrapper[4632]: I0313 12:36:40.462673 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:36:40 crc kubenswrapper[4632]: I0313 12:36:40.470435 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:36:48 crc kubenswrapper[4632]: I0313 12:36:48.386520 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v86rj" podUID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerName="registry-server" probeResult="failure" output=< Mar 13 12:36:48 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:36:48 crc kubenswrapper[4632]: > Mar 13 12:36:58 crc kubenswrapper[4632]: I0313 12:36:58.381573 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v86rj" podUID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerName="registry-server" probeResult="failure" output=< Mar 13 12:36:58 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:36:58 crc kubenswrapper[4632]: > Mar 13 12:37:08 crc kubenswrapper[4632]: I0313 12:37:08.375140 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v86rj" podUID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerName="registry-server" probeResult="failure" output=< Mar 13 12:37:08 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:37:08 crc kubenswrapper[4632]: > Mar 13 12:37:10 crc kubenswrapper[4632]: I0313 12:37:10.464142 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:37:10 
crc kubenswrapper[4632]: I0313 12:37:10.464404 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:37:18 crc kubenswrapper[4632]: I0313 12:37:18.396613 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v86rj" podUID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerName="registry-server" probeResult="failure" output=< Mar 13 12:37:18 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:37:18 crc kubenswrapper[4632]: > Mar 13 12:37:28 crc kubenswrapper[4632]: I0313 12:37:28.396601 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v86rj" podUID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerName="registry-server" probeResult="failure" output=< Mar 13 12:37:28 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:37:28 crc kubenswrapper[4632]: > Mar 13 12:37:37 crc kubenswrapper[4632]: I0313 12:37:37.389583 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v86rj" Mar 13 12:37:37 crc kubenswrapper[4632]: I0313 12:37:37.452083 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v86rj" Mar 13 12:37:37 crc kubenswrapper[4632]: I0313 12:37:37.670156 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v86rj"] Mar 13 12:37:39 crc kubenswrapper[4632]: I0313 12:37:39.167654 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v86rj" podUID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerName="registry-server" containerID="cri-o://21822b608332f376e9e1af2cdb22e1aa5756c02a18555a5fbfad899b33d82cd4" gracePeriod=2 Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.179500 4632 generic.go:334] "Generic (PLEG): container finished" podID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerID="21822b608332f376e9e1af2cdb22e1aa5756c02a18555a5fbfad899b33d82cd4" exitCode=0 Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.180279 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v86rj" event={"ID":"ac09d633-ce71-480d-bc5e-d9be1d416b03","Type":"ContainerDied","Data":"21822b608332f376e9e1af2cdb22e1aa5756c02a18555a5fbfad899b33d82cd4"} Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.426522 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v86rj" Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.460674 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.460766 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.460827 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.461772 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.461889 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" gracePeriod=600 Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.480838 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac09d633-ce71-480d-bc5e-d9be1d416b03-utilities\") pod \"ac09d633-ce71-480d-bc5e-d9be1d416b03\" (UID: \"ac09d633-ce71-480d-bc5e-d9be1d416b03\") " Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.480959 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwltc\" (UniqueName: \"kubernetes.io/projected/ac09d633-ce71-480d-bc5e-d9be1d416b03-kube-api-access-zwltc\") pod \"ac09d633-ce71-480d-bc5e-d9be1d416b03\" (UID: \"ac09d633-ce71-480d-bc5e-d9be1d416b03\") " Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.481077 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac09d633-ce71-480d-bc5e-d9be1d416b03-catalog-content\") pod \"ac09d633-ce71-480d-bc5e-d9be1d416b03\" (UID: \"ac09d633-ce71-480d-bc5e-d9be1d416b03\") " Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.491141 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac09d633-ce71-480d-bc5e-d9be1d416b03-utilities" (OuterVolumeSpecName: "utilities") pod "ac09d633-ce71-480d-bc5e-d9be1d416b03" (UID: "ac09d633-ce71-480d-bc5e-d9be1d416b03"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.507502 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac09d633-ce71-480d-bc5e-d9be1d416b03-kube-api-access-zwltc" (OuterVolumeSpecName: "kube-api-access-zwltc") pod "ac09d633-ce71-480d-bc5e-d9be1d416b03" (UID: "ac09d633-ce71-480d-bc5e-d9be1d416b03"). InnerVolumeSpecName "kube-api-access-zwltc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.584247 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac09d633-ce71-480d-bc5e-d9be1d416b03-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.584288 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwltc\" (UniqueName: \"kubernetes.io/projected/ac09d633-ce71-480d-bc5e-d9be1d416b03-kube-api-access-zwltc\") on node \"crc\" DevicePath \"\"" Mar 13 12:37:40 crc kubenswrapper[4632]: E0313 12:37:40.606604 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.763089 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac09d633-ce71-480d-bc5e-d9be1d416b03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac09d633-ce71-480d-bc5e-d9be1d416b03" (UID: "ac09d633-ce71-480d-bc5e-d9be1d416b03"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:37:40 crc kubenswrapper[4632]: I0313 12:37:40.786775 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac09d633-ce71-480d-bc5e-d9be1d416b03-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 12:37:41 crc kubenswrapper[4632]: I0313 12:37:41.204495 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" exitCode=0 Mar 13 12:37:41 crc kubenswrapper[4632]: I0313 12:37:41.205513 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9"} Mar 13 12:37:41 crc kubenswrapper[4632]: I0313 12:37:41.207729 4632 scope.go:117] "RemoveContainer" containerID="ada2bd3447f81dbcb3c7c10ab1a84d7a61b81476a09d5bccd655ef21929539af" Mar 13 12:37:41 crc kubenswrapper[4632]: I0313 12:37:41.208794 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:37:41 crc kubenswrapper[4632]: E0313 12:37:41.209812 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:37:41 crc kubenswrapper[4632]: I0313 12:37:41.210700 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v86rj" event={"ID":"ac09d633-ce71-480d-bc5e-d9be1d416b03","Type":"ContainerDied","Data":"4cf7e4e180fdee7c7813c8d090f0a4ec39f241c14816db7a866702741112c8ae"} Mar 13 12:37:41 crc kubenswrapper[4632]: I0313 12:37:41.210806 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v86rj" Mar 13 12:37:41 crc kubenswrapper[4632]: I0313 12:37:41.279763 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v86rj"] Mar 13 12:37:41 crc kubenswrapper[4632]: I0313 12:37:41.290549 4632 scope.go:117] "RemoveContainer" containerID="21822b608332f376e9e1af2cdb22e1aa5756c02a18555a5fbfad899b33d82cd4" Mar 13 12:37:41 crc kubenswrapper[4632]: I0313 12:37:41.291272 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v86rj"] Mar 13 12:37:41 crc kubenswrapper[4632]: I0313 12:37:41.322384 4632 scope.go:117] "RemoveContainer" containerID="1187b21667ee3759adaeaf1e0fd0b393da1afd6f412928f91725d748894e2904" Mar 13 12:37:41 crc kubenswrapper[4632]: I0313 12:37:41.384126 4632 scope.go:117] "RemoveContainer" containerID="29d8b3a65318b6e6951210b601e841a218c4f0ffd2cbfdd6b7e54cb15bb8b2a2" Mar 13 12:37:42 crc kubenswrapper[4632]: I0313 12:37:42.058325 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac09d633-ce71-480d-bc5e-d9be1d416b03" path="/var/lib/kubelet/pods/ac09d633-ce71-480d-bc5e-d9be1d416b03/volumes" Mar 13 12:37:55 crc kubenswrapper[4632]: I0313 12:37:55.044043 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:37:55 crc kubenswrapper[4632]: E0313 12:37:55.045025 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:38:00 crc kubenswrapper[4632]: I0313 12:38:00.225899 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556758-989sl"] Mar 13 12:38:00 crc kubenswrapper[4632]: E0313 12:38:00.229032 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerName="extract-content" Mar 13 12:38:00 crc kubenswrapper[4632]: I0313 12:38:00.229061 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerName="extract-content" Mar 13 12:38:00 crc kubenswrapper[4632]: E0313 12:38:00.229086 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerName="registry-server" Mar 13 12:38:00 crc kubenswrapper[4632]: I0313 12:38:00.229092 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerName="registry-server" Mar 13 12:38:00 crc kubenswrapper[4632]: E0313 12:38:00.229116 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerName="extract-utilities" Mar 13 12:38:00 crc kubenswrapper[4632]: I0313 12:38:00.229123 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerName="extract-utilities" Mar 13 12:38:00 crc kubenswrapper[4632]: I0313 12:38:00.229349 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac09d633-ce71-480d-bc5e-d9be1d416b03" containerName="registry-server" Mar 13 12:38:00 crc kubenswrapper[4632]: I0313 12:38:00.238765 4632 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556758-989sl" Mar 13 12:38:00 crc kubenswrapper[4632]: I0313 12:38:00.261532 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:38:00 crc kubenswrapper[4632]: I0313 12:38:00.263080 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:38:00 crc kubenswrapper[4632]: I0313 12:38:00.263249 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:38:00 crc kubenswrapper[4632]: I0313 12:38:00.322968 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556758-989sl"] Mar 13 12:38:00 crc kubenswrapper[4632]: I0313 12:38:00.402384 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8f58\" (UniqueName: \"kubernetes.io/projected/564cb4aa-8722-4f69-adeb-16bc8b74bff0-kube-api-access-j8f58\") pod \"auto-csr-approver-29556758-989sl\" (UID: \"564cb4aa-8722-4f69-adeb-16bc8b74bff0\") " pod="openshift-infra/auto-csr-approver-29556758-989sl" Mar 13 12:38:00 crc kubenswrapper[4632]: I0313 12:38:00.504999 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8f58\" (UniqueName: \"kubernetes.io/projected/564cb4aa-8722-4f69-adeb-16bc8b74bff0-kube-api-access-j8f58\") pod \"auto-csr-approver-29556758-989sl\" (UID: \"564cb4aa-8722-4f69-adeb-16bc8b74bff0\") " pod="openshift-infra/auto-csr-approver-29556758-989sl" Mar 13 12:38:00 crc kubenswrapper[4632]: I0313 12:38:00.525816 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8f58\" (UniqueName: \"kubernetes.io/projected/564cb4aa-8722-4f69-adeb-16bc8b74bff0-kube-api-access-j8f58\") pod \"auto-csr-approver-29556758-989sl\" (UID: \"564cb4aa-8722-4f69-adeb-16bc8b74bff0\") " pod="openshift-infra/auto-csr-approver-29556758-989sl" Mar 13 12:38:00 crc kubenswrapper[4632]: I0313 12:38:00.576932 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556758-989sl" Mar 13 12:38:01 crc kubenswrapper[4632]: I0313 12:38:01.149109 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556758-989sl"] Mar 13 12:38:01 crc kubenswrapper[4632]: I0313 12:38:01.172447 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 12:38:01 crc kubenswrapper[4632]: I0313 12:38:01.428777 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556758-989sl" event={"ID":"564cb4aa-8722-4f69-adeb-16bc8b74bff0","Type":"ContainerStarted","Data":"5e6178bec38703640c7aad295f130226587694e4d95f5a4c4a588ca736c04131"} Mar 13 12:38:04 crc kubenswrapper[4632]: I0313 12:38:04.455952 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556758-989sl" event={"ID":"564cb4aa-8722-4f69-adeb-16bc8b74bff0","Type":"ContainerStarted","Data":"e9210685ea8bbb4f63a3e5f03db6817cbf29b557c698d4de6b1ae5929063a7c0"} Mar 13 12:38:04 crc kubenswrapper[4632]: I0313 12:38:04.482448 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556758-989sl" podStartSLOduration=2.553739883 podStartE2EDuration="4.480864357s" podCreationTimestamp="2026-03-13 12:38:00 +0000 UTC" firstStartedPulling="2026-03-13 12:38:01.169966432 +0000 UTC m=+9255.192496555" lastFinishedPulling="2026-03-13 12:38:03.097090886 +0000 UTC m=+9257.119621029" observedRunningTime="2026-03-13 12:38:04.468227837 +0000 UTC m=+9258.490757970" watchObservedRunningTime="2026-03-13 12:38:04.480864357 +0000 UTC m=+9258.503394490" Mar 13 12:38:05 crc kubenswrapper[4632]: I0313 12:38:05.467343 4632 generic.go:334] "Generic (PLEG): container finished" podID="564cb4aa-8722-4f69-adeb-16bc8b74bff0" containerID="e9210685ea8bbb4f63a3e5f03db6817cbf29b557c698d4de6b1ae5929063a7c0" exitCode=0 Mar 13 12:38:05 crc kubenswrapper[4632]: I0313 12:38:05.467443 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556758-989sl" event={"ID":"564cb4aa-8722-4f69-adeb-16bc8b74bff0","Type":"ContainerDied","Data":"e9210685ea8bbb4f63a3e5f03db6817cbf29b557c698d4de6b1ae5929063a7c0"} Mar 13 12:38:06 crc kubenswrapper[4632]: I0313 12:38:06.871219 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556758-989sl" Mar 13 12:38:07 crc kubenswrapper[4632]: I0313 12:38:07.032419 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8f58\" (UniqueName: \"kubernetes.io/projected/564cb4aa-8722-4f69-adeb-16bc8b74bff0-kube-api-access-j8f58\") pod \"564cb4aa-8722-4f69-adeb-16bc8b74bff0\" (UID: \"564cb4aa-8722-4f69-adeb-16bc8b74bff0\") " Mar 13 12:38:07 crc kubenswrapper[4632]: I0313 12:38:07.045255 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564cb4aa-8722-4f69-adeb-16bc8b74bff0-kube-api-access-j8f58" (OuterVolumeSpecName: "kube-api-access-j8f58") pod "564cb4aa-8722-4f69-adeb-16bc8b74bff0" (UID: "564cb4aa-8722-4f69-adeb-16bc8b74bff0"). InnerVolumeSpecName "kube-api-access-j8f58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:38:07 crc kubenswrapper[4632]: I0313 12:38:07.135546 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8f58\" (UniqueName: \"kubernetes.io/projected/564cb4aa-8722-4f69-adeb-16bc8b74bff0-kube-api-access-j8f58\") on node \"crc\" DevicePath \"\"" Mar 13 12:38:07 crc kubenswrapper[4632]: I0313 12:38:07.487791 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556758-989sl" event={"ID":"564cb4aa-8722-4f69-adeb-16bc8b74bff0","Type":"ContainerDied","Data":"5e6178bec38703640c7aad295f130226587694e4d95f5a4c4a588ca736c04131"} Mar 13 12:38:07 crc kubenswrapper[4632]: I0313 12:38:07.487837 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e6178bec38703640c7aad295f130226587694e4d95f5a4c4a588ca736c04131" Mar 13 12:38:07 crc kubenswrapper[4632]: I0313 12:38:07.487863 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556758-989sl" Mar 13 12:38:07 crc kubenswrapper[4632]: I0313 12:38:07.573744 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556752-48xgt"] Mar 13 12:38:07 crc kubenswrapper[4632]: I0313 12:38:07.584206 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556752-48xgt"] Mar 13 12:38:08 crc kubenswrapper[4632]: I0313 12:38:08.054096 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:38:08 crc kubenswrapper[4632]: E0313 12:38:08.055794 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:38:08 crc kubenswrapper[4632]: I0313 12:38:08.069043 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c311ec54-27ae-4082-bd23-4df180976b2f" path="/var/lib/kubelet/pods/c311ec54-27ae-4082-bd23-4df180976b2f/volumes" Mar 13 12:38:20 crc kubenswrapper[4632]: I0313 12:38:20.044737 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:38:20 crc kubenswrapper[4632]: E0313 12:38:20.045625 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:38:28 crc kubenswrapper[4632]: I0313 12:38:28.103447 4632 scope.go:117] "RemoveContainer" containerID="997fb6aac287ee23705f38733ad6b8cf02cea468d3978ae35a30d75ea0dfec0f" Mar 13 12:38:35 crc kubenswrapper[4632]: I0313 12:38:35.045145 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:38:35 crc kubenswrapper[4632]: E0313 12:38:35.045910 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:38:49 crc kubenswrapper[4632]: I0313 12:38:49.044481 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:38:49 crc kubenswrapper[4632]: E0313 12:38:49.045405 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:39:00 crc kubenswrapper[4632]: I0313 12:39:00.045536 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:39:00 crc kubenswrapper[4632]: E0313 12:39:00.046485 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:39:13 crc kubenswrapper[4632]: I0313 12:39:13.062490 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:39:13 crc kubenswrapper[4632]: E0313 12:39:13.064008 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:39:28 crc kubenswrapper[4632]: I0313 12:39:28.055640 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:39:28 crc kubenswrapper[4632]: E0313 12:39:28.056401 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:39:41 crc kubenswrapper[4632]: I0313 12:39:41.044756 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:39:41 crc kubenswrapper[4632]: E0313 12:39:41.045833 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:39:55 crc kubenswrapper[4632]: I0313 12:39:55.045754 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:39:55 crc kubenswrapper[4632]: E0313 12:39:55.046482 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:40:00 crc kubenswrapper[4632]: I0313 12:40:00.162310 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556760-fxrw2"] Mar 13 12:40:00 crc kubenswrapper[4632]: E0313 12:40:00.167234 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564cb4aa-8722-4f69-adeb-16bc8b74bff0" containerName="oc" Mar 13 12:40:00 crc kubenswrapper[4632]: I0313 12:40:00.167285 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="564cb4aa-8722-4f69-adeb-16bc8b74bff0" containerName="oc" Mar 13 12:40:00 crc kubenswrapper[4632]: I0313 12:40:00.167661 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="564cb4aa-8722-4f69-adeb-16bc8b74bff0" containerName="oc" Mar 13 12:40:00 crc kubenswrapper[4632]: I0313 12:40:00.169844 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556760-fxrw2" Mar 13 12:40:00 crc kubenswrapper[4632]: I0313 12:40:00.175775 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:40:00 crc kubenswrapper[4632]: I0313 12:40:00.176042 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:40:00 crc kubenswrapper[4632]: I0313 12:40:00.181612 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:40:00 crc kubenswrapper[4632]: I0313 12:40:00.184126 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556760-fxrw2"] Mar 13 12:40:00 crc kubenswrapper[4632]: I0313 12:40:00.238988 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fpcc\" (UniqueName: \"kubernetes.io/projected/58d6a88c-498f-4887-998a-c3e3a1a2fef2-kube-api-access-8fpcc\") pod \"auto-csr-approver-29556760-fxrw2\" (UID: \"58d6a88c-498f-4887-998a-c3e3a1a2fef2\") " pod="openshift-infra/auto-csr-approver-29556760-fxrw2" Mar 13 12:40:00 crc kubenswrapper[4632]: I0313 12:40:00.353118 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fpcc\" (UniqueName: \"kubernetes.io/projected/58d6a88c-498f-4887-998a-c3e3a1a2fef2-kube-api-access-8fpcc\") pod \"auto-csr-approver-29556760-fxrw2\" (UID: \"58d6a88c-498f-4887-998a-c3e3a1a2fef2\") " pod="openshift-infra/auto-csr-approver-29556760-fxrw2" Mar 13 12:40:00 crc kubenswrapper[4632]: I0313 12:40:00.398627 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fpcc\" (UniqueName: 
\"kubernetes.io/projected/58d6a88c-498f-4887-998a-c3e3a1a2fef2-kube-api-access-8fpcc\") pod \"auto-csr-approver-29556760-fxrw2\" (UID: \"58d6a88c-498f-4887-998a-c3e3a1a2fef2\") " pod="openshift-infra/auto-csr-approver-29556760-fxrw2" Mar 13 12:40:00 crc kubenswrapper[4632]: I0313 12:40:00.498708 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556760-fxrw2" Mar 13 12:40:01 crc kubenswrapper[4632]: I0313 12:40:01.047448 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556760-fxrw2"] Mar 13 12:40:01 crc kubenswrapper[4632]: W0313 12:40:01.061673 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58d6a88c_498f_4887_998a_c3e3a1a2fef2.slice/crio-b5bb002e8e749efb2b27f87a90b9dd24c01d801dd48b684fd281e34cb8a2bc22 WatchSource:0}: Error finding container b5bb002e8e749efb2b27f87a90b9dd24c01d801dd48b684fd281e34cb8a2bc22: Status 404 returned error can't find the container with id b5bb002e8e749efb2b27f87a90b9dd24c01d801dd48b684fd281e34cb8a2bc22 Mar 13 12:40:01 crc kubenswrapper[4632]: I0313 12:40:01.581347 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556760-fxrw2" event={"ID":"58d6a88c-498f-4887-998a-c3e3a1a2fef2","Type":"ContainerStarted","Data":"b5bb002e8e749efb2b27f87a90b9dd24c01d801dd48b684fd281e34cb8a2bc22"} Mar 13 12:40:04 crc kubenswrapper[4632]: I0313 12:40:04.619791 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556760-fxrw2" event={"ID":"58d6a88c-498f-4887-998a-c3e3a1a2fef2","Type":"ContainerStarted","Data":"a5bdb6d7b1972d01ea3faadd8b4d91d40f96718626d20034621dcf3eda3e5f37"} Mar 13 12:40:04 crc kubenswrapper[4632]: I0313 12:40:04.652983 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556760-fxrw2" podStartSLOduration=2.7252013550000003 podStartE2EDuration="4.652917154s" podCreationTimestamp="2026-03-13 12:40:00 +0000 UTC" firstStartedPulling="2026-03-13 12:40:01.069840625 +0000 UTC m=+9375.092370788" lastFinishedPulling="2026-03-13 12:40:02.997556444 +0000 UTC m=+9377.020086587" observedRunningTime="2026-03-13 12:40:04.641502724 +0000 UTC m=+9378.664032867" watchObservedRunningTime="2026-03-13 12:40:04.652917154 +0000 UTC m=+9378.675447287" Mar 13 12:40:05 crc kubenswrapper[4632]: I0313 12:40:05.632898 4632 generic.go:334] "Generic (PLEG): container finished" podID="58d6a88c-498f-4887-998a-c3e3a1a2fef2" containerID="a5bdb6d7b1972d01ea3faadd8b4d91d40f96718626d20034621dcf3eda3e5f37" exitCode=0 Mar 13 12:40:05 crc kubenswrapper[4632]: I0313 12:40:05.633136 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556760-fxrw2" event={"ID":"58d6a88c-498f-4887-998a-c3e3a1a2fef2","Type":"ContainerDied","Data":"a5bdb6d7b1972d01ea3faadd8b4d91d40f96718626d20034621dcf3eda3e5f37"} Mar 13 12:40:07 crc kubenswrapper[4632]: I0313 12:40:07.008823 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556760-fxrw2" Mar 13 12:40:07 crc kubenswrapper[4632]: I0313 12:40:07.097773 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fpcc\" (UniqueName: \"kubernetes.io/projected/58d6a88c-498f-4887-998a-c3e3a1a2fef2-kube-api-access-8fpcc\") pod \"58d6a88c-498f-4887-998a-c3e3a1a2fef2\" (UID: \"58d6a88c-498f-4887-998a-c3e3a1a2fef2\") " Mar 13 12:40:07 crc kubenswrapper[4632]: I0313 12:40:07.106697 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58d6a88c-498f-4887-998a-c3e3a1a2fef2-kube-api-access-8fpcc" (OuterVolumeSpecName: "kube-api-access-8fpcc") pod "58d6a88c-498f-4887-998a-c3e3a1a2fef2" (UID: "58d6a88c-498f-4887-998a-c3e3a1a2fef2"). InnerVolumeSpecName "kube-api-access-8fpcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:40:07 crc kubenswrapper[4632]: I0313 12:40:07.200931 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fpcc\" (UniqueName: \"kubernetes.io/projected/58d6a88c-498f-4887-998a-c3e3a1a2fef2-kube-api-access-8fpcc\") on node \"crc\" DevicePath \"\"" Mar 13 12:40:07 crc kubenswrapper[4632]: I0313 12:40:07.660661 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556760-fxrw2" event={"ID":"58d6a88c-498f-4887-998a-c3e3a1a2fef2","Type":"ContainerDied","Data":"b5bb002e8e749efb2b27f87a90b9dd24c01d801dd48b684fd281e34cb8a2bc22"} Mar 13 12:40:07 crc kubenswrapper[4632]: I0313 12:40:07.660732 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556760-fxrw2" Mar 13 12:40:07 crc kubenswrapper[4632]: I0313 12:40:07.661171 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5bb002e8e749efb2b27f87a90b9dd24c01d801dd48b684fd281e34cb8a2bc22" Mar 13 12:40:07 crc kubenswrapper[4632]: I0313 12:40:07.737769 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556754-kcjmj"] Mar 13 12:40:07 crc kubenswrapper[4632]: I0313 12:40:07.746631 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556754-kcjmj"] Mar 13 12:40:08 crc kubenswrapper[4632]: I0313 12:40:08.063786 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccc70d73-b58b-4a2c-9bce-dc27405c5710" path="/var/lib/kubelet/pods/ccc70d73-b58b-4a2c-9bce-dc27405c5710/volumes" Mar 13 12:40:09 crc kubenswrapper[4632]: I0313 12:40:09.046148 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:40:09 crc kubenswrapper[4632]: E0313 12:40:09.047020 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:40:20 crc kubenswrapper[4632]: I0313 12:40:20.044049 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:40:20 crc kubenswrapper[4632]: E0313 12:40:20.044640 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:40:28 crc kubenswrapper[4632]: I0313 12:40:28.347137 4632 scope.go:117] "RemoveContainer" containerID="33c9be3390a29151e585ddbf79d6ef390b1a094d7878d4d4c96b9b0bb39d369c" Mar 13 12:40:33 crc kubenswrapper[4632]: I0313 12:40:33.044905 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:40:33 crc kubenswrapper[4632]: E0313 12:40:33.045471 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:40:44 crc kubenswrapper[4632]: I0313 12:40:44.045159 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:40:44 crc kubenswrapper[4632]: E0313 12:40:44.046022 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:40:55 crc kubenswrapper[4632]: I0313 12:40:55.046163 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:40:55 crc kubenswrapper[4632]: E0313 12:40:55.049704 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:41:10 crc kubenswrapper[4632]: I0313 12:41:10.044847 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:41:10 crc kubenswrapper[4632]: E0313 12:41:10.045600 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:41:25 crc kubenswrapper[4632]: I0313 12:41:25.044302 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:41:25 crc kubenswrapper[4632]: E0313 12:41:25.045035 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:41:38 crc kubenswrapper[4632]: I0313 12:41:38.052109 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:41:38 crc kubenswrapper[4632]: E0313 12:41:38.053086 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:41:51 crc kubenswrapper[4632]: I0313 12:41:51.044420 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:41:51 crc kubenswrapper[4632]: E0313 12:41:51.045339 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.163323 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jp4lw"] Mar 13 12:42:00 crc kubenswrapper[4632]: E0313 12:42:00.164148 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d6a88c-498f-4887-998a-c3e3a1a2fef2" containerName="oc" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.164160 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d6a88c-498f-4887-998a-c3e3a1a2fef2" containerName="oc" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.164367 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d6a88c-498f-4887-998a-c3e3a1a2fef2" containerName="oc" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.165836 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.183055 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jp4lw"] Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.251303 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556762-mjtxh"] Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.252746 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556762-mjtxh" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.258803 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.259462 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.264288 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.279511 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556762-mjtxh"] Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.281767 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fhsr\" (UniqueName: \"kubernetes.io/projected/0325e4fd-6765-4752-afb6-831414e3e532-kube-api-access-8fhsr\") pod \"community-operators-jp4lw\" (UID: \"0325e4fd-6765-4752-afb6-831414e3e532\") " pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.282002 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0325e4fd-6765-4752-afb6-831414e3e532-utilities\") pod \"community-operators-jp4lw\" (UID: \"0325e4fd-6765-4752-afb6-831414e3e532\") " pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.282031 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0325e4fd-6765-4752-afb6-831414e3e532-catalog-content\") pod \"community-operators-jp4lw\" (UID: \"0325e4fd-6765-4752-afb6-831414e3e532\") " pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.384025 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xcrm\" (UniqueName: \"kubernetes.io/projected/cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae-kube-api-access-2xcrm\") pod \"auto-csr-approver-29556762-mjtxh\" (UID: \"cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae\") " pod="openshift-infra/auto-csr-approver-29556762-mjtxh" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.384362 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0325e4fd-6765-4752-afb6-831414e3e532-utilities\") pod \"community-operators-jp4lw\" (UID: \"0325e4fd-6765-4752-afb6-831414e3e532\") " pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.384384 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0325e4fd-6765-4752-afb6-831414e3e532-catalog-content\") pod \"community-operators-jp4lw\" (UID: \"0325e4fd-6765-4752-afb6-831414e3e532\") " pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.384451 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fhsr\" (UniqueName: \"kubernetes.io/projected/0325e4fd-6765-4752-afb6-831414e3e532-kube-api-access-8fhsr\") pod \"community-operators-jp4lw\" 
(UID: \"0325e4fd-6765-4752-afb6-831414e3e532\") " pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.387558 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0325e4fd-6765-4752-afb6-831414e3e532-utilities\") pod \"community-operators-jp4lw\" (UID: \"0325e4fd-6765-4752-afb6-831414e3e532\") " pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.387808 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0325e4fd-6765-4752-afb6-831414e3e532-catalog-content\") pod \"community-operators-jp4lw\" (UID: \"0325e4fd-6765-4752-afb6-831414e3e532\") " pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.410218 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fhsr\" (UniqueName: \"kubernetes.io/projected/0325e4fd-6765-4752-afb6-831414e3e532-kube-api-access-8fhsr\") pod \"community-operators-jp4lw\" (UID: \"0325e4fd-6765-4752-afb6-831414e3e532\") " pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.482199 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.486610 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xcrm\" (UniqueName: \"kubernetes.io/projected/cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae-kube-api-access-2xcrm\") pod \"auto-csr-approver-29556762-mjtxh\" (UID: \"cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae\") " pod="openshift-infra/auto-csr-approver-29556762-mjtxh" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.514255 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xcrm\" (UniqueName: \"kubernetes.io/projected/cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae-kube-api-access-2xcrm\") pod \"auto-csr-approver-29556762-mjtxh\" (UID: \"cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae\") " pod="openshift-infra/auto-csr-approver-29556762-mjtxh" Mar 13 12:42:00 crc kubenswrapper[4632]: I0313 12:42:00.573427 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556762-mjtxh" Mar 13 12:42:01 crc kubenswrapper[4632]: I0313 12:42:01.485671 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jp4lw"] Mar 13 12:42:01 crc kubenswrapper[4632]: I0313 12:42:01.529461 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jp4lw" event={"ID":"0325e4fd-6765-4752-afb6-831414e3e532","Type":"ContainerStarted","Data":"f753646e3f4b18f0420fff84cc10fc721e6ad55cc5ba98ca0dd48214462119c9"} Mar 13 12:42:01 crc kubenswrapper[4632]: I0313 12:42:01.564567 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556762-mjtxh"] Mar 13 12:42:01 crc kubenswrapper[4632]: W0313 12:42:01.565562 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf13c17d_1ea6_4a0e_bfbd_e3bfc8d453ae.slice/crio-922c1997c080dd1be09c651a5cfc9f28a4d04ab91c7bb78a448553faa0d90e89 WatchSource:0}: Error finding container 922c1997c080dd1be09c651a5cfc9f28a4d04ab91c7bb78a448553faa0d90e89: Status 404 returned error can't find the container with id 922c1997c080dd1be09c651a5cfc9f28a4d04ab91c7bb78a448553faa0d90e89 Mar 13 12:42:02 crc kubenswrapper[4632]: I0313 12:42:02.539862 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556762-mjtxh" event={"ID":"cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae","Type":"ContainerStarted","Data":"922c1997c080dd1be09c651a5cfc9f28a4d04ab91c7bb78a448553faa0d90e89"} Mar 13 12:42:02 crc kubenswrapper[4632]: I0313 12:42:02.542275 4632 generic.go:334] "Generic (PLEG): container finished" podID="0325e4fd-6765-4752-afb6-831414e3e532" containerID="c2b08b03a5ca7bde67bc2372bc69a87126ed72cacb93a81dca4fa17a497ed3c6" exitCode=0 Mar 13 12:42:02 crc kubenswrapper[4632]: I0313 12:42:02.542333 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jp4lw" event={"ID":"0325e4fd-6765-4752-afb6-831414e3e532","Type":"ContainerDied","Data":"c2b08b03a5ca7bde67bc2372bc69a87126ed72cacb93a81dca4fa17a497ed3c6"} Mar 13 12:42:03 crc kubenswrapper[4632]: I0313 12:42:03.554044 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556762-mjtxh" event={"ID":"cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae","Type":"ContainerStarted","Data":"8dfadb29bc36e882b8d8ebf6016fec294107233b5f8602de74595b7d612d371c"} Mar 13 12:42:03 crc kubenswrapper[4632]: I0313 12:42:03.557274 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jp4lw" event={"ID":"0325e4fd-6765-4752-afb6-831414e3e532","Type":"ContainerStarted","Data":"e28102d9e0189fa624d8379759716e19dc6f79055f8b59b2f5724f1145bcc559"} Mar 13 12:42:03 crc kubenswrapper[4632]: I0313 12:42:03.580256 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556762-mjtxh" podStartSLOduration=2.230684517 podStartE2EDuration="3.580234709s" podCreationTimestamp="2026-03-13 12:42:00 +0000 UTC" firstStartedPulling="2026-03-13 12:42:01.567637129 +0000 UTC m=+9495.590167262" lastFinishedPulling="2026-03-13 12:42:02.917187321 +0000 UTC m=+9496.939717454" observedRunningTime="2026-03-13 12:42:03.568574223 +0000 UTC m=+9497.591104366" watchObservedRunningTime="2026-03-13 12:42:03.580234709 +0000 UTC m=+9497.602764842" Mar 13 12:42:05 crc kubenswrapper[4632]: I0313 12:42:05.043854 4632 
scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:42:05 crc kubenswrapper[4632]: E0313 12:42:05.044640 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:42:05 crc kubenswrapper[4632]: I0313 12:42:05.579898 4632 generic.go:334] "Generic (PLEG): container finished" podID="0325e4fd-6765-4752-afb6-831414e3e532" containerID="e28102d9e0189fa624d8379759716e19dc6f79055f8b59b2f5724f1145bcc559" exitCode=0 Mar 13 12:42:05 crc kubenswrapper[4632]: I0313 12:42:05.580005 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jp4lw" event={"ID":"0325e4fd-6765-4752-afb6-831414e3e532","Type":"ContainerDied","Data":"e28102d9e0189fa624d8379759716e19dc6f79055f8b59b2f5724f1145bcc559"} Mar 13 12:42:05 crc kubenswrapper[4632]: I0313 12:42:05.582164 4632 generic.go:334] "Generic (PLEG): container finished" podID="cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae" containerID="8dfadb29bc36e882b8d8ebf6016fec294107233b5f8602de74595b7d612d371c" exitCode=0 Mar 13 12:42:05 crc kubenswrapper[4632]: I0313 12:42:05.582201 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556762-mjtxh" event={"ID":"cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae","Type":"ContainerDied","Data":"8dfadb29bc36e882b8d8ebf6016fec294107233b5f8602de74595b7d612d371c"} Mar 13 12:42:06 crc kubenswrapper[4632]: I0313 12:42:06.592716 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jp4lw" event={"ID":"0325e4fd-6765-4752-afb6-831414e3e532","Type":"ContainerStarted","Data":"6f6a15708a6bdac7d0aeb9980066237cfba1941652f86fc3be3ae158e72127eb"} Mar 13 12:42:06 crc kubenswrapper[4632]: I0313 12:42:06.616635 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jp4lw" podStartSLOduration=3.13655522 podStartE2EDuration="6.616615533s" podCreationTimestamp="2026-03-13 12:42:00 +0000 UTC" firstStartedPulling="2026-03-13 12:42:02.544443411 +0000 UTC m=+9496.566973544" lastFinishedPulling="2026-03-13 12:42:06.024503724 +0000 UTC m=+9500.047033857" observedRunningTime="2026-03-13 12:42:06.614507372 +0000 UTC m=+9500.637037515" watchObservedRunningTime="2026-03-13 12:42:06.616615533 +0000 UTC m=+9500.639145666" Mar 13 12:42:07 crc kubenswrapper[4632]: I0313 12:42:07.211307 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556762-mjtxh" Mar 13 12:42:07 crc kubenswrapper[4632]: I0313 12:42:07.325190 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xcrm\" (UniqueName: \"kubernetes.io/projected/cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae-kube-api-access-2xcrm\") pod \"cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae\" (UID: \"cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae\") " Mar 13 12:42:07 crc kubenswrapper[4632]: I0313 12:42:07.340108 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae-kube-api-access-2xcrm" (OuterVolumeSpecName: "kube-api-access-2xcrm") pod "cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae" (UID: "cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae"). InnerVolumeSpecName "kube-api-access-2xcrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:42:07 crc kubenswrapper[4632]: I0313 12:42:07.427355 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xcrm\" (UniqueName: \"kubernetes.io/projected/cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae-kube-api-access-2xcrm\") on node \"crc\" DevicePath \"\"" Mar 13 12:42:07 crc kubenswrapper[4632]: I0313 12:42:07.603213 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556762-mjtxh" event={"ID":"cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae","Type":"ContainerDied","Data":"922c1997c080dd1be09c651a5cfc9f28a4d04ab91c7bb78a448553faa0d90e89"} Mar 13 12:42:07 crc kubenswrapper[4632]: I0313 12:42:07.604169 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="922c1997c080dd1be09c651a5cfc9f28a4d04ab91c7bb78a448553faa0d90e89" Mar 13 12:42:07 crc kubenswrapper[4632]: I0313 12:42:07.603302 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556762-mjtxh" Mar 13 12:42:07 crc kubenswrapper[4632]: I0313 12:42:07.693208 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556756-4d96k"] Mar 13 12:42:07 crc kubenswrapper[4632]: I0313 12:42:07.702155 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556756-4d96k"] Mar 13 12:42:08 crc kubenswrapper[4632]: I0313 12:42:08.062539 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bab5d92-9e39-4d06-98ae-8b9b50d50214" path="/var/lib/kubelet/pods/4bab5d92-9e39-4d06-98ae-8b9b50d50214/volumes" Mar 13 12:42:10 crc kubenswrapper[4632]: I0313 12:42:10.483502 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:10 crc kubenswrapper[4632]: I0313 12:42:10.483824 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:11 crc kubenswrapper[4632]: I0313 12:42:11.542532 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jp4lw" podUID="0325e4fd-6765-4752-afb6-831414e3e532" containerName="registry-server" probeResult="failure" output=< Mar 13 12:42:11 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:42:11 crc kubenswrapper[4632]: > Mar 13 12:42:19 crc kubenswrapper[4632]: I0313 12:42:19.045664 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:42:19 crc kubenswrapper[4632]: E0313 12:42:19.046593 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:42:20 crc kubenswrapper[4632]: I0313 12:42:20.577527 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:20 crc kubenswrapper[4632]: I0313 12:42:20.635459 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:20 crc kubenswrapper[4632]: I0313 12:42:20.820955 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jp4lw"] Mar 13 12:42:21 crc kubenswrapper[4632]: I0313 12:42:21.744358 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jp4lw" podUID="0325e4fd-6765-4752-afb6-831414e3e532" containerName="registry-server" containerID="cri-o://6f6a15708a6bdac7d0aeb9980066237cfba1941652f86fc3be3ae158e72127eb" gracePeriod=2 Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.600191 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.696437 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0325e4fd-6765-4752-afb6-831414e3e532-utilities\") pod \"0325e4fd-6765-4752-afb6-831414e3e532\" (UID: \"0325e4fd-6765-4752-afb6-831414e3e532\") " Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.696510 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fhsr\" (UniqueName: \"kubernetes.io/projected/0325e4fd-6765-4752-afb6-831414e3e532-kube-api-access-8fhsr\") pod \"0325e4fd-6765-4752-afb6-831414e3e532\" (UID: \"0325e4fd-6765-4752-afb6-831414e3e532\") " Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.696821 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0325e4fd-6765-4752-afb6-831414e3e532-catalog-content\") pod \"0325e4fd-6765-4752-afb6-831414e3e532\" (UID: \"0325e4fd-6765-4752-afb6-831414e3e532\") " Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.698112 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0325e4fd-6765-4752-afb6-831414e3e532-utilities" (OuterVolumeSpecName: "utilities") pod "0325e4fd-6765-4752-afb6-831414e3e532" (UID: "0325e4fd-6765-4752-afb6-831414e3e532"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.738079 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0325e4fd-6765-4752-afb6-831414e3e532-kube-api-access-8fhsr" (OuterVolumeSpecName: "kube-api-access-8fhsr") pod "0325e4fd-6765-4752-afb6-831414e3e532" (UID: "0325e4fd-6765-4752-afb6-831414e3e532"). InnerVolumeSpecName "kube-api-access-8fhsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.757396 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0325e4fd-6765-4752-afb6-831414e3e532-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0325e4fd-6765-4752-afb6-831414e3e532" (UID: "0325e4fd-6765-4752-afb6-831414e3e532"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.761341 4632 generic.go:334] "Generic (PLEG): container finished" podID="0325e4fd-6765-4752-afb6-831414e3e532" containerID="6f6a15708a6bdac7d0aeb9980066237cfba1941652f86fc3be3ae158e72127eb" exitCode=0 Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.761364 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jp4lw" Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.761387 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jp4lw" event={"ID":"0325e4fd-6765-4752-afb6-831414e3e532","Type":"ContainerDied","Data":"6f6a15708a6bdac7d0aeb9980066237cfba1941652f86fc3be3ae158e72127eb"} Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.761416 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jp4lw" event={"ID":"0325e4fd-6765-4752-afb6-831414e3e532","Type":"ContainerDied","Data":"f753646e3f4b18f0420fff84cc10fc721e6ad55cc5ba98ca0dd48214462119c9"} Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.761436 4632 scope.go:117] "RemoveContainer" containerID="6f6a15708a6bdac7d0aeb9980066237cfba1941652f86fc3be3ae158e72127eb" Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.798635 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0325e4fd-6765-4752-afb6-831414e3e532-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.802236 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0325e4fd-6765-4752-afb6-831414e3e532-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.802322 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fhsr\" (UniqueName: \"kubernetes.io/projected/0325e4fd-6765-4752-afb6-831414e3e532-kube-api-access-8fhsr\") on node \"crc\" DevicePath \"\"" Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.802813 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jp4lw"] Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.812222 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jp4lw"] Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.822174 4632 scope.go:117] "RemoveContainer" containerID="e28102d9e0189fa624d8379759716e19dc6f79055f8b59b2f5724f1145bcc559" Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.849526 4632 scope.go:117] "RemoveContainer" containerID="c2b08b03a5ca7bde67bc2372bc69a87126ed72cacb93a81dca4fa17a497ed3c6" Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.928755 4632 scope.go:117] "RemoveContainer" containerID="6f6a15708a6bdac7d0aeb9980066237cfba1941652f86fc3be3ae158e72127eb" Mar 13 12:42:22 crc kubenswrapper[4632]: E0313 12:42:22.933051 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f6a15708a6bdac7d0aeb9980066237cfba1941652f86fc3be3ae158e72127eb\": container with ID starting with 6f6a15708a6bdac7d0aeb9980066237cfba1941652f86fc3be3ae158e72127eb not found: ID does not exist" containerID="6f6a15708a6bdac7d0aeb9980066237cfba1941652f86fc3be3ae158e72127eb" Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.933098 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f6a15708a6bdac7d0aeb9980066237cfba1941652f86fc3be3ae158e72127eb"} err="failed to get container status \"6f6a15708a6bdac7d0aeb9980066237cfba1941652f86fc3be3ae158e72127eb\": rpc error: code = NotFound desc = could not find container \"6f6a15708a6bdac7d0aeb9980066237cfba1941652f86fc3be3ae158e72127eb\": container with ID starting 
with 6f6a15708a6bdac7d0aeb9980066237cfba1941652f86fc3be3ae158e72127eb not found: ID does not exist" Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.933122 4632 scope.go:117] "RemoveContainer" containerID="e28102d9e0189fa624d8379759716e19dc6f79055f8b59b2f5724f1145bcc559" Mar 13 12:42:22 crc kubenswrapper[4632]: E0313 12:42:22.933730 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e28102d9e0189fa624d8379759716e19dc6f79055f8b59b2f5724f1145bcc559\": container with ID starting with e28102d9e0189fa624d8379759716e19dc6f79055f8b59b2f5724f1145bcc559 not found: ID does not exist" containerID="e28102d9e0189fa624d8379759716e19dc6f79055f8b59b2f5724f1145bcc559" Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.933753 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28102d9e0189fa624d8379759716e19dc6f79055f8b59b2f5724f1145bcc559"} err="failed to get container status \"e28102d9e0189fa624d8379759716e19dc6f79055f8b59b2f5724f1145bcc559\": rpc error: code = NotFound desc = could not find container \"e28102d9e0189fa624d8379759716e19dc6f79055f8b59b2f5724f1145bcc559\": container with ID starting with e28102d9e0189fa624d8379759716e19dc6f79055f8b59b2f5724f1145bcc559 not found: ID does not exist" Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.933767 4632 scope.go:117] "RemoveContainer" containerID="c2b08b03a5ca7bde67bc2372bc69a87126ed72cacb93a81dca4fa17a497ed3c6" Mar 13 12:42:22 crc kubenswrapper[4632]: E0313 12:42:22.934425 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2b08b03a5ca7bde67bc2372bc69a87126ed72cacb93a81dca4fa17a497ed3c6\": container with ID starting with c2b08b03a5ca7bde67bc2372bc69a87126ed72cacb93a81dca4fa17a497ed3c6 not found: ID does not exist" containerID="c2b08b03a5ca7bde67bc2372bc69a87126ed72cacb93a81dca4fa17a497ed3c6" Mar 13 12:42:22 crc kubenswrapper[4632]: I0313 12:42:22.934453 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b08b03a5ca7bde67bc2372bc69a87126ed72cacb93a81dca4fa17a497ed3c6"} err="failed to get container status \"c2b08b03a5ca7bde67bc2372bc69a87126ed72cacb93a81dca4fa17a497ed3c6\": rpc error: code = NotFound desc = could not find container \"c2b08b03a5ca7bde67bc2372bc69a87126ed72cacb93a81dca4fa17a497ed3c6\": container with ID starting with c2b08b03a5ca7bde67bc2372bc69a87126ed72cacb93a81dca4fa17a497ed3c6 not found: ID does not exist" Mar 13 12:42:24 crc kubenswrapper[4632]: I0313 12:42:24.055158 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0325e4fd-6765-4752-afb6-831414e3e532" path="/var/lib/kubelet/pods/0325e4fd-6765-4752-afb6-831414e3e532/volumes" Mar 13 12:42:28 crc kubenswrapper[4632]: I0313 12:42:28.472101 4632 scope.go:117] "RemoveContainer" containerID="a8e65f824fd306a694713a170d4b213522c9b5fbd2a9bb06608f463371bdb733" Mar 13 12:42:33 crc kubenswrapper[4632]: I0313 12:42:33.043920 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:42:33 crc kubenswrapper[4632]: E0313 12:42:33.044868 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:42:44 crc kubenswrapper[4632]: I0313 12:42:44.044911 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:42:44 crc kubenswrapper[4632]: I0313 12:42:44.969521 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"ac0aa587db0bc6f14a810b1c0a407933497eafb76c5051481d9814592d0380b3"} Mar 13 12:42:56 crc kubenswrapper[4632]: I0313 12:42:56.073025 4632 generic.go:334] "Generic (PLEG): container finished" podID="611401cc-04fe-4276-82fa-a896182802d4" containerID="e5092a16adcd02c327c069b34afdd26aca8018f63ed747e3778a6c696a0e6a3c" exitCode=0 Mar 13 12:42:56 crc kubenswrapper[4632]: I0313 12:42:56.073605 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"611401cc-04fe-4276-82fa-a896182802d4","Type":"ContainerDied","Data":"e5092a16adcd02c327c069b34afdd26aca8018f63ed747e3778a6c696a0e6a3c"} Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.864441 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.933700 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-openstack-config-secret\") pod \"611401cc-04fe-4276-82fa-a896182802d4\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.934035 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"611401cc-04fe-4276-82fa-a896182802d4\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.934066 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/611401cc-04fe-4276-82fa-a896182802d4-test-operator-ephemeral-workdir\") pod \"611401cc-04fe-4276-82fa-a896182802d4\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.934091 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-ssh-key\") pod \"611401cc-04fe-4276-82fa-a896182802d4\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.934866 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/611401cc-04fe-4276-82fa-a896182802d4-config-data\") pod \"611401cc-04fe-4276-82fa-a896182802d4\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.934921 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-ca-certs\") pod 
\"611401cc-04fe-4276-82fa-a896182802d4\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.935099 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/611401cc-04fe-4276-82fa-a896182802d4-openstack-config\") pod \"611401cc-04fe-4276-82fa-a896182802d4\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.935175 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrwf2\" (UniqueName: \"kubernetes.io/projected/611401cc-04fe-4276-82fa-a896182802d4-kube-api-access-hrwf2\") pod \"611401cc-04fe-4276-82fa-a896182802d4\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.935203 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/611401cc-04fe-4276-82fa-a896182802d4-test-operator-ephemeral-temporary\") pod \"611401cc-04fe-4276-82fa-a896182802d4\" (UID: \"611401cc-04fe-4276-82fa-a896182802d4\") " Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.935675 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/611401cc-04fe-4276-82fa-a896182802d4-config-data" (OuterVolumeSpecName: "config-data") pod "611401cc-04fe-4276-82fa-a896182802d4" (UID: "611401cc-04fe-4276-82fa-a896182802d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.938069 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/611401cc-04fe-4276-82fa-a896182802d4-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "611401cc-04fe-4276-82fa-a896182802d4" (UID: "611401cc-04fe-4276-82fa-a896182802d4"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.952470 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/611401cc-04fe-4276-82fa-a896182802d4-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "611401cc-04fe-4276-82fa-a896182802d4" (UID: "611401cc-04fe-4276-82fa-a896182802d4"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.952762 4632 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/611401cc-04fe-4276-82fa-a896182802d4-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.952780 4632 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/611401cc-04fe-4276-82fa-a896182802d4-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.952793 4632 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/611401cc-04fe-4276-82fa-a896182802d4-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.954099 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "611401cc-04fe-4276-82fa-a896182802d4" (UID: "611401cc-04fe-4276-82fa-a896182802d4"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 13 12:42:57 crc kubenswrapper[4632]: I0313 12:42:57.995231 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/611401cc-04fe-4276-82fa-a896182802d4-kube-api-access-hrwf2" (OuterVolumeSpecName: "kube-api-access-hrwf2") pod "611401cc-04fe-4276-82fa-a896182802d4" (UID: "611401cc-04fe-4276-82fa-a896182802d4"). InnerVolumeSpecName "kube-api-access-hrwf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:42:58 crc kubenswrapper[4632]: I0313 12:42:58.075322 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/611401cc-04fe-4276-82fa-a896182802d4-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "611401cc-04fe-4276-82fa-a896182802d4" (UID: "611401cc-04fe-4276-82fa-a896182802d4"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:42:58 crc kubenswrapper[4632]: I0313 12:42:58.075543 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrwf2\" (UniqueName: \"kubernetes.io/projected/611401cc-04fe-4276-82fa-a896182802d4-kube-api-access-hrwf2\") on node \"crc\" DevicePath \"\"" Mar 13 12:42:58 crc kubenswrapper[4632]: I0313 12:42:58.077908 4632 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Mar 13 12:42:58 crc kubenswrapper[4632]: I0313 12:42:58.083226 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "611401cc-04fe-4276-82fa-a896182802d4" (UID: "611401cc-04fe-4276-82fa-a896182802d4"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:42:58 crc kubenswrapper[4632]: I0313 12:42:58.089734 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "611401cc-04fe-4276-82fa-a896182802d4" (UID: "611401cc-04fe-4276-82fa-a896182802d4"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:42:58 crc kubenswrapper[4632]: I0313 12:42:58.097596 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 13 12:42:58 crc kubenswrapper[4632]: I0313 12:42:58.099023 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"611401cc-04fe-4276-82fa-a896182802d4","Type":"ContainerDied","Data":"95b1b1d6a519cb7b9bfef154cebb6e4b73104a8706f52af49a8997ffa20ebd91"} Mar 13 12:42:58 crc kubenswrapper[4632]: I0313 12:42:58.099053 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95b1b1d6a519cb7b9bfef154cebb6e4b73104a8706f52af49a8997ffa20ebd91" Mar 13 12:42:58 crc kubenswrapper[4632]: I0313 12:42:58.100785 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "611401cc-04fe-4276-82fa-a896182802d4" (UID: "611401cc-04fe-4276-82fa-a896182802d4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:42:58 crc kubenswrapper[4632]: I0313 12:42:58.115840 4632 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Mar 13 12:42:58 crc kubenswrapper[4632]: I0313 12:42:58.179397 4632 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-ca-certs\") on node \"crc\" DevicePath \"\"" Mar 13 12:42:58 crc kubenswrapper[4632]: I0313 12:42:58.179500 4632 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/611401cc-04fe-4276-82fa-a896182802d4-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 13 12:42:58 crc kubenswrapper[4632]: I0313 12:42:58.179514 4632 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 13 12:42:58 crc kubenswrapper[4632]: I0313 12:42:58.179523 4632 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Mar 13 12:42:58 crc kubenswrapper[4632]: I0313 12:42:58.179534 4632 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/611401cc-04fe-4276-82fa-a896182802d4-ssh-key\") on node \"crc\" DevicePath \"\"" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.499952 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 13 12:43:00 crc kubenswrapper[4632]: E0313 12:43:00.500845 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0325e4fd-6765-4752-afb6-831414e3e532" 
containerName="extract-content" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.500858 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="0325e4fd-6765-4752-afb6-831414e3e532" containerName="extract-content" Mar 13 12:43:00 crc kubenswrapper[4632]: E0313 12:43:00.500877 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0325e4fd-6765-4752-afb6-831414e3e532" containerName="registry-server" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.500883 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="0325e4fd-6765-4752-afb6-831414e3e532" containerName="registry-server" Mar 13 12:43:00 crc kubenswrapper[4632]: E0313 12:43:00.500913 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0325e4fd-6765-4752-afb6-831414e3e532" containerName="extract-utilities" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.500920 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="0325e4fd-6765-4752-afb6-831414e3e532" containerName="extract-utilities" Mar 13 12:43:00 crc kubenswrapper[4632]: E0313 12:43:00.500951 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae" containerName="oc" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.500959 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae" containerName="oc" Mar 13 12:43:00 crc kubenswrapper[4632]: E0313 12:43:00.500970 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="611401cc-04fe-4276-82fa-a896182802d4" containerName="tempest-tests-tempest-tests-runner" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.500977 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="611401cc-04fe-4276-82fa-a896182802d4" containerName="tempest-tests-tempest-tests-runner" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.501138 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae" containerName="oc" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.501158 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="0325e4fd-6765-4752-afb6-831414e3e532" containerName="registry-server" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.501171 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="611401cc-04fe-4276-82fa-a896182802d4" containerName="tempest-tests-tempest-tests-runner" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.501801 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.507963 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9w9qk" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.521323 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.622798 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c4836490-7b24-4245-bf50-7d590576f21e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.622952 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqlzz\" (UniqueName: \"kubernetes.io/projected/c4836490-7b24-4245-bf50-7d590576f21e-kube-api-access-vqlzz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c4836490-7b24-4245-bf50-7d590576f21e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.724951 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqlzz\" (UniqueName: \"kubernetes.io/projected/c4836490-7b24-4245-bf50-7d590576f21e-kube-api-access-vqlzz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c4836490-7b24-4245-bf50-7d590576f21e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.725094 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c4836490-7b24-4245-bf50-7d590576f21e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.728741 4632 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c4836490-7b24-4245-bf50-7d590576f21e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.755857 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqlzz\" (UniqueName: \"kubernetes.io/projected/c4836490-7b24-4245-bf50-7d590576f21e-kube-api-access-vqlzz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c4836490-7b24-4245-bf50-7d590576f21e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 13 12:43:00 crc kubenswrapper[4632]: I0313 12:43:00.764508 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c4836490-7b24-4245-bf50-7d590576f21e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 13 12:43:00 crc 
kubenswrapper[4632]: I0313 12:43:00.823819 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 13 12:43:01 crc kubenswrapper[4632]: I0313 12:43:01.369676 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 12:43:01 crc kubenswrapper[4632]: I0313 12:43:01.371253 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 13 12:43:02 crc kubenswrapper[4632]: I0313 12:43:02.136230 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"c4836490-7b24-4245-bf50-7d590576f21e","Type":"ContainerStarted","Data":"fcb3b56a670015e8f1282bd22f7ebb710e98ea5f9b1d93e0919fac38dd6d6288"} Mar 13 12:43:03 crc kubenswrapper[4632]: I0313 12:43:03.149389 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"c4836490-7b24-4245-bf50-7d590576f21e","Type":"ContainerStarted","Data":"467eab617c540776869c8f7c1778fe8bb6f1a0f79a6bdc7565116731eee00756"} Mar 13 12:43:03 crc kubenswrapper[4632]: I0313 12:43:03.172795 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.733172398 podStartE2EDuration="3.172779618s" podCreationTimestamp="2026-03-13 12:43:00 +0000 UTC" firstStartedPulling="2026-03-13 12:43:01.368746762 +0000 UTC m=+9555.391276895" lastFinishedPulling="2026-03-13 12:43:02.808353982 +0000 UTC m=+9556.830884115" observedRunningTime="2026-03-13 12:43:03.171060806 +0000 UTC m=+9557.193590939" watchObservedRunningTime="2026-03-13 12:43:03.172779618 +0000 UTC m=+9557.195309751" Mar 13 12:43:50 crc kubenswrapper[4632]: I0313 12:43:50.679199 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jfn52/must-gather-9gqfn"] Mar 13 12:43:50 crc kubenswrapper[4632]: I0313 12:43:50.682374 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jfn52/must-gather-9gqfn" Mar 13 12:43:50 crc kubenswrapper[4632]: I0313 12:43:50.684775 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jfn52"/"openshift-service-ca.crt" Mar 13 12:43:50 crc kubenswrapper[4632]: I0313 12:43:50.687060 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jfn52"/"kube-root-ca.crt" Mar 13 12:43:50 crc kubenswrapper[4632]: I0313 12:43:50.690539 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jfn52"/"default-dockercfg-74jfd" Mar 13 12:43:50 crc kubenswrapper[4632]: I0313 12:43:50.711215 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/252f97d9-adeb-4cce-858d-eb0bdb151871-must-gather-output\") pod \"must-gather-9gqfn\" (UID: \"252f97d9-adeb-4cce-858d-eb0bdb151871\") " pod="openshift-must-gather-jfn52/must-gather-9gqfn" Mar 13 12:43:50 crc kubenswrapper[4632]: I0313 12:43:50.711505 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwxgx\" (UniqueName: \"kubernetes.io/projected/252f97d9-adeb-4cce-858d-eb0bdb151871-kube-api-access-kwxgx\") pod \"must-gather-9gqfn\" (UID: \"252f97d9-adeb-4cce-858d-eb0bdb151871\") " pod="openshift-must-gather-jfn52/must-gather-9gqfn" Mar 13 12:43:50 crc kubenswrapper[4632]: I0313 12:43:50.813554 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/252f97d9-adeb-4cce-858d-eb0bdb151871-must-gather-output\") pod \"must-gather-9gqfn\" (UID: \"252f97d9-adeb-4cce-858d-eb0bdb151871\") " pod="openshift-must-gather-jfn52/must-gather-9gqfn" Mar 13 12:43:50 crc kubenswrapper[4632]: I0313 12:43:50.813671 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwxgx\" (UniqueName: \"kubernetes.io/projected/252f97d9-adeb-4cce-858d-eb0bdb151871-kube-api-access-kwxgx\") pod \"must-gather-9gqfn\" (UID: \"252f97d9-adeb-4cce-858d-eb0bdb151871\") " pod="openshift-must-gather-jfn52/must-gather-9gqfn" Mar 13 12:43:50 crc kubenswrapper[4632]: I0313 12:43:50.814239 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/252f97d9-adeb-4cce-858d-eb0bdb151871-must-gather-output\") pod \"must-gather-9gqfn\" (UID: \"252f97d9-adeb-4cce-858d-eb0bdb151871\") " pod="openshift-must-gather-jfn52/must-gather-9gqfn" Mar 13 12:43:50 crc kubenswrapper[4632]: I0313 12:43:50.835635 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwxgx\" (UniqueName: \"kubernetes.io/projected/252f97d9-adeb-4cce-858d-eb0bdb151871-kube-api-access-kwxgx\") pod \"must-gather-9gqfn\" (UID: \"252f97d9-adeb-4cce-858d-eb0bdb151871\") " pod="openshift-must-gather-jfn52/must-gather-9gqfn" Mar 13 12:43:50 crc kubenswrapper[4632]: I0313 12:43:50.891172 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jfn52/must-gather-9gqfn"] Mar 13 12:43:51 crc kubenswrapper[4632]: I0313 12:43:51.006463 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jfn52/must-gather-9gqfn" Mar 13 12:43:51 crc kubenswrapper[4632]: I0313 12:43:51.491372 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jfn52/must-gather-9gqfn"] Mar 13 12:43:51 crc kubenswrapper[4632]: I0313 12:43:51.638681 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfn52/must-gather-9gqfn" event={"ID":"252f97d9-adeb-4cce-858d-eb0bdb151871","Type":"ContainerStarted","Data":"eb201c9262110becebc0a17449ee45af81bfd07a68b8b76aec66f1966a55fe23"} Mar 13 12:44:00 crc kubenswrapper[4632]: I0313 12:44:00.209573 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556764-9lxh8"] Mar 13 12:44:00 crc kubenswrapper[4632]: I0313 12:44:00.211834 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556764-9lxh8" Mar 13 12:44:00 crc kubenswrapper[4632]: I0313 12:44:00.215902 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:44:00 crc kubenswrapper[4632]: I0313 12:44:00.216132 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:44:00 crc kubenswrapper[4632]: I0313 12:44:00.220396 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556764-9lxh8"] Mar 13 12:44:00 crc kubenswrapper[4632]: I0313 12:44:00.222964 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:44:00 crc kubenswrapper[4632]: I0313 12:44:00.338018 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rqnq\" (UniqueName: \"kubernetes.io/projected/437f55ff-c573-4944-a680-6ac2d168cb0f-kube-api-access-9rqnq\") pod \"auto-csr-approver-29556764-9lxh8\" (UID: \"437f55ff-c573-4944-a680-6ac2d168cb0f\") " pod="openshift-infra/auto-csr-approver-29556764-9lxh8" Mar 13 12:44:00 crc kubenswrapper[4632]: I0313 12:44:00.440056 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rqnq\" (UniqueName: \"kubernetes.io/projected/437f55ff-c573-4944-a680-6ac2d168cb0f-kube-api-access-9rqnq\") pod \"auto-csr-approver-29556764-9lxh8\" (UID: \"437f55ff-c573-4944-a680-6ac2d168cb0f\") " pod="openshift-infra/auto-csr-approver-29556764-9lxh8" Mar 13 12:44:00 crc kubenswrapper[4632]: I0313 12:44:00.463604 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rqnq\" (UniqueName: \"kubernetes.io/projected/437f55ff-c573-4944-a680-6ac2d168cb0f-kube-api-access-9rqnq\") pod \"auto-csr-approver-29556764-9lxh8\" (UID: \"437f55ff-c573-4944-a680-6ac2d168cb0f\") " pod="openshift-infra/auto-csr-approver-29556764-9lxh8" Mar 13 12:44:00 crc kubenswrapper[4632]: I0313 12:44:00.539967 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556764-9lxh8" Mar 13 12:44:01 crc kubenswrapper[4632]: I0313 12:44:01.686130 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556764-9lxh8"] Mar 13 12:44:01 crc kubenswrapper[4632]: W0313 12:44:01.693872 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod437f55ff_c573_4944_a680_6ac2d168cb0f.slice/crio-0a3427c255939c243894f261514eaf59b4474ffa7b09ba5549c6a6b3a1fad4af WatchSource:0}: Error finding container 0a3427c255939c243894f261514eaf59b4474ffa7b09ba5549c6a6b3a1fad4af: Status 404 returned error can't find the container with id 0a3427c255939c243894f261514eaf59b4474ffa7b09ba5549c6a6b3a1fad4af Mar 13 12:44:01 crc kubenswrapper[4632]: I0313 12:44:01.749096 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfn52/must-gather-9gqfn" event={"ID":"252f97d9-adeb-4cce-858d-eb0bdb151871","Type":"ContainerStarted","Data":"8c2cc6936b125830f781b6c13d49ab8294ed023e1773b93feeb9bf3339c7d42f"} Mar 13 12:44:01 crc kubenswrapper[4632]: I0313 12:44:01.749169 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfn52/must-gather-9gqfn" event={"ID":"252f97d9-adeb-4cce-858d-eb0bdb151871","Type":"ContainerStarted","Data":"ce4c408f0cb872f87b4909caa3af1013273fb7803c492e30e5b30666a913955d"} Mar 13 12:44:01 crc kubenswrapper[4632]: I0313 12:44:01.750932 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556764-9lxh8" event={"ID":"437f55ff-c573-4944-a680-6ac2d168cb0f","Type":"ContainerStarted","Data":"0a3427c255939c243894f261514eaf59b4474ffa7b09ba5549c6a6b3a1fad4af"} Mar 13 12:44:01 crc kubenswrapper[4632]: I0313 12:44:01.770776 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jfn52/must-gather-9gqfn" podStartSLOduration=2.198756595 podStartE2EDuration="11.770759827s" podCreationTimestamp="2026-03-13 12:43:50 +0000 UTC" firstStartedPulling="2026-03-13 12:43:51.498559226 +0000 UTC m=+9605.521089359" lastFinishedPulling="2026-03-13 12:44:01.070562458 +0000 UTC m=+9615.093092591" observedRunningTime="2026-03-13 12:44:01.766463182 +0000 UTC m=+9615.788993335" watchObservedRunningTime="2026-03-13 12:44:01.770759827 +0000 UTC m=+9615.793289960" Mar 13 12:44:04 crc kubenswrapper[4632]: I0313 12:44:04.799672 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556764-9lxh8" event={"ID":"437f55ff-c573-4944-a680-6ac2d168cb0f","Type":"ContainerStarted","Data":"aa02f726269acb1a95d7a68005cbfbbe4f481bb481f4612470d155ee5bde6649"} Mar 13 12:44:04 crc kubenswrapper[4632]: I0313 12:44:04.827040 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556764-9lxh8" podStartSLOduration=3.912939635 podStartE2EDuration="4.827016839s" podCreationTimestamp="2026-03-13 12:44:00 +0000 UTC" firstStartedPulling="2026-03-13 12:44:01.696129017 +0000 UTC m=+9615.718659150" lastFinishedPulling="2026-03-13 12:44:02.610206221 +0000 UTC m=+9616.632736354" observedRunningTime="2026-03-13 12:44:04.819819292 +0000 UTC m=+9618.842349465" watchObservedRunningTime="2026-03-13 12:44:04.827016839 +0000 UTC m=+9618.849546982" Mar 13 12:44:05 crc kubenswrapper[4632]: I0313 12:44:05.809923 4632 generic.go:334] "Generic (PLEG): container finished" podID="437f55ff-c573-4944-a680-6ac2d168cb0f" 
containerID="aa02f726269acb1a95d7a68005cbfbbe4f481bb481f4612470d155ee5bde6649" exitCode=0 Mar 13 12:44:05 crc kubenswrapper[4632]: I0313 12:44:05.810104 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556764-9lxh8" event={"ID":"437f55ff-c573-4944-a680-6ac2d168cb0f","Type":"ContainerDied","Data":"aa02f726269acb1a95d7a68005cbfbbe4f481bb481f4612470d155ee5bde6649"} Mar 13 12:44:07 crc kubenswrapper[4632]: I0313 12:44:07.514327 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556764-9lxh8" Mar 13 12:44:07 crc kubenswrapper[4632]: I0313 12:44:07.686358 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rqnq\" (UniqueName: \"kubernetes.io/projected/437f55ff-c573-4944-a680-6ac2d168cb0f-kube-api-access-9rqnq\") pod \"437f55ff-c573-4944-a680-6ac2d168cb0f\" (UID: \"437f55ff-c573-4944-a680-6ac2d168cb0f\") " Mar 13 12:44:07 crc kubenswrapper[4632]: I0313 12:44:07.695977 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/437f55ff-c573-4944-a680-6ac2d168cb0f-kube-api-access-9rqnq" (OuterVolumeSpecName: "kube-api-access-9rqnq") pod "437f55ff-c573-4944-a680-6ac2d168cb0f" (UID: "437f55ff-c573-4944-a680-6ac2d168cb0f"). InnerVolumeSpecName "kube-api-access-9rqnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:44:07 crc kubenswrapper[4632]: I0313 12:44:07.789486 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rqnq\" (UniqueName: \"kubernetes.io/projected/437f55ff-c573-4944-a680-6ac2d168cb0f-kube-api-access-9rqnq\") on node \"crc\" DevicePath \"\"" Mar 13 12:44:07 crc kubenswrapper[4632]: I0313 12:44:07.831092 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556764-9lxh8" event={"ID":"437f55ff-c573-4944-a680-6ac2d168cb0f","Type":"ContainerDied","Data":"0a3427c255939c243894f261514eaf59b4474ffa7b09ba5549c6a6b3a1fad4af"} Mar 13 12:44:07 crc kubenswrapper[4632]: I0313 12:44:07.831357 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a3427c255939c243894f261514eaf59b4474ffa7b09ba5549c6a6b3a1fad4af" Mar 13 12:44:07 crc kubenswrapper[4632]: I0313 12:44:07.831190 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556764-9lxh8" Mar 13 12:44:07 crc kubenswrapper[4632]: I0313 12:44:07.904585 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556758-989sl"] Mar 13 12:44:07 crc kubenswrapper[4632]: I0313 12:44:07.913083 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556758-989sl"] Mar 13 12:44:08 crc kubenswrapper[4632]: I0313 12:44:08.057518 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="564cb4aa-8722-4f69-adeb-16bc8b74bff0" path="/var/lib/kubelet/pods/564cb4aa-8722-4f69-adeb-16bc8b74bff0/volumes" Mar 13 12:44:08 crc kubenswrapper[4632]: I0313 12:44:08.349244 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jfn52/crc-debug-9w6fr"] Mar 13 12:44:08 crc kubenswrapper[4632]: E0313 12:44:08.349729 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="437f55ff-c573-4944-a680-6ac2d168cb0f" containerName="oc" Mar 13 12:44:08 crc kubenswrapper[4632]: I0313 12:44:08.349755 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="437f55ff-c573-4944-a680-6ac2d168cb0f" containerName="oc" Mar 13 12:44:08 crc kubenswrapper[4632]: I0313 12:44:08.350038 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="437f55ff-c573-4944-a680-6ac2d168cb0f" containerName="oc" Mar 13 12:44:08 crc kubenswrapper[4632]: I0313 12:44:08.350884 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfn52/crc-debug-9w6fr" Mar 13 12:44:08 crc kubenswrapper[4632]: I0313 12:44:08.405272 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3ccc7848-e5be-416d-95de-b621b5cc770d-host\") pod \"crc-debug-9w6fr\" (UID: \"3ccc7848-e5be-416d-95de-b621b5cc770d\") " pod="openshift-must-gather-jfn52/crc-debug-9w6fr" Mar 13 12:44:08 crc kubenswrapper[4632]: I0313 12:44:08.405369 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bfjh\" (UniqueName: \"kubernetes.io/projected/3ccc7848-e5be-416d-95de-b621b5cc770d-kube-api-access-7bfjh\") pod \"crc-debug-9w6fr\" (UID: \"3ccc7848-e5be-416d-95de-b621b5cc770d\") " pod="openshift-must-gather-jfn52/crc-debug-9w6fr" Mar 13 12:44:08 crc kubenswrapper[4632]: I0313 12:44:08.506753 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3ccc7848-e5be-416d-95de-b621b5cc770d-host\") pod \"crc-debug-9w6fr\" (UID: \"3ccc7848-e5be-416d-95de-b621b5cc770d\") " pod="openshift-must-gather-jfn52/crc-debug-9w6fr" Mar 13 12:44:08 crc kubenswrapper[4632]: I0313 12:44:08.506880 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bfjh\" (UniqueName: \"kubernetes.io/projected/3ccc7848-e5be-416d-95de-b621b5cc770d-kube-api-access-7bfjh\") pod \"crc-debug-9w6fr\" (UID: \"3ccc7848-e5be-416d-95de-b621b5cc770d\") " pod="openshift-must-gather-jfn52/crc-debug-9w6fr" Mar 13 12:44:08 crc kubenswrapper[4632]: I0313 12:44:08.508169 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3ccc7848-e5be-416d-95de-b621b5cc770d-host\") pod \"crc-debug-9w6fr\" (UID: \"3ccc7848-e5be-416d-95de-b621b5cc770d\") " pod="openshift-must-gather-jfn52/crc-debug-9w6fr" Mar 13 12:44:08 crc kubenswrapper[4632]: I0313 
12:44:08.526309 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bfjh\" (UniqueName: \"kubernetes.io/projected/3ccc7848-e5be-416d-95de-b621b5cc770d-kube-api-access-7bfjh\") pod \"crc-debug-9w6fr\" (UID: \"3ccc7848-e5be-416d-95de-b621b5cc770d\") " pod="openshift-must-gather-jfn52/crc-debug-9w6fr" Mar 13 12:44:08 crc kubenswrapper[4632]: I0313 12:44:08.671079 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfn52/crc-debug-9w6fr" Mar 13 12:44:08 crc kubenswrapper[4632]: I0313 12:44:08.842666 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfn52/crc-debug-9w6fr" event={"ID":"3ccc7848-e5be-416d-95de-b621b5cc770d","Type":"ContainerStarted","Data":"b1c855fe5ef0d99083c07be960529478cbb1140696aa5e6315932510eb584567"} Mar 13 12:44:21 crc kubenswrapper[4632]: I0313 12:44:21.978577 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfn52/crc-debug-9w6fr" event={"ID":"3ccc7848-e5be-416d-95de-b621b5cc770d","Type":"ContainerStarted","Data":"ef8f3145eaca72f4029fdc60f2fb5cc46e463f526ce49ed7c2f109c93d6646f5"} Mar 13 12:44:22 crc kubenswrapper[4632]: I0313 12:44:22.003148 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jfn52/crc-debug-9w6fr" podStartSLOduration=1.288398085 podStartE2EDuration="14.003127299s" podCreationTimestamp="2026-03-13 12:44:08 +0000 UTC" firstStartedPulling="2026-03-13 12:44:08.725595785 +0000 UTC m=+9622.748125918" lastFinishedPulling="2026-03-13 12:44:21.440324999 +0000 UTC m=+9635.462855132" observedRunningTime="2026-03-13 12:44:21.99583384 +0000 UTC m=+9636.018363973" watchObservedRunningTime="2026-03-13 12:44:22.003127299 +0000 UTC m=+9636.025657432" Mar 13 12:44:28 crc kubenswrapper[4632]: I0313 12:44:28.703047 4632 scope.go:117] "RemoveContainer" containerID="e9210685ea8bbb4f63a3e5f03db6817cbf29b557c698d4de6b1ae5929063a7c0" Mar 13 12:44:33 crc kubenswrapper[4632]: I0313 12:44:33.186777 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gzkxx"] Mar 13 12:44:33 crc kubenswrapper[4632]: I0313 12:44:33.193698 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:44:33 crc kubenswrapper[4632]: I0313 12:44:33.195576 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gzkxx"] Mar 13 12:44:33 crc kubenswrapper[4632]: I0313 12:44:33.238434 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/850e7777-b942-4a4a-85ca-355a2ebd2ec9-utilities\") pod \"certified-operators-gzkxx\" (UID: \"850e7777-b942-4a4a-85ca-355a2ebd2ec9\") " pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:44:33 crc kubenswrapper[4632]: I0313 12:44:33.239151 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/850e7777-b942-4a4a-85ca-355a2ebd2ec9-catalog-content\") pod \"certified-operators-gzkxx\" (UID: \"850e7777-b942-4a4a-85ca-355a2ebd2ec9\") " pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:44:33 crc kubenswrapper[4632]: I0313 12:44:33.239267 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmcqj\" (UniqueName: \"kubernetes.io/projected/850e7777-b942-4a4a-85ca-355a2ebd2ec9-kube-api-access-tmcqj\") pod \"certified-operators-gzkxx\" (UID: \"850e7777-b942-4a4a-85ca-355a2ebd2ec9\") " pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:44:33 crc kubenswrapper[4632]: I0313 12:44:33.340368 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/850e7777-b942-4a4a-85ca-355a2ebd2ec9-utilities\") pod \"certified-operators-gzkxx\" (UID: \"850e7777-b942-4a4a-85ca-355a2ebd2ec9\") " pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:44:33 crc kubenswrapper[4632]: I0313 12:44:33.340452 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/850e7777-b942-4a4a-85ca-355a2ebd2ec9-catalog-content\") pod \"certified-operators-gzkxx\" (UID: \"850e7777-b942-4a4a-85ca-355a2ebd2ec9\") " pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:44:33 crc kubenswrapper[4632]: I0313 12:44:33.340541 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmcqj\" (UniqueName: \"kubernetes.io/projected/850e7777-b942-4a4a-85ca-355a2ebd2ec9-kube-api-access-tmcqj\") pod \"certified-operators-gzkxx\" (UID: \"850e7777-b942-4a4a-85ca-355a2ebd2ec9\") " pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:44:33 crc kubenswrapper[4632]: I0313 12:44:33.341015 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/850e7777-b942-4a4a-85ca-355a2ebd2ec9-utilities\") pod \"certified-operators-gzkxx\" (UID: \"850e7777-b942-4a4a-85ca-355a2ebd2ec9\") " pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:44:33 crc kubenswrapper[4632]: I0313 12:44:33.341108 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/850e7777-b942-4a4a-85ca-355a2ebd2ec9-catalog-content\") pod \"certified-operators-gzkxx\" (UID: \"850e7777-b942-4a4a-85ca-355a2ebd2ec9\") " pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:44:33 crc kubenswrapper[4632]: I0313 12:44:33.565272 4632 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tmcqj\" (UniqueName: \"kubernetes.io/projected/850e7777-b942-4a4a-85ca-355a2ebd2ec9-kube-api-access-tmcqj\") pod \"certified-operators-gzkxx\" (UID: \"850e7777-b942-4a4a-85ca-355a2ebd2ec9\") " pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:44:33 crc kubenswrapper[4632]: I0313 12:44:33.817741 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:44:34 crc kubenswrapper[4632]: I0313 12:44:34.966251 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gzkxx"] Mar 13 12:44:36 crc kubenswrapper[4632]: I0313 12:44:36.161922 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkxx" event={"ID":"850e7777-b942-4a4a-85ca-355a2ebd2ec9","Type":"ContainerStarted","Data":"3fe91ad8e992364755fc6ec2dea5c46c659b3d35b19fd4b30b1afd69688bec39"} Mar 13 12:44:38 crc kubenswrapper[4632]: I0313 12:44:38.181577 4632 generic.go:334] "Generic (PLEG): container finished" podID="850e7777-b942-4a4a-85ca-355a2ebd2ec9" containerID="88c9506140b859200b8b7485eae4c4fe840edad812655ef77eef068e2e2b3959" exitCode=0 Mar 13 12:44:38 crc kubenswrapper[4632]: I0313 12:44:38.181682 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkxx" event={"ID":"850e7777-b942-4a4a-85ca-355a2ebd2ec9","Type":"ContainerDied","Data":"88c9506140b859200b8b7485eae4c4fe840edad812655ef77eef068e2e2b3959"} Mar 13 12:44:39 crc kubenswrapper[4632]: I0313 12:44:39.217111 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkxx" event={"ID":"850e7777-b942-4a4a-85ca-355a2ebd2ec9","Type":"ContainerStarted","Data":"58931899bc81756bd316a55670c1933e22c756c6165c637e6a284e4a8b934ab3"} Mar 13 12:44:42 crc kubenswrapper[4632]: I0313 12:44:42.251251 4632 generic.go:334] "Generic (PLEG): container finished" podID="850e7777-b942-4a4a-85ca-355a2ebd2ec9" containerID="58931899bc81756bd316a55670c1933e22c756c6165c637e6a284e4a8b934ab3" exitCode=0 Mar 13 12:44:42 crc kubenswrapper[4632]: I0313 12:44:42.251341 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkxx" event={"ID":"850e7777-b942-4a4a-85ca-355a2ebd2ec9","Type":"ContainerDied","Data":"58931899bc81756bd316a55670c1933e22c756c6165c637e6a284e4a8b934ab3"} Mar 13 12:44:43 crc kubenswrapper[4632]: I0313 12:44:43.269555 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkxx" event={"ID":"850e7777-b942-4a4a-85ca-355a2ebd2ec9","Type":"ContainerStarted","Data":"5b7aa43a76c781e9b29d218c7062f057d96c8d06109d34357a6ad258bc4c6803"} Mar 13 12:44:43 crc kubenswrapper[4632]: I0313 12:44:43.304250 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gzkxx" podStartSLOduration=5.709656055 podStartE2EDuration="10.304228027s" podCreationTimestamp="2026-03-13 12:44:33 +0000 UTC" firstStartedPulling="2026-03-13 12:44:38.183213186 +0000 UTC m=+9652.205743319" lastFinishedPulling="2026-03-13 12:44:42.777785158 +0000 UTC m=+9656.800315291" observedRunningTime="2026-03-13 12:44:43.293711249 +0000 UTC m=+9657.316241382" watchObservedRunningTime="2026-03-13 12:44:43.304228027 +0000 UTC m=+9657.326758160" Mar 13 12:44:43 crc kubenswrapper[4632]: I0313 12:44:43.818439 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:44:43 crc kubenswrapper[4632]: I0313 12:44:43.818873 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:44:44 crc kubenswrapper[4632]: I0313 12:44:44.878100 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gzkxx" podUID="850e7777-b942-4a4a-85ca-355a2ebd2ec9" containerName="registry-server" probeResult="failure" output=< Mar 13 12:44:44 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:44:44 crc kubenswrapper[4632]: > Mar 13 12:44:55 crc kubenswrapper[4632]: I0313 12:44:55.024283 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gzkxx" podUID="850e7777-b942-4a4a-85ca-355a2ebd2ec9" containerName="registry-server" probeResult="failure" output=< Mar 13 12:44:55 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:44:55 crc kubenswrapper[4632]: > Mar 13 12:45:00 crc kubenswrapper[4632]: I0313 12:45:00.157062 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4"] Mar 13 12:45:00 crc kubenswrapper[4632]: I0313 12:45:00.158755 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" Mar 13 12:45:00 crc kubenswrapper[4632]: I0313 12:45:00.160833 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 13 12:45:00 crc kubenswrapper[4632]: I0313 12:45:00.160963 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 13 12:45:00 crc kubenswrapper[4632]: I0313 12:45:00.174756 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4"] Mar 13 12:45:00 crc kubenswrapper[4632]: I0313 12:45:00.314371 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34a8177f-225c-4996-a654-1e50907b3249-config-volume\") pod \"collect-profiles-29556765-2tqx4\" (UID: \"34a8177f-225c-4996-a654-1e50907b3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" Mar 13 12:45:00 crc kubenswrapper[4632]: I0313 12:45:00.314470 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34a8177f-225c-4996-a654-1e50907b3249-secret-volume\") pod \"collect-profiles-29556765-2tqx4\" (UID: \"34a8177f-225c-4996-a654-1e50907b3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" Mar 13 12:45:00 crc kubenswrapper[4632]: I0313 12:45:00.314577 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvvqn\" (UniqueName: \"kubernetes.io/projected/34a8177f-225c-4996-a654-1e50907b3249-kube-api-access-mvvqn\") pod \"collect-profiles-29556765-2tqx4\" (UID: \"34a8177f-225c-4996-a654-1e50907b3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" Mar 13 12:45:00 crc kubenswrapper[4632]: I0313 12:45:00.415764 4632 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34a8177f-225c-4996-a654-1e50907b3249-secret-volume\") pod \"collect-profiles-29556765-2tqx4\" (UID: \"34a8177f-225c-4996-a654-1e50907b3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" Mar 13 12:45:00 crc kubenswrapper[4632]: I0313 12:45:00.415910 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvvqn\" (UniqueName: \"kubernetes.io/projected/34a8177f-225c-4996-a654-1e50907b3249-kube-api-access-mvvqn\") pod \"collect-profiles-29556765-2tqx4\" (UID: \"34a8177f-225c-4996-a654-1e50907b3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" Mar 13 12:45:00 crc kubenswrapper[4632]: I0313 12:45:00.416001 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34a8177f-225c-4996-a654-1e50907b3249-config-volume\") pod \"collect-profiles-29556765-2tqx4\" (UID: \"34a8177f-225c-4996-a654-1e50907b3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" Mar 13 12:45:00 crc kubenswrapper[4632]: I0313 12:45:00.417146 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34a8177f-225c-4996-a654-1e50907b3249-config-volume\") pod \"collect-profiles-29556765-2tqx4\" (UID: \"34a8177f-225c-4996-a654-1e50907b3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" Mar 13 12:45:00 crc kubenswrapper[4632]: I0313 12:45:00.426725 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34a8177f-225c-4996-a654-1e50907b3249-secret-volume\") pod \"collect-profiles-29556765-2tqx4\" (UID: \"34a8177f-225c-4996-a654-1e50907b3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" Mar 13 12:45:00 crc kubenswrapper[4632]: I0313 12:45:00.442094 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvvqn\" (UniqueName: \"kubernetes.io/projected/34a8177f-225c-4996-a654-1e50907b3249-kube-api-access-mvvqn\") pod \"collect-profiles-29556765-2tqx4\" (UID: \"34a8177f-225c-4996-a654-1e50907b3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" Mar 13 12:45:00 crc kubenswrapper[4632]: I0313 12:45:00.493520 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" Mar 13 12:45:01 crc kubenswrapper[4632]: I0313 12:45:01.109010 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4"] Mar 13 12:45:01 crc kubenswrapper[4632]: I0313 12:45:01.433617 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" event={"ID":"34a8177f-225c-4996-a654-1e50907b3249","Type":"ContainerStarted","Data":"b0423b15daf52bcd3217d9d6cdc18f717ea2ddda79e23e83c7754ec9daf0796a"} Mar 13 12:45:01 crc kubenswrapper[4632]: I0313 12:45:01.436014 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" event={"ID":"34a8177f-225c-4996-a654-1e50907b3249","Type":"ContainerStarted","Data":"2c03855a67ca775082742718ba4da7ba63b64bd75f781b18a29c8d5763aa88db"} Mar 13 12:45:01 crc kubenswrapper[4632]: I0313 12:45:01.456618 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" podStartSLOduration=1.456587015 podStartE2EDuration="1.456587015s" podCreationTimestamp="2026-03-13 12:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:45:01.448965098 +0000 UTC m=+9675.471495251" watchObservedRunningTime="2026-03-13 12:45:01.456587015 +0000 UTC m=+9675.479117138" Mar 13 12:45:02 crc kubenswrapper[4632]: I0313 12:45:02.444957 4632 generic.go:334] "Generic (PLEG): container finished" podID="34a8177f-225c-4996-a654-1e50907b3249" containerID="b0423b15daf52bcd3217d9d6cdc18f717ea2ddda79e23e83c7754ec9daf0796a" exitCode=0 Mar 13 12:45:02 crc kubenswrapper[4632]: I0313 12:45:02.445086 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" event={"ID":"34a8177f-225c-4996-a654-1e50907b3249","Type":"ContainerDied","Data":"b0423b15daf52bcd3217d9d6cdc18f717ea2ddda79e23e83c7754ec9daf0796a"} Mar 13 12:45:03 crc kubenswrapper[4632]: I0313 12:45:03.882078 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" Mar 13 12:45:03 crc kubenswrapper[4632]: I0313 12:45:03.919446 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:45:03 crc kubenswrapper[4632]: I0313 12:45:03.985147 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:45:03 crc kubenswrapper[4632]: I0313 12:45:03.988950 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34a8177f-225c-4996-a654-1e50907b3249-config-volume\") pod \"34a8177f-225c-4996-a654-1e50907b3249\" (UID: \"34a8177f-225c-4996-a654-1e50907b3249\") " Mar 13 12:45:03 crc kubenswrapper[4632]: I0313 12:45:03.989016 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34a8177f-225c-4996-a654-1e50907b3249-secret-volume\") pod \"34a8177f-225c-4996-a654-1e50907b3249\" (UID: \"34a8177f-225c-4996-a654-1e50907b3249\") " Mar 13 12:45:03 crc kubenswrapper[4632]: I0313 12:45:03.989078 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvvqn\" (UniqueName: \"kubernetes.io/projected/34a8177f-225c-4996-a654-1e50907b3249-kube-api-access-mvvqn\") pod \"34a8177f-225c-4996-a654-1e50907b3249\" (UID: \"34a8177f-225c-4996-a654-1e50907b3249\") " Mar 13 12:45:03 crc kubenswrapper[4632]: I0313 12:45:03.990696 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34a8177f-225c-4996-a654-1e50907b3249-config-volume" (OuterVolumeSpecName: "config-volume") pod "34a8177f-225c-4996-a654-1e50907b3249" (UID: "34a8177f-225c-4996-a654-1e50907b3249"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:45:03 crc kubenswrapper[4632]: I0313 12:45:03.998251 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a8177f-225c-4996-a654-1e50907b3249-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "34a8177f-225c-4996-a654-1e50907b3249" (UID: "34a8177f-225c-4996-a654-1e50907b3249"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:45:03 crc kubenswrapper[4632]: I0313 12:45:03.999947 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a8177f-225c-4996-a654-1e50907b3249-kube-api-access-mvvqn" (OuterVolumeSpecName: "kube-api-access-mvvqn") pod "34a8177f-225c-4996-a654-1e50907b3249" (UID: "34a8177f-225c-4996-a654-1e50907b3249"). InnerVolumeSpecName "kube-api-access-mvvqn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:45:04 crc kubenswrapper[4632]: I0313 12:45:04.091291 4632 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34a8177f-225c-4996-a654-1e50907b3249-config-volume\") on node \"crc\" DevicePath \"\"" Mar 13 12:45:04 crc kubenswrapper[4632]: I0313 12:45:04.091337 4632 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34a8177f-225c-4996-a654-1e50907b3249-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 13 12:45:04 crc kubenswrapper[4632]: I0313 12:45:04.091351 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvvqn\" (UniqueName: \"kubernetes.io/projected/34a8177f-225c-4996-a654-1e50907b3249-kube-api-access-mvvqn\") on node \"crc\" DevicePath \"\"" Mar 13 12:45:04 crc kubenswrapper[4632]: I0313 12:45:04.410927 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gzkxx"] Mar 13 12:45:04 crc kubenswrapper[4632]: I0313 12:45:04.465665 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" event={"ID":"34a8177f-225c-4996-a654-1e50907b3249","Type":"ContainerDied","Data":"2c03855a67ca775082742718ba4da7ba63b64bd75f781b18a29c8d5763aa88db"} Mar 13 12:45:04 crc kubenswrapper[4632]: I0313 12:45:04.465714 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c03855a67ca775082742718ba4da7ba63b64bd75f781b18a29c8d5763aa88db" Mar 13 12:45:04 crc kubenswrapper[4632]: I0313 12:45:04.465715 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29556765-2tqx4" Mar 13 12:45:04 crc kubenswrapper[4632]: I0313 12:45:04.549966 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7"] Mar 13 12:45:04 crc kubenswrapper[4632]: I0313 12:45:04.559416 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29556720-9qfd7"] Mar 13 12:45:05 crc kubenswrapper[4632]: I0313 12:45:05.473866 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gzkxx" podUID="850e7777-b942-4a4a-85ca-355a2ebd2ec9" containerName="registry-server" containerID="cri-o://5b7aa43a76c781e9b29d218c7062f057d96c8d06109d34357a6ad258bc4c6803" gracePeriod=2 Mar 13 12:45:05 crc kubenswrapper[4632]: I0313 12:45:05.973752 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.058158 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="453f2bd4-a723-4b7f-9b06-05d75e8df7b8" path="/var/lib/kubelet/pods/453f2bd4-a723-4b7f-9b06-05d75e8df7b8/volumes" Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.128696 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/850e7777-b942-4a4a-85ca-355a2ebd2ec9-utilities\") pod \"850e7777-b942-4a4a-85ca-355a2ebd2ec9\" (UID: \"850e7777-b942-4a4a-85ca-355a2ebd2ec9\") " Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.128871 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmcqj\" (UniqueName: \"kubernetes.io/projected/850e7777-b942-4a4a-85ca-355a2ebd2ec9-kube-api-access-tmcqj\") pod \"850e7777-b942-4a4a-85ca-355a2ebd2ec9\" (UID: \"850e7777-b942-4a4a-85ca-355a2ebd2ec9\") " Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.128959 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/850e7777-b942-4a4a-85ca-355a2ebd2ec9-catalog-content\") pod \"850e7777-b942-4a4a-85ca-355a2ebd2ec9\" (UID: \"850e7777-b942-4a4a-85ca-355a2ebd2ec9\") " Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.129470 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/850e7777-b942-4a4a-85ca-355a2ebd2ec9-utilities" (OuterVolumeSpecName: "utilities") pod "850e7777-b942-4a4a-85ca-355a2ebd2ec9" (UID: "850e7777-b942-4a4a-85ca-355a2ebd2ec9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.190003 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/850e7777-b942-4a4a-85ca-355a2ebd2ec9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "850e7777-b942-4a4a-85ca-355a2ebd2ec9" (UID: "850e7777-b942-4a4a-85ca-355a2ebd2ec9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.230726 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/850e7777-b942-4a4a-85ca-355a2ebd2ec9-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.230763 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/850e7777-b942-4a4a-85ca-355a2ebd2ec9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.464215 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/850e7777-b942-4a4a-85ca-355a2ebd2ec9-kube-api-access-tmcqj" (OuterVolumeSpecName: "kube-api-access-tmcqj") pod "850e7777-b942-4a4a-85ca-355a2ebd2ec9" (UID: "850e7777-b942-4a4a-85ca-355a2ebd2ec9"). InnerVolumeSpecName "kube-api-access-tmcqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.486381 4632 generic.go:334] "Generic (PLEG): container finished" podID="850e7777-b942-4a4a-85ca-355a2ebd2ec9" containerID="5b7aa43a76c781e9b29d218c7062f057d96c8d06109d34357a6ad258bc4c6803" exitCode=0 Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.486425 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkxx" event={"ID":"850e7777-b942-4a4a-85ca-355a2ebd2ec9","Type":"ContainerDied","Data":"5b7aa43a76c781e9b29d218c7062f057d96c8d06109d34357a6ad258bc4c6803"} Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.486458 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzkxx" Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.486477 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkxx" event={"ID":"850e7777-b942-4a4a-85ca-355a2ebd2ec9","Type":"ContainerDied","Data":"3fe91ad8e992364755fc6ec2dea5c46c659b3d35b19fd4b30b1afd69688bec39"} Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.486500 4632 scope.go:117] "RemoveContainer" containerID="5b7aa43a76c781e9b29d218c7062f057d96c8d06109d34357a6ad258bc4c6803" Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.536399 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmcqj\" (UniqueName: \"kubernetes.io/projected/850e7777-b942-4a4a-85ca-355a2ebd2ec9-kube-api-access-tmcqj\") on node \"crc\" DevicePath \"\"" Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.546949 4632 scope.go:117] "RemoveContainer" containerID="58931899bc81756bd316a55670c1933e22c756c6165c637e6a284e4a8b934ab3" Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.558146 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gzkxx"] Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.572770 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gzkxx"] Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.583637 4632 scope.go:117] "RemoveContainer" containerID="88c9506140b859200b8b7485eae4c4fe840edad812655ef77eef068e2e2b3959" Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.641975 4632 scope.go:117] "RemoveContainer" containerID="5b7aa43a76c781e9b29d218c7062f057d96c8d06109d34357a6ad258bc4c6803" Mar 13 12:45:06 crc kubenswrapper[4632]: E0313 12:45:06.644747 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b7aa43a76c781e9b29d218c7062f057d96c8d06109d34357a6ad258bc4c6803\": container with ID starting with 5b7aa43a76c781e9b29d218c7062f057d96c8d06109d34357a6ad258bc4c6803 not found: ID does not exist" containerID="5b7aa43a76c781e9b29d218c7062f057d96c8d06109d34357a6ad258bc4c6803" Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.644807 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b7aa43a76c781e9b29d218c7062f057d96c8d06109d34357a6ad258bc4c6803"} err="failed to get container status \"5b7aa43a76c781e9b29d218c7062f057d96c8d06109d34357a6ad258bc4c6803\": rpc error: code = NotFound desc = could not find container \"5b7aa43a76c781e9b29d218c7062f057d96c8d06109d34357a6ad258bc4c6803\": container with ID starting with 5b7aa43a76c781e9b29d218c7062f057d96c8d06109d34357a6ad258bc4c6803 not found: ID does not exist" 
Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.644840 4632 scope.go:117] "RemoveContainer" containerID="58931899bc81756bd316a55670c1933e22c756c6165c637e6a284e4a8b934ab3"
Mar 13 12:45:06 crc kubenswrapper[4632]: E0313 12:45:06.645322 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58931899bc81756bd316a55670c1933e22c756c6165c637e6a284e4a8b934ab3\": container with ID starting with 58931899bc81756bd316a55670c1933e22c756c6165c637e6a284e4a8b934ab3 not found: ID does not exist" containerID="58931899bc81756bd316a55670c1933e22c756c6165c637e6a284e4a8b934ab3"
Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.645363 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58931899bc81756bd316a55670c1933e22c756c6165c637e6a284e4a8b934ab3"} err="failed to get container status \"58931899bc81756bd316a55670c1933e22c756c6165c637e6a284e4a8b934ab3\": rpc error: code = NotFound desc = could not find container \"58931899bc81756bd316a55670c1933e22c756c6165c637e6a284e4a8b934ab3\": container with ID starting with 58931899bc81756bd316a55670c1933e22c756c6165c637e6a284e4a8b934ab3 not found: ID does not exist"
Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.645388 4632 scope.go:117] "RemoveContainer" containerID="88c9506140b859200b8b7485eae4c4fe840edad812655ef77eef068e2e2b3959"
Mar 13 12:45:06 crc kubenswrapper[4632]: E0313 12:45:06.645811 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88c9506140b859200b8b7485eae4c4fe840edad812655ef77eef068e2e2b3959\": container with ID starting with 88c9506140b859200b8b7485eae4c4fe840edad812655ef77eef068e2e2b3959 not found: ID does not exist" containerID="88c9506140b859200b8b7485eae4c4fe840edad812655ef77eef068e2e2b3959"
Mar 13 12:45:06 crc kubenswrapper[4632]: I0313 12:45:06.645833 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c9506140b859200b8b7485eae4c4fe840edad812655ef77eef068e2e2b3959"} err="failed to get container status \"88c9506140b859200b8b7485eae4c4fe840edad812655ef77eef068e2e2b3959\": rpc error: code = NotFound desc = could not find container \"88c9506140b859200b8b7485eae4c4fe840edad812655ef77eef068e2e2b3959\": container with ID starting with 88c9506140b859200b8b7485eae4c4fe840edad812655ef77eef068e2e2b3959 not found: ID does not exist"
Mar 13 12:45:08 crc kubenswrapper[4632]: I0313 12:45:08.058244 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="850e7777-b942-4a4a-85ca-355a2ebd2ec9" path="/var/lib/kubelet/pods/850e7777-b942-4a4a-85ca-355a2ebd2ec9/volumes"
Mar 13 12:45:10 crc kubenswrapper[4632]: I0313 12:45:10.460735 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 12:45:10 crc kubenswrapper[4632]: I0313 12:45:10.462207 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 12:45:18 crc kubenswrapper[4632]: I0313 12:45:18.600039 4632 generic.go:334] "Generic (PLEG): container finished" podID="3ccc7848-e5be-416d-95de-b621b5cc770d" containerID="ef8f3145eaca72f4029fdc60f2fb5cc46e463f526ce49ed7c2f109c93d6646f5" exitCode=0
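The liveness failure above is an HTTP GET probe against the machine-config-daemon's health endpoint being refused outright: nothing was accepting connections on 127.0.0.1:8798 at that moment. The same check approximated in Python, with the URL taken from the log (the success rule, any 2xx/3xx status, follows standard httpGet probe semantics; everything else is illustrative):

    import urllib.request, urllib.error

    def liveness_probe(url="http://127.0.0.1:8798/health", timeout=1.0) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400
        except (urllib.error.URLError, OSError):
            # e.g. 'dial tcp 127.0.0.1:8798: connect: connection refused'
            return False

    print("healthy" if liveness_probe() else "failure")

A refused connection, as here, fails the probe immediately rather than waiting out the timeout.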
"Generic (PLEG): container finished" podID="3ccc7848-e5be-416d-95de-b621b5cc770d" containerID="ef8f3145eaca72f4029fdc60f2fb5cc46e463f526ce49ed7c2f109c93d6646f5" exitCode=0 Mar 13 12:45:18 crc kubenswrapper[4632]: I0313 12:45:18.600274 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfn52/crc-debug-9w6fr" event={"ID":"3ccc7848-e5be-416d-95de-b621b5cc770d","Type":"ContainerDied","Data":"ef8f3145eaca72f4029fdc60f2fb5cc46e463f526ce49ed7c2f109c93d6646f5"} Mar 13 12:45:19 crc kubenswrapper[4632]: I0313 12:45:19.745139 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfn52/crc-debug-9w6fr" Mar 13 12:45:19 crc kubenswrapper[4632]: I0313 12:45:19.791701 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jfn52/crc-debug-9w6fr"] Mar 13 12:45:19 crc kubenswrapper[4632]: I0313 12:45:19.803493 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jfn52/crc-debug-9w6fr"] Mar 13 12:45:19 crc kubenswrapper[4632]: I0313 12:45:19.873905 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3ccc7848-e5be-416d-95de-b621b5cc770d-host\") pod \"3ccc7848-e5be-416d-95de-b621b5cc770d\" (UID: \"3ccc7848-e5be-416d-95de-b621b5cc770d\") " Mar 13 12:45:19 crc kubenswrapper[4632]: I0313 12:45:19.874041 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ccc7848-e5be-416d-95de-b621b5cc770d-host" (OuterVolumeSpecName: "host") pod "3ccc7848-e5be-416d-95de-b621b5cc770d" (UID: "3ccc7848-e5be-416d-95de-b621b5cc770d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:45:19 crc kubenswrapper[4632]: I0313 12:45:19.874080 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bfjh\" (UniqueName: \"kubernetes.io/projected/3ccc7848-e5be-416d-95de-b621b5cc770d-kube-api-access-7bfjh\") pod \"3ccc7848-e5be-416d-95de-b621b5cc770d\" (UID: \"3ccc7848-e5be-416d-95de-b621b5cc770d\") " Mar 13 12:45:19 crc kubenswrapper[4632]: I0313 12:45:19.874536 4632 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3ccc7848-e5be-416d-95de-b621b5cc770d-host\") on node \"crc\" DevicePath \"\"" Mar 13 12:45:19 crc kubenswrapper[4632]: I0313 12:45:19.882547 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ccc7848-e5be-416d-95de-b621b5cc770d-kube-api-access-7bfjh" (OuterVolumeSpecName: "kube-api-access-7bfjh") pod "3ccc7848-e5be-416d-95de-b621b5cc770d" (UID: "3ccc7848-e5be-416d-95de-b621b5cc770d"). InnerVolumeSpecName "kube-api-access-7bfjh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:45:19 crc kubenswrapper[4632]: I0313 12:45:19.975956 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bfjh\" (UniqueName: \"kubernetes.io/projected/3ccc7848-e5be-416d-95de-b621b5cc770d-kube-api-access-7bfjh\") on node \"crc\" DevicePath \"\"" Mar 13 12:45:20 crc kubenswrapper[4632]: I0313 12:45:20.059301 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ccc7848-e5be-416d-95de-b621b5cc770d" path="/var/lib/kubelet/pods/3ccc7848-e5be-416d-95de-b621b5cc770d/volumes" Mar 13 12:45:20 crc kubenswrapper[4632]: I0313 12:45:20.629120 4632 scope.go:117] "RemoveContainer" containerID="ef8f3145eaca72f4029fdc60f2fb5cc46e463f526ce49ed7c2f109c93d6646f5" Mar 13 12:45:20 crc kubenswrapper[4632]: I0313 12:45:20.629131 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfn52/crc-debug-9w6fr" Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.033089 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jfn52/crc-debug-mm9wk"] Mar 13 12:45:21 crc kubenswrapper[4632]: E0313 12:45:21.033607 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ccc7848-e5be-416d-95de-b621b5cc770d" containerName="container-00" Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.034296 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ccc7848-e5be-416d-95de-b621b5cc770d" containerName="container-00" Mar 13 12:45:21 crc kubenswrapper[4632]: E0313 12:45:21.034334 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a8177f-225c-4996-a654-1e50907b3249" containerName="collect-profiles" Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.034346 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a8177f-225c-4996-a654-1e50907b3249" containerName="collect-profiles" Mar 13 12:45:21 crc kubenswrapper[4632]: E0313 12:45:21.034369 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850e7777-b942-4a4a-85ca-355a2ebd2ec9" containerName="extract-content" Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.034378 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="850e7777-b942-4a4a-85ca-355a2ebd2ec9" containerName="extract-content" Mar 13 12:45:21 crc kubenswrapper[4632]: E0313 12:45:21.034403 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850e7777-b942-4a4a-85ca-355a2ebd2ec9" containerName="extract-utilities" Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.034412 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="850e7777-b942-4a4a-85ca-355a2ebd2ec9" containerName="extract-utilities" Mar 13 12:45:21 crc kubenswrapper[4632]: E0313 12:45:21.034434 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850e7777-b942-4a4a-85ca-355a2ebd2ec9" containerName="registry-server" Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.034443 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="850e7777-b942-4a4a-85ca-355a2ebd2ec9" containerName="registry-server" Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.034761 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="850e7777-b942-4a4a-85ca-355a2ebd2ec9" containerName="registry-server" Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.034805 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ccc7848-e5be-416d-95de-b621b5cc770d" containerName="container-00" Mar 13 12:45:21 crc 
kubenswrapper[4632]: I0313 12:45:21.034817 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a8177f-225c-4996-a654-1e50907b3249" containerName="collect-profiles"
Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.035699 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfn52/crc-debug-mm9wk"
Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.096276 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae-host\") pod \"crc-debug-mm9wk\" (UID: \"7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae\") " pod="openshift-must-gather-jfn52/crc-debug-mm9wk"
Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.096329 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc7bg\" (UniqueName: \"kubernetes.io/projected/7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae-kube-api-access-gc7bg\") pod \"crc-debug-mm9wk\" (UID: \"7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae\") " pod="openshift-must-gather-jfn52/crc-debug-mm9wk"
Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.197695 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae-host\") pod \"crc-debug-mm9wk\" (UID: \"7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae\") " pod="openshift-must-gather-jfn52/crc-debug-mm9wk"
Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.198039 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc7bg\" (UniqueName: \"kubernetes.io/projected/7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae-kube-api-access-gc7bg\") pod \"crc-debug-mm9wk\" (UID: \"7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae\") " pod="openshift-must-gather-jfn52/crc-debug-mm9wk"
Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.199760 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae-host\") pod \"crc-debug-mm9wk\" (UID: \"7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae\") " pod="openshift-must-gather-jfn52/crc-debug-mm9wk"
Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.217405 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc7bg\" (UniqueName: \"kubernetes.io/projected/7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae-kube-api-access-gc7bg\") pod \"crc-debug-mm9wk\" (UID: \"7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae\") " pod="openshift-must-gather-jfn52/crc-debug-mm9wk"
Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.351412 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfn52/crc-debug-mm9wk"
Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.639554 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfn52/crc-debug-mm9wk" event={"ID":"7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae","Type":"ContainerStarted","Data":"f9576f389f75db79c6cd02f685bff29c0e4ed007b62591df661e2a8ee57c8ce2"}
Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.639599 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfn52/crc-debug-mm9wk" event={"ID":"7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae","Type":"ContainerStarted","Data":"f533186bae84ac6e618c29157ca171a61d7d3023adf769459e9d86f07e720694"}
Mar 13 12:45:21 crc kubenswrapper[4632]: I0313 12:45:21.672740 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jfn52/crc-debug-mm9wk" podStartSLOduration=0.672717328 podStartE2EDuration="672.717328ms" podCreationTimestamp="2026-03-13 12:45:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:45:21.659213896 +0000 UTC m=+9695.681744049" watchObservedRunningTime="2026-03-13 12:45:21.672717328 +0000 UTC m=+9695.695247471"
Mar 13 12:45:22 crc kubenswrapper[4632]: I0313 12:45:22.695320 4632 generic.go:334] "Generic (PLEG): container finished" podID="7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae" containerID="f9576f389f75db79c6cd02f685bff29c0e4ed007b62591df661e2a8ee57c8ce2" exitCode=0
Mar 13 12:45:22 crc kubenswrapper[4632]: I0313 12:45:22.696668 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfn52/crc-debug-mm9wk" event={"ID":"7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae","Type":"ContainerDied","Data":"f9576f389f75db79c6cd02f685bff29c0e4ed007b62591df661e2a8ee57c8ce2"}
Mar 13 12:45:23 crc kubenswrapper[4632]: I0313 12:45:23.862476 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfn52/crc-debug-mm9wk"
Mar 13 12:45:24 crc kubenswrapper[4632]: I0313 12:45:24.005007 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae-host\") pod \"7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae\" (UID: \"7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae\") "
Mar 13 12:45:24 crc kubenswrapper[4632]: I0313 12:45:24.005172 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae-host" (OuterVolumeSpecName: "host") pod "7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae" (UID: "7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:45:24 crc kubenswrapper[4632]: I0313 12:45:24.005207 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc7bg\" (UniqueName: \"kubernetes.io/projected/7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae-kube-api-access-gc7bg\") pod \"7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae\" (UID: \"7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae\") "
Mar 13 12:45:24 crc kubenswrapper[4632]: I0313 12:45:24.005776 4632 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae-host\") on node \"crc\" DevicePath \"\""
Mar 13 12:45:24 crc kubenswrapper[4632]: I0313 12:45:24.016096 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae-kube-api-access-gc7bg" (OuterVolumeSpecName: "kube-api-access-gc7bg") pod "7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae" (UID: "7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae"). InnerVolumeSpecName "kube-api-access-gc7bg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:45:24 crc kubenswrapper[4632]: I0313 12:45:24.124533 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc7bg\" (UniqueName: \"kubernetes.io/projected/7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae-kube-api-access-gc7bg\") on node \"crc\" DevicePath \"\""
Mar 13 12:45:24 crc kubenswrapper[4632]: I0313 12:45:24.222009 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jfn52/crc-debug-mm9wk"]
Mar 13 12:45:24 crc kubenswrapper[4632]: I0313 12:45:24.233711 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jfn52/crc-debug-mm9wk"]
Mar 13 12:45:24 crc kubenswrapper[4632]: I0313 12:45:24.713673 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f533186bae84ac6e618c29157ca171a61d7d3023adf769459e9d86f07e720694"
Mar 13 12:45:24 crc kubenswrapper[4632]: I0313 12:45:24.713891 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfn52/crc-debug-mm9wk"
Mar 13 12:45:25 crc kubenswrapper[4632]: I0313 12:45:25.453862 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jfn52/crc-debug-wv7nx"]
Mar 13 12:45:25 crc kubenswrapper[4632]: E0313 12:45:25.454295 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae" containerName="container-00"
Mar 13 12:45:25 crc kubenswrapper[4632]: I0313 12:45:25.454308 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae" containerName="container-00"
Mar 13 12:45:25 crc kubenswrapper[4632]: I0313 12:45:25.454482 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae" containerName="container-00"
Mar 13 12:45:25 crc kubenswrapper[4632]: I0313 12:45:25.455349 4632 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-jfn52/crc-debug-wv7nx" Mar 13 12:45:25 crc kubenswrapper[4632]: I0313 12:45:25.552523 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/84dbf8f9-b22a-4cd1-8589-6c437bc73a36-host\") pod \"crc-debug-wv7nx\" (UID: \"84dbf8f9-b22a-4cd1-8589-6c437bc73a36\") " pod="openshift-must-gather-jfn52/crc-debug-wv7nx" Mar 13 12:45:25 crc kubenswrapper[4632]: I0313 12:45:25.552594 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jgfj\" (UniqueName: \"kubernetes.io/projected/84dbf8f9-b22a-4cd1-8589-6c437bc73a36-kube-api-access-7jgfj\") pod \"crc-debug-wv7nx\" (UID: \"84dbf8f9-b22a-4cd1-8589-6c437bc73a36\") " pod="openshift-must-gather-jfn52/crc-debug-wv7nx" Mar 13 12:45:25 crc kubenswrapper[4632]: I0313 12:45:25.654730 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/84dbf8f9-b22a-4cd1-8589-6c437bc73a36-host\") pod \"crc-debug-wv7nx\" (UID: \"84dbf8f9-b22a-4cd1-8589-6c437bc73a36\") " pod="openshift-must-gather-jfn52/crc-debug-wv7nx" Mar 13 12:45:25 crc kubenswrapper[4632]: I0313 12:45:25.655167 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jgfj\" (UniqueName: \"kubernetes.io/projected/84dbf8f9-b22a-4cd1-8589-6c437bc73a36-kube-api-access-7jgfj\") pod \"crc-debug-wv7nx\" (UID: \"84dbf8f9-b22a-4cd1-8589-6c437bc73a36\") " pod="openshift-must-gather-jfn52/crc-debug-wv7nx" Mar 13 12:45:25 crc kubenswrapper[4632]: I0313 12:45:25.654882 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/84dbf8f9-b22a-4cd1-8589-6c437bc73a36-host\") pod \"crc-debug-wv7nx\" (UID: \"84dbf8f9-b22a-4cd1-8589-6c437bc73a36\") " pod="openshift-must-gather-jfn52/crc-debug-wv7nx" Mar 13 12:45:25 crc kubenswrapper[4632]: I0313 12:45:25.680698 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jgfj\" (UniqueName: \"kubernetes.io/projected/84dbf8f9-b22a-4cd1-8589-6c437bc73a36-kube-api-access-7jgfj\") pod \"crc-debug-wv7nx\" (UID: \"84dbf8f9-b22a-4cd1-8589-6c437bc73a36\") " pod="openshift-must-gather-jfn52/crc-debug-wv7nx" Mar 13 12:45:25 crc kubenswrapper[4632]: I0313 12:45:25.777402 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jfn52/crc-debug-wv7nx" Mar 13 12:45:26 crc kubenswrapper[4632]: I0313 12:45:26.083725 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae" path="/var/lib/kubelet/pods/7a5eb284-6ea3-45af-bd45-6ba4aa90b9ae/volumes" Mar 13 12:45:26 crc kubenswrapper[4632]: I0313 12:45:26.736354 4632 generic.go:334] "Generic (PLEG): container finished" podID="84dbf8f9-b22a-4cd1-8589-6c437bc73a36" containerID="8c80829077510548bea49d9ed53848049ef8179bb726bd5562c95f37f2977880" exitCode=0 Mar 13 12:45:26 crc kubenswrapper[4632]: I0313 12:45:26.736449 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfn52/crc-debug-wv7nx" event={"ID":"84dbf8f9-b22a-4cd1-8589-6c437bc73a36","Type":"ContainerDied","Data":"8c80829077510548bea49d9ed53848049ef8179bb726bd5562c95f37f2977880"} Mar 13 12:45:26 crc kubenswrapper[4632]: I0313 12:45:26.736655 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfn52/crc-debug-wv7nx" event={"ID":"84dbf8f9-b22a-4cd1-8589-6c437bc73a36","Type":"ContainerStarted","Data":"adceec6a78211534843923e49837eb34cf6d7f0ef86a1e17c8ecd2fc03018512"} Mar 13 12:45:26 crc kubenswrapper[4632]: I0313 12:45:26.778307 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jfn52/crc-debug-wv7nx"] Mar 13 12:45:26 crc kubenswrapper[4632]: I0313 12:45:26.789921 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jfn52/crc-debug-wv7nx"] Mar 13 12:45:27 crc kubenswrapper[4632]: I0313 12:45:27.847658 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfn52/crc-debug-wv7nx" Mar 13 12:45:28 crc kubenswrapper[4632]: I0313 12:45:28.015507 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jgfj\" (UniqueName: \"kubernetes.io/projected/84dbf8f9-b22a-4cd1-8589-6c437bc73a36-kube-api-access-7jgfj\") pod \"84dbf8f9-b22a-4cd1-8589-6c437bc73a36\" (UID: \"84dbf8f9-b22a-4cd1-8589-6c437bc73a36\") " Mar 13 12:45:28 crc kubenswrapper[4632]: I0313 12:45:28.015854 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/84dbf8f9-b22a-4cd1-8589-6c437bc73a36-host\") pod \"84dbf8f9-b22a-4cd1-8589-6c437bc73a36\" (UID: \"84dbf8f9-b22a-4cd1-8589-6c437bc73a36\") " Mar 13 12:45:28 crc kubenswrapper[4632]: I0313 12:45:28.016000 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84dbf8f9-b22a-4cd1-8589-6c437bc73a36-host" (OuterVolumeSpecName: "host") pod "84dbf8f9-b22a-4cd1-8589-6c437bc73a36" (UID: "84dbf8f9-b22a-4cd1-8589-6c437bc73a36"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:45:28 crc kubenswrapper[4632]: I0313 12:45:28.016619 4632 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/84dbf8f9-b22a-4cd1-8589-6c437bc73a36-host\") on node \"crc\" DevicePath \"\"" Mar 13 12:45:28 crc kubenswrapper[4632]: I0313 12:45:28.023208 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84dbf8f9-b22a-4cd1-8589-6c437bc73a36-kube-api-access-7jgfj" (OuterVolumeSpecName: "kube-api-access-7jgfj") pod "84dbf8f9-b22a-4cd1-8589-6c437bc73a36" (UID: "84dbf8f9-b22a-4cd1-8589-6c437bc73a36"). InnerVolumeSpecName "kube-api-access-7jgfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:45:28 crc kubenswrapper[4632]: I0313 12:45:28.055325 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84dbf8f9-b22a-4cd1-8589-6c437bc73a36" path="/var/lib/kubelet/pods/84dbf8f9-b22a-4cd1-8589-6c437bc73a36/volumes" Mar 13 12:45:28 crc kubenswrapper[4632]: I0313 12:45:28.118231 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jgfj\" (UniqueName: \"kubernetes.io/projected/84dbf8f9-b22a-4cd1-8589-6c437bc73a36-kube-api-access-7jgfj\") on node \"crc\" DevicePath \"\"" Mar 13 12:45:28 crc kubenswrapper[4632]: I0313 12:45:28.755165 4632 scope.go:117] "RemoveContainer" containerID="8c80829077510548bea49d9ed53848049ef8179bb726bd5562c95f37f2977880" Mar 13 12:45:28 crc kubenswrapper[4632]: I0313 12:45:28.755192 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfn52/crc-debug-wv7nx" Mar 13 12:45:29 crc kubenswrapper[4632]: I0313 12:45:29.137363 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jbtjg"] Mar 13 12:45:29 crc kubenswrapper[4632]: E0313 12:45:29.138223 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84dbf8f9-b22a-4cd1-8589-6c437bc73a36" containerName="container-00" Mar 13 12:45:29 crc kubenswrapper[4632]: I0313 12:45:29.138246 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="84dbf8f9-b22a-4cd1-8589-6c437bc73a36" containerName="container-00" Mar 13 12:45:29 crc kubenswrapper[4632]: I0313 12:45:29.138517 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="84dbf8f9-b22a-4cd1-8589-6c437bc73a36" containerName="container-00" Mar 13 12:45:29 crc kubenswrapper[4632]: I0313 12:45:29.139865 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:29 crc kubenswrapper[4632]: I0313 12:45:29.157153 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbtjg"] Mar 13 12:45:29 crc kubenswrapper[4632]: I0313 12:45:29.239743 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e1bcb1-81b1-42c2-a625-fe691fe60434-utilities\") pod \"redhat-marketplace-jbtjg\" (UID: \"90e1bcb1-81b1-42c2-a625-fe691fe60434\") " pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:29 crc kubenswrapper[4632]: I0313 12:45:29.239788 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrqsw\" (UniqueName: \"kubernetes.io/projected/90e1bcb1-81b1-42c2-a625-fe691fe60434-kube-api-access-nrqsw\") pod \"redhat-marketplace-jbtjg\" (UID: \"90e1bcb1-81b1-42c2-a625-fe691fe60434\") " pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:29 crc kubenswrapper[4632]: I0313 12:45:29.239835 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e1bcb1-81b1-42c2-a625-fe691fe60434-catalog-content\") pod \"redhat-marketplace-jbtjg\" (UID: \"90e1bcb1-81b1-42c2-a625-fe691fe60434\") " pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:29 crc kubenswrapper[4632]: I0313 12:45:29.341708 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e1bcb1-81b1-42c2-a625-fe691fe60434-catalog-content\") pod \"redhat-marketplace-jbtjg\" (UID: \"90e1bcb1-81b1-42c2-a625-fe691fe60434\") " pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:29 crc kubenswrapper[4632]: I0313 12:45:29.341911 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e1bcb1-81b1-42c2-a625-fe691fe60434-utilities\") pod \"redhat-marketplace-jbtjg\" (UID: \"90e1bcb1-81b1-42c2-a625-fe691fe60434\") " pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:29 crc kubenswrapper[4632]: I0313 12:45:29.341976 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrqsw\" (UniqueName: \"kubernetes.io/projected/90e1bcb1-81b1-42c2-a625-fe691fe60434-kube-api-access-nrqsw\") pod \"redhat-marketplace-jbtjg\" (UID: \"90e1bcb1-81b1-42c2-a625-fe691fe60434\") " pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:29 crc kubenswrapper[4632]: I0313 12:45:29.342320 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e1bcb1-81b1-42c2-a625-fe691fe60434-catalog-content\") pod \"redhat-marketplace-jbtjg\" (UID: \"90e1bcb1-81b1-42c2-a625-fe691fe60434\") " pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:29 crc kubenswrapper[4632]: I0313 12:45:29.342369 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e1bcb1-81b1-42c2-a625-fe691fe60434-utilities\") pod \"redhat-marketplace-jbtjg\" (UID: \"90e1bcb1-81b1-42c2-a625-fe691fe60434\") " pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:29 crc kubenswrapper[4632]: I0313 12:45:29.362337 4632 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nrqsw\" (UniqueName: \"kubernetes.io/projected/90e1bcb1-81b1-42c2-a625-fe691fe60434-kube-api-access-nrqsw\") pod \"redhat-marketplace-jbtjg\" (UID: \"90e1bcb1-81b1-42c2-a625-fe691fe60434\") " pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:29 crc kubenswrapper[4632]: I0313 12:45:29.461663 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:30 crc kubenswrapper[4632]: I0313 12:45:30.084557 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbtjg"] Mar 13 12:45:30 crc kubenswrapper[4632]: I0313 12:45:30.135425 4632 scope.go:117] "RemoveContainer" containerID="513ba32a6f64209e9e7a4b86369065ec16320243702d6e9f6899a7182c651338" Mar 13 12:45:30 crc kubenswrapper[4632]: I0313 12:45:30.778560 4632 generic.go:334] "Generic (PLEG): container finished" podID="90e1bcb1-81b1-42c2-a625-fe691fe60434" containerID="a5eb8f564f7bb06a4d39ae737d43ff00b69617f7796a97fc2d384634398638cc" exitCode=0 Mar 13 12:45:30 crc kubenswrapper[4632]: I0313 12:45:30.778621 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbtjg" event={"ID":"90e1bcb1-81b1-42c2-a625-fe691fe60434","Type":"ContainerDied","Data":"a5eb8f564f7bb06a4d39ae737d43ff00b69617f7796a97fc2d384634398638cc"} Mar 13 12:45:30 crc kubenswrapper[4632]: I0313 12:45:30.778686 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbtjg" event={"ID":"90e1bcb1-81b1-42c2-a625-fe691fe60434","Type":"ContainerStarted","Data":"7c9d33658cf93236a127bb3fe33f8e8067d984a174cb915b8ecf156bd22eb63f"} Mar 13 12:45:31 crc kubenswrapper[4632]: I0313 12:45:31.788898 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbtjg" event={"ID":"90e1bcb1-81b1-42c2-a625-fe691fe60434","Type":"ContainerStarted","Data":"c40b8467f0fbf11bb8bc8bea09fc62d937cebf42c2af6303174926f814c485aa"} Mar 13 12:45:33 crc kubenswrapper[4632]: I0313 12:45:33.816679 4632 generic.go:334] "Generic (PLEG): container finished" podID="90e1bcb1-81b1-42c2-a625-fe691fe60434" containerID="c40b8467f0fbf11bb8bc8bea09fc62d937cebf42c2af6303174926f814c485aa" exitCode=0 Mar 13 12:45:33 crc kubenswrapper[4632]: I0313 12:45:33.817348 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbtjg" event={"ID":"90e1bcb1-81b1-42c2-a625-fe691fe60434","Type":"ContainerDied","Data":"c40b8467f0fbf11bb8bc8bea09fc62d937cebf42c2af6303174926f814c485aa"} Mar 13 12:45:34 crc kubenswrapper[4632]: I0313 12:45:34.833614 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbtjg" event={"ID":"90e1bcb1-81b1-42c2-a625-fe691fe60434","Type":"ContainerStarted","Data":"b40e35ab8281b9feedc9a3c0e3cb1babc5421097a7d08144dee8b68a590ba85f"} Mar 13 12:45:34 crc kubenswrapper[4632]: I0313 12:45:34.864398 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jbtjg" podStartSLOduration=2.39969714 podStartE2EDuration="5.864378367s" podCreationTimestamp="2026-03-13 12:45:29 +0000 UTC" firstStartedPulling="2026-03-13 12:45:30.781230444 +0000 UTC m=+9704.803760577" lastFinishedPulling="2026-03-13 12:45:34.245911671 +0000 UTC m=+9708.268441804" observedRunningTime="2026-03-13 12:45:34.858881402 +0000 UTC m=+9708.881411535" watchObservedRunningTime="2026-03-13 12:45:34.864378367 
+0000 UTC m=+9708.886908510" Mar 13 12:45:39 crc kubenswrapper[4632]: I0313 12:45:39.462853 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:39 crc kubenswrapper[4632]: I0313 12:45:39.463495 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:40 crc kubenswrapper[4632]: I0313 12:45:40.460769 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:45:40 crc kubenswrapper[4632]: I0313 12:45:40.461365 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:45:40 crc kubenswrapper[4632]: I0313 12:45:40.794083 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-jbtjg" podUID="90e1bcb1-81b1-42c2-a625-fe691fe60434" containerName="registry-server" probeResult="failure" output=< Mar 13 12:45:40 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:45:40 crc kubenswrapper[4632]: > Mar 13 12:45:49 crc kubenswrapper[4632]: I0313 12:45:49.523315 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:49 crc kubenswrapper[4632]: I0313 12:45:49.583730 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:49 crc kubenswrapper[4632]: I0313 12:45:49.762075 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbtjg"] Mar 13 12:45:50 crc kubenswrapper[4632]: I0313 12:45:50.987144 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jbtjg" podUID="90e1bcb1-81b1-42c2-a625-fe691fe60434" containerName="registry-server" containerID="cri-o://b40e35ab8281b9feedc9a3c0e3cb1babc5421097a7d08144dee8b68a590ba85f" gracePeriod=2 Mar 13 12:45:51 crc kubenswrapper[4632]: I0313 12:45:51.491650 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:51 crc kubenswrapper[4632]: I0313 12:45:51.651229 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e1bcb1-81b1-42c2-a625-fe691fe60434-utilities\") pod \"90e1bcb1-81b1-42c2-a625-fe691fe60434\" (UID: \"90e1bcb1-81b1-42c2-a625-fe691fe60434\") " Mar 13 12:45:51 crc kubenswrapper[4632]: I0313 12:45:51.651926 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90e1bcb1-81b1-42c2-a625-fe691fe60434-utilities" (OuterVolumeSpecName: "utilities") pod "90e1bcb1-81b1-42c2-a625-fe691fe60434" (UID: "90e1bcb1-81b1-42c2-a625-fe691fe60434"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:45:51 crc kubenswrapper[4632]: I0313 12:45:51.657106 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e1bcb1-81b1-42c2-a625-fe691fe60434-catalog-content\") pod \"90e1bcb1-81b1-42c2-a625-fe691fe60434\" (UID: \"90e1bcb1-81b1-42c2-a625-fe691fe60434\") " Mar 13 12:45:51 crc kubenswrapper[4632]: I0313 12:45:51.657279 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrqsw\" (UniqueName: \"kubernetes.io/projected/90e1bcb1-81b1-42c2-a625-fe691fe60434-kube-api-access-nrqsw\") pod \"90e1bcb1-81b1-42c2-a625-fe691fe60434\" (UID: \"90e1bcb1-81b1-42c2-a625-fe691fe60434\") " Mar 13 12:45:51 crc kubenswrapper[4632]: I0313 12:45:51.658390 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e1bcb1-81b1-42c2-a625-fe691fe60434-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 12:45:51 crc kubenswrapper[4632]: I0313 12:45:51.676359 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90e1bcb1-81b1-42c2-a625-fe691fe60434-kube-api-access-nrqsw" (OuterVolumeSpecName: "kube-api-access-nrqsw") pod "90e1bcb1-81b1-42c2-a625-fe691fe60434" (UID: "90e1bcb1-81b1-42c2-a625-fe691fe60434"). InnerVolumeSpecName "kube-api-access-nrqsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:45:51 crc kubenswrapper[4632]: I0313 12:45:51.683074 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90e1bcb1-81b1-42c2-a625-fe691fe60434-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90e1bcb1-81b1-42c2-a625-fe691fe60434" (UID: "90e1bcb1-81b1-42c2-a625-fe691fe60434"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:45:51 crc kubenswrapper[4632]: I0313 12:45:51.760762 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e1bcb1-81b1-42c2-a625-fe691fe60434-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 12:45:51 crc kubenswrapper[4632]: I0313 12:45:51.760804 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrqsw\" (UniqueName: \"kubernetes.io/projected/90e1bcb1-81b1-42c2-a625-fe691fe60434-kube-api-access-nrqsw\") on node \"crc\" DevicePath \"\"" Mar 13 12:45:52 crc kubenswrapper[4632]: I0313 12:45:52.001090 4632 generic.go:334] "Generic (PLEG): container finished" podID="90e1bcb1-81b1-42c2-a625-fe691fe60434" containerID="b40e35ab8281b9feedc9a3c0e3cb1babc5421097a7d08144dee8b68a590ba85f" exitCode=0 Mar 13 12:45:52 crc kubenswrapper[4632]: I0313 12:45:52.001139 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbtjg" event={"ID":"90e1bcb1-81b1-42c2-a625-fe691fe60434","Type":"ContainerDied","Data":"b40e35ab8281b9feedc9a3c0e3cb1babc5421097a7d08144dee8b68a590ba85f"} Mar 13 12:45:52 crc kubenswrapper[4632]: I0313 12:45:52.001173 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbtjg" event={"ID":"90e1bcb1-81b1-42c2-a625-fe691fe60434","Type":"ContainerDied","Data":"7c9d33658cf93236a127bb3fe33f8e8067d984a174cb915b8ecf156bd22eb63f"} Mar 13 12:45:52 crc kubenswrapper[4632]: I0313 12:45:52.001196 4632 scope.go:117] "RemoveContainer" containerID="b40e35ab8281b9feedc9a3c0e3cb1babc5421097a7d08144dee8b68a590ba85f" Mar 13 12:45:52 crc kubenswrapper[4632]: I0313 12:45:52.003514 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbtjg" Mar 13 12:45:52 crc kubenswrapper[4632]: I0313 12:45:52.036082 4632 scope.go:117] "RemoveContainer" containerID="c40b8467f0fbf11bb8bc8bea09fc62d937cebf42c2af6303174926f814c485aa" Mar 13 12:45:52 crc kubenswrapper[4632]: I0313 12:45:52.063258 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbtjg"] Mar 13 12:45:52 crc kubenswrapper[4632]: I0313 12:45:52.072055 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbtjg"] Mar 13 12:45:52 crc kubenswrapper[4632]: I0313 12:45:52.101108 4632 scope.go:117] "RemoveContainer" containerID="a5eb8f564f7bb06a4d39ae737d43ff00b69617f7796a97fc2d384634398638cc" Mar 13 12:45:52 crc kubenswrapper[4632]: I0313 12:45:52.125647 4632 scope.go:117] "RemoveContainer" containerID="b40e35ab8281b9feedc9a3c0e3cb1babc5421097a7d08144dee8b68a590ba85f" Mar 13 12:45:52 crc kubenswrapper[4632]: E0313 12:45:52.126320 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b40e35ab8281b9feedc9a3c0e3cb1babc5421097a7d08144dee8b68a590ba85f\": container with ID starting with b40e35ab8281b9feedc9a3c0e3cb1babc5421097a7d08144dee8b68a590ba85f not found: ID does not exist" containerID="b40e35ab8281b9feedc9a3c0e3cb1babc5421097a7d08144dee8b68a590ba85f" Mar 13 12:45:52 crc kubenswrapper[4632]: I0313 12:45:52.126363 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40e35ab8281b9feedc9a3c0e3cb1babc5421097a7d08144dee8b68a590ba85f"} err="failed to get container status \"b40e35ab8281b9feedc9a3c0e3cb1babc5421097a7d08144dee8b68a590ba85f\": rpc error: code = NotFound desc = could not find container \"b40e35ab8281b9feedc9a3c0e3cb1babc5421097a7d08144dee8b68a590ba85f\": container with ID starting with b40e35ab8281b9feedc9a3c0e3cb1babc5421097a7d08144dee8b68a590ba85f not found: ID does not exist" Mar 13 12:45:52 crc kubenswrapper[4632]: I0313 12:45:52.126390 4632 scope.go:117] "RemoveContainer" containerID="c40b8467f0fbf11bb8bc8bea09fc62d937cebf42c2af6303174926f814c485aa" Mar 13 12:45:52 crc kubenswrapper[4632]: E0313 12:45:52.126613 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c40b8467f0fbf11bb8bc8bea09fc62d937cebf42c2af6303174926f814c485aa\": container with ID starting with c40b8467f0fbf11bb8bc8bea09fc62d937cebf42c2af6303174926f814c485aa not found: ID does not exist" containerID="c40b8467f0fbf11bb8bc8bea09fc62d937cebf42c2af6303174926f814c485aa" Mar 13 12:45:52 crc kubenswrapper[4632]: I0313 12:45:52.126636 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c40b8467f0fbf11bb8bc8bea09fc62d937cebf42c2af6303174926f814c485aa"} err="failed to get container status \"c40b8467f0fbf11bb8bc8bea09fc62d937cebf42c2af6303174926f814c485aa\": rpc error: code = NotFound desc = could not find container \"c40b8467f0fbf11bb8bc8bea09fc62d937cebf42c2af6303174926f814c485aa\": container with ID starting with c40b8467f0fbf11bb8bc8bea09fc62d937cebf42c2af6303174926f814c485aa not found: ID does not exist" Mar 13 12:45:52 crc kubenswrapper[4632]: I0313 12:45:52.126650 4632 scope.go:117] "RemoveContainer" containerID="a5eb8f564f7bb06a4d39ae737d43ff00b69617f7796a97fc2d384634398638cc" Mar 13 12:45:52 crc kubenswrapper[4632]: E0313 12:45:52.126842 4632 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a5eb8f564f7bb06a4d39ae737d43ff00b69617f7796a97fc2d384634398638cc\": container with ID starting with a5eb8f564f7bb06a4d39ae737d43ff00b69617f7796a97fc2d384634398638cc not found: ID does not exist" containerID="a5eb8f564f7bb06a4d39ae737d43ff00b69617f7796a97fc2d384634398638cc" Mar 13 12:45:52 crc kubenswrapper[4632]: I0313 12:45:52.126871 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5eb8f564f7bb06a4d39ae737d43ff00b69617f7796a97fc2d384634398638cc"} err="failed to get container status \"a5eb8f564f7bb06a4d39ae737d43ff00b69617f7796a97fc2d384634398638cc\": rpc error: code = NotFound desc = could not find container \"a5eb8f564f7bb06a4d39ae737d43ff00b69617f7796a97fc2d384634398638cc\": container with ID starting with a5eb8f564f7bb06a4d39ae737d43ff00b69617f7796a97fc2d384634398638cc not found: ID does not exist" Mar 13 12:45:54 crc kubenswrapper[4632]: I0313 12:45:54.060054 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90e1bcb1-81b1-42c2-a625-fe691fe60434" path="/var/lib/kubelet/pods/90e1bcb1-81b1-42c2-a625-fe691fe60434/volumes" Mar 13 12:45:57 crc kubenswrapper[4632]: I0313 12:45:57.241015 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-756c4b86c6-rm274_dbc1c989-5fa1-46dc-818e-8d609c069e34/barbican-api/0.log" Mar 13 12:45:57 crc kubenswrapper[4632]: I0313 12:45:57.464791 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-756c4b86c6-rm274_dbc1c989-5fa1-46dc-818e-8d609c069e34/barbican-api-log/0.log" Mar 13 12:45:57 crc kubenswrapper[4632]: I0313 12:45:57.622743 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6c97cdfb86-z2dqq_58332dcc-b1a6-4550-9c8b-8bbb82c04ff0/barbican-keystone-listener/0.log" Mar 13 12:45:57 crc kubenswrapper[4632]: I0313 12:45:57.922333 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6c97cdfb86-z2dqq_58332dcc-b1a6-4550-9c8b-8bbb82c04ff0/barbican-keystone-listener-log/0.log" Mar 13 12:45:57 crc kubenswrapper[4632]: I0313 12:45:57.942375 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fc9b6f5b5-6ps9m_51b847ef-ada2-456f-819d-0084fbb17185/barbican-worker-log/0.log" Mar 13 12:45:57 crc kubenswrapper[4632]: I0313 12:45:57.962260 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fc9b6f5b5-6ps9m_51b847ef-ada2-456f-819d-0084fbb17185/barbican-worker/0.log" Mar 13 12:45:58 crc kubenswrapper[4632]: I0313 12:45:58.221484 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-fvfjp_684a2658-ba02-40cf-a371-ec2a8934c0d3/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 12:45:58 crc kubenswrapper[4632]: I0313 12:45:58.383415 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_046f071d-f091-4681-8a9b-06c7e7dc2192/ceilometer-central-agent/0.log" Mar 13 12:45:59 crc kubenswrapper[4632]: I0313 12:45:59.423601 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_046f071d-f091-4681-8a9b-06c7e7dc2192/ceilometer-notification-agent/0.log" Mar 13 12:45:59 crc kubenswrapper[4632]: I0313 12:45:59.470606 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_046f071d-f091-4681-8a9b-06c7e7dc2192/sg-core/0.log" Mar 13 12:45:59 
crc kubenswrapper[4632]: I0313 12:45:59.554565 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_046f071d-f091-4681-8a9b-06c7e7dc2192/proxy-httpd/0.log" Mar 13 12:45:59 crc kubenswrapper[4632]: I0313 12:45:59.848193 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6785ba8c-a47b-4851-945e-c07ccecb9911/cinder-api/0.log" Mar 13 12:45:59 crc kubenswrapper[4632]: I0313 12:45:59.882927 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6785ba8c-a47b-4851-945e-c07ccecb9911/cinder-api-log/0.log" Mar 13 12:45:59 crc kubenswrapper[4632]: I0313 12:45:59.988152 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d2c1c19b-95a5-4db1-8e54-36fe83704b25/cinder-scheduler/1.log" Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.159837 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556766-lklf4"] Mar 13 12:46:00 crc kubenswrapper[4632]: E0313 12:46:00.160244 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e1bcb1-81b1-42c2-a625-fe691fe60434" containerName="registry-server" Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.160261 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e1bcb1-81b1-42c2-a625-fe691fe60434" containerName="registry-server" Mar 13 12:46:00 crc kubenswrapper[4632]: E0313 12:46:00.160279 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e1bcb1-81b1-42c2-a625-fe691fe60434" containerName="extract-utilities" Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.160286 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e1bcb1-81b1-42c2-a625-fe691fe60434" containerName="extract-utilities" Mar 13 12:46:00 crc kubenswrapper[4632]: E0313 12:46:00.160300 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e1bcb1-81b1-42c2-a625-fe691fe60434" containerName="extract-content" Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.160306 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e1bcb1-81b1-42c2-a625-fe691fe60434" containerName="extract-content" Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.160503 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="90e1bcb1-81b1-42c2-a625-fe691fe60434" containerName="registry-server" Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.161228 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556766-lklf4" Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.163375 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.167296 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.167321 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.188134 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556766-lklf4"] Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.279874 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d2c1c19b-95a5-4db1-8e54-36fe83704b25/cinder-scheduler/0.log" Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.320803 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltmx4\" (UniqueName: \"kubernetes.io/projected/e38323af-ae58-48ce-979e-c8905218b4fe-kube-api-access-ltmx4\") pod \"auto-csr-approver-29556766-lklf4\" (UID: \"e38323af-ae58-48ce-979e-c8905218b4fe\") " pod="openshift-infra/auto-csr-approver-29556766-lklf4" Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.400279 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d2c1c19b-95a5-4db1-8e54-36fe83704b25/probe/0.log" Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.422273 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltmx4\" (UniqueName: \"kubernetes.io/projected/e38323af-ae58-48ce-979e-c8905218b4fe-kube-api-access-ltmx4\") pod \"auto-csr-approver-29556766-lklf4\" (UID: \"e38323af-ae58-48ce-979e-c8905218b4fe\") " pod="openshift-infra/auto-csr-approver-29556766-lklf4" Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.430683 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-tpk84_bcd0e6df-81c2-4541-b0b5-d5c539f03451/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.760908 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltmx4\" (UniqueName: \"kubernetes.io/projected/e38323af-ae58-48ce-979e-c8905218b4fe-kube-api-access-ltmx4\") pod \"auto-csr-approver-29556766-lklf4\" (UID: \"e38323af-ae58-48ce-979e-c8905218b4fe\") " pod="openshift-infra/auto-csr-approver-29556766-lklf4" Mar 13 12:46:00 crc kubenswrapper[4632]: I0313 12:46:00.785012 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556766-lklf4" Mar 13 12:46:01 crc kubenswrapper[4632]: I0313 12:46:01.049540 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-5fcgw_4931647b-bba4-489f-b5c1-cbe714834388/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 12:46:01 crc kubenswrapper[4632]: I0313 12:46:01.304281 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7b457785b5-7hzp6_1aca78bb-c923-4964-9b4c-5f7fb50badba/init/0.log" Mar 13 12:46:01 crc kubenswrapper[4632]: I0313 12:46:01.358451 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556766-lklf4"] Mar 13 12:46:01 crc kubenswrapper[4632]: I0313 12:46:01.594241 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7b457785b5-7hzp6_1aca78bb-c923-4964-9b4c-5f7fb50badba/init/0.log" Mar 13 12:46:01 crc kubenswrapper[4632]: I0313 12:46:01.717651 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-754cp_0d75181a-4c91-485e-8bcd-02e2aedd4d45/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 12:46:01 crc kubenswrapper[4632]: I0313 12:46:01.888536 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7b457785b5-7hzp6_1aca78bb-c923-4964-9b4c-5f7fb50badba/dnsmasq-dns/0.log" Mar 13 12:46:02 crc kubenswrapper[4632]: I0313 12:46:02.052039 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a2394af9-fd85-4291-8d57-c2bff02eccce/glance-httpd/0.log" Mar 13 12:46:02 crc kubenswrapper[4632]: I0313 12:46:02.097819 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556766-lklf4" event={"ID":"e38323af-ae58-48ce-979e-c8905218b4fe","Type":"ContainerStarted","Data":"9f75319750315cb6a2a63bbf725562683577f1266cb27af1c861282ce8fe51d0"} Mar 13 12:46:02 crc kubenswrapper[4632]: I0313 12:46:02.125650 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a2394af9-fd85-4291-8d57-c2bff02eccce/glance-log/0.log" Mar 13 12:46:02 crc kubenswrapper[4632]: I0313 12:46:02.644579 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_97cf3e4a-cbe1-441c-8652-281a30fcf432/glance-httpd/0.log" Mar 13 12:46:02 crc kubenswrapper[4632]: I0313 12:46:02.846283 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_97cf3e4a-cbe1-441c-8652-281a30fcf432/glance-log/0.log" Mar 13 12:46:03 crc kubenswrapper[4632]: I0313 12:46:03.631709 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-c959f64fb-hx4t8_53145947-4584-4cef-b085-a0e0f550dde9/heat-engine/0.log" Mar 13 12:46:04 crc kubenswrapper[4632]: I0313 12:46:04.125294 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556766-lklf4" event={"ID":"e38323af-ae58-48ce-979e-c8905218b4fe","Type":"ContainerStarted","Data":"e2d392c178854d8d02c1d90a74a70ca0dce9ae28135802be619d355191eb7f40"} Mar 13 12:46:04 crc kubenswrapper[4632]: I0313 12:46:04.149167 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556766-lklf4" podStartSLOduration=2.9709213070000002 podStartE2EDuration="4.149147718s" podCreationTimestamp="2026-03-13 12:46:00 
+0000 UTC" firstStartedPulling="2026-03-13 12:46:01.354593544 +0000 UTC m=+9735.377123667" lastFinishedPulling="2026-03-13 12:46:02.532819945 +0000 UTC m=+9736.555350078" observedRunningTime="2026-03-13 12:46:04.142988828 +0000 UTC m=+9738.165518971" watchObservedRunningTime="2026-03-13 12:46:04.149147718 +0000 UTC m=+9738.171677851" Mar 13 12:46:04 crc kubenswrapper[4632]: I0313 12:46:04.269549 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-689764498d-rg7vt_5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c/horizon/3.log" Mar 13 12:46:04 crc kubenswrapper[4632]: I0313 12:46:04.327138 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-689764498d-rg7vt_5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c/horizon/2.log" Mar 13 12:46:04 crc kubenswrapper[4632]: I0313 12:46:04.684653 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-n8wkn_41861d23-3e34-4f91-bafc-1b7eeee125db/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 12:46:04 crc kubenswrapper[4632]: I0313 12:46:04.754848 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7fcc47f8dc-lhqhx_00b138c6-9e7c-4782-8454-1a4c035b1fbc/heat-api/0.log" Mar 13 12:46:05 crc kubenswrapper[4632]: I0313 12:46:05.077452 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-86bb565f45-ntq5k_de2e3cc7-c5cb-449a-a19c-2d671f08c656/heat-cfnapi/0.log" Mar 13 12:46:05 crc kubenswrapper[4632]: I0313 12:46:05.246961 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-srdvg_78de7f45-2a11-4cbe-84bf-46c4307a1459/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 12:46:05 crc kubenswrapper[4632]: I0313 12:46:05.413485 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29556661-2p4wf_6c20fa3e-2873-4076-b17a-3ee171199959/keystone-cron/0.log" Mar 13 12:46:05 crc kubenswrapper[4632]: I0313 12:46:05.619373 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-689764498d-rg7vt_5a03baf3-a8ba-4b13-9fa9-17eafe7b9b7c/horizon-log/0.log" Mar 13 12:46:05 crc kubenswrapper[4632]: I0313 12:46:05.649678 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29556721-l54tk_c8daf4c2-f012-4d18-b11a-e666e00d6a03/keystone-cron/0.log" Mar 13 12:46:06 crc kubenswrapper[4632]: I0313 12:46:06.154615 4632 generic.go:334] "Generic (PLEG): container finished" podID="e38323af-ae58-48ce-979e-c8905218b4fe" containerID="e2d392c178854d8d02c1d90a74a70ca0dce9ae28135802be619d355191eb7f40" exitCode=0 Mar 13 12:46:06 crc kubenswrapper[4632]: I0313 12:46:06.154658 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556766-lklf4" event={"ID":"e38323af-ae58-48ce-979e-c8905218b4fe","Type":"ContainerDied","Data":"e2d392c178854d8d02c1d90a74a70ca0dce9ae28135802be619d355191eb7f40"} Mar 13 12:46:06 crc kubenswrapper[4632]: I0313 12:46:06.187914 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_26ce3314-15f1-490c-83e5-a1c609212437/kube-state-metrics/0.log" Mar 13 12:46:06 crc kubenswrapper[4632]: I0313 12:46:06.326144 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-skjrh_ed1a2c50-a476-43ca-9764-e0ebffb14134/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 12:46:07 crc kubenswrapper[4632]: 
I0313 12:46:07.251269 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-695f666b49-nw48z_3a5c1185-e64b-44a9-b4b8-0108d4e80f9a/neutron-httpd/0.log" Mar 13 12:46:07 crc kubenswrapper[4632]: I0313 12:46:07.708549 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-c45jq_96e4ce1c-8f09-4563-864f-da1f95bdd500/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 12:46:07 crc kubenswrapper[4632]: I0313 12:46:07.733853 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556766-lklf4" Mar 13 12:46:07 crc kubenswrapper[4632]: I0313 12:46:07.920459 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltmx4\" (UniqueName: \"kubernetes.io/projected/e38323af-ae58-48ce-979e-c8905218b4fe-kube-api-access-ltmx4\") pod \"e38323af-ae58-48ce-979e-c8905218b4fe\" (UID: \"e38323af-ae58-48ce-979e-c8905218b4fe\") " Mar 13 12:46:07 crc kubenswrapper[4632]: I0313 12:46:07.956674 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e38323af-ae58-48ce-979e-c8905218b4fe-kube-api-access-ltmx4" (OuterVolumeSpecName: "kube-api-access-ltmx4") pod "e38323af-ae58-48ce-979e-c8905218b4fe" (UID: "e38323af-ae58-48ce-979e-c8905218b4fe"). InnerVolumeSpecName "kube-api-access-ltmx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:46:08 crc kubenswrapper[4632]: I0313 12:46:08.022246 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltmx4\" (UniqueName: \"kubernetes.io/projected/e38323af-ae58-48ce-979e-c8905218b4fe-kube-api-access-ltmx4\") on node \"crc\" DevicePath \"\"" Mar 13 12:46:08 crc kubenswrapper[4632]: I0313 12:46:08.163985 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-f664b756d-8fxf4_df64dbf7-8526-4fab-950a-4afefe47ec77/keystone-api/0.log" Mar 13 12:46:08 crc kubenswrapper[4632]: I0313 12:46:08.183813 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556766-lklf4" event={"ID":"e38323af-ae58-48ce-979e-c8905218b4fe","Type":"ContainerDied","Data":"9f75319750315cb6a2a63bbf725562683577f1266cb27af1c861282ce8fe51d0"} Mar 13 12:46:08 crc kubenswrapper[4632]: I0313 12:46:08.183850 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f75319750315cb6a2a63bbf725562683577f1266cb27af1c861282ce8fe51d0" Mar 13 12:46:08 crc kubenswrapper[4632]: I0313 12:46:08.183911 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556766-lklf4" Mar 13 12:46:08 crc kubenswrapper[4632]: I0313 12:46:08.321171 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556760-fxrw2"] Mar 13 12:46:08 crc kubenswrapper[4632]: I0313 12:46:08.373117 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556760-fxrw2"] Mar 13 12:46:09 crc kubenswrapper[4632]: I0313 12:46:09.083499 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-695f666b49-nw48z_3a5c1185-e64b-44a9-b4b8-0108d4e80f9a/neutron-api/0.log" Mar 13 12:46:09 crc kubenswrapper[4632]: I0313 12:46:09.237232 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_dbe53f0a-8bf3-4572-b5c8-01d5ed72c426/nova-cell0-conductor-conductor/0.log" Mar 13 12:46:09 crc kubenswrapper[4632]: I0313 12:46:09.788841 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_febcbdc5-25a6-46f7-8c06-d6f45624a466/nova-cell1-conductor-conductor/0.log" Mar 13 12:46:10 crc kubenswrapper[4632]: I0313 12:46:10.059414 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58d6a88c-498f-4887-998a-c3e3a1a2fef2" path="/var/lib/kubelet/pods/58d6a88c-498f-4887-998a-c3e3a1a2fef2/volumes" Mar 13 12:46:10 crc kubenswrapper[4632]: I0313 12:46:10.172155 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3ef77ea1-fee2-432d-9aba-c0acfedb4e69/nova-api-log/0.log" Mar 13 12:46:10 crc kubenswrapper[4632]: I0313 12:46:10.266089 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_bf01307f-1529-4aa7-95fc-8af84b061970/nova-cell1-novncproxy-novncproxy/0.log" Mar 13 12:46:10 crc kubenswrapper[4632]: I0313 12:46:10.467205 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:46:10 crc kubenswrapper[4632]: I0313 12:46:10.467535 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:46:10 crc kubenswrapper[4632]: I0313 12:46:10.467584 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 12:46:10 crc kubenswrapper[4632]: I0313 12:46:10.472559 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac0aa587db0bc6f14a810b1c0a407933497eafb76c5051481d9814592d0380b3"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 12:46:10 crc kubenswrapper[4632]: I0313 12:46:10.472704 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" 
containerID="cri-o://ac0aa587db0bc6f14a810b1c0a407933497eafb76c5051481d9814592d0380b3" gracePeriod=600 Mar 13 12:46:10 crc kubenswrapper[4632]: I0313 12:46:10.651702 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-dl4cq_c897af06-c467-4ec3-aa76-c29a3ea3a462/nova-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 12:46:10 crc kubenswrapper[4632]: I0313 12:46:10.721915 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b75084d0-782c-4f7e-8cc0-62ac424eec6f/nova-metadata-log/0.log" Mar 13 12:46:11 crc kubenswrapper[4632]: I0313 12:46:11.223356 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="ac0aa587db0bc6f14a810b1c0a407933497eafb76c5051481d9814592d0380b3" exitCode=0 Mar 13 12:46:11 crc kubenswrapper[4632]: I0313 12:46:11.223651 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"ac0aa587db0bc6f14a810b1c0a407933497eafb76c5051481d9814592d0380b3"} Mar 13 12:46:11 crc kubenswrapper[4632]: I0313 12:46:11.223712 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086"} Mar 13 12:46:11 crc kubenswrapper[4632]: I0313 12:46:11.223777 4632 scope.go:117] "RemoveContainer" containerID="23322afa11cd9b7f3f2b893b0662422a41461a3fe0777a0243c232f90f5a4eb9" Mar 13 12:46:11 crc kubenswrapper[4632]: I0313 12:46:11.533017 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1761ca69-46fd-4375-af60-22b3e77c19a2/mysql-bootstrap/0.log" Mar 13 12:46:11 crc kubenswrapper[4632]: I0313 12:46:11.720542 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_bd274a76-bf05-4f69-8d56-4844012a1fd1/nova-scheduler-scheduler/0.log" Mar 13 12:46:11 crc kubenswrapper[4632]: I0313 12:46:11.753663 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3ef77ea1-fee2-432d-9aba-c0acfedb4e69/nova-api-api/0.log" Mar 13 12:46:11 crc kubenswrapper[4632]: I0313 12:46:11.835821 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1761ca69-46fd-4375-af60-22b3e77c19a2/mysql-bootstrap/0.log" Mar 13 12:46:12 crc kubenswrapper[4632]: I0313 12:46:12.132517 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1761ca69-46fd-4375-af60-22b3e77c19a2/galera/1.log" Mar 13 12:46:12 crc kubenswrapper[4632]: I0313 12:46:12.158035 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1761ca69-46fd-4375-af60-22b3e77c19a2/galera/0.log" Mar 13 12:46:12 crc kubenswrapper[4632]: I0313 12:46:12.509736 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2cb2f546-c8c5-4ec9-aba8-d3782431de10/mysql-bootstrap/0.log" Mar 13 12:46:13 crc kubenswrapper[4632]: I0313 12:46:13.255316 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2cb2f546-c8c5-4ec9-aba8-d3782431de10/galera/1.log" Mar 13 12:46:13 crc kubenswrapper[4632]: I0313 12:46:13.293724 4632 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_2cb2f546-c8c5-4ec9-aba8-d3782431de10/galera/0.log" Mar 13 12:46:13 crc kubenswrapper[4632]: I0313 12:46:13.306010 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2cb2f546-c8c5-4ec9-aba8-d3782431de10/mysql-bootstrap/0.log" Mar 13 12:46:13 crc kubenswrapper[4632]: I0313 12:46:13.600289 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_aef9680f-df77-4e2e-ac53-9d7530c2270c/openstackclient/0.log" Mar 13 12:46:13 crc kubenswrapper[4632]: I0313 12:46:13.960836 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9kd7r_eab798dd-482a-4c66-983b-908966cd1f94/ovn-controller/0.log" Mar 13 12:46:14 crc kubenswrapper[4632]: I0313 12:46:14.095290 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-798sf_9246fc4f-3716-4a8b-9854-52137cf04e9a/openstack-network-exporter/0.log" Mar 13 12:46:14 crc kubenswrapper[4632]: I0313 12:46:14.291471 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c5xnp_d2677b19-4860-497e-a473-6d52d4901d8c/ovsdb-server-init/0.log" Mar 13 12:46:14 crc kubenswrapper[4632]: I0313 12:46:14.516689 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c5xnp_d2677b19-4860-497e-a473-6d52d4901d8c/ovsdb-server-init/0.log" Mar 13 12:46:14 crc kubenswrapper[4632]: I0313 12:46:14.990131 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c5xnp_d2677b19-4860-497e-a473-6d52d4901d8c/ovs-vswitchd/0.log" Mar 13 12:46:15 crc kubenswrapper[4632]: I0313 12:46:15.109875 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c5xnp_d2677b19-4860-497e-a473-6d52d4901d8c/ovsdb-server/0.log" Mar 13 12:46:15 crc kubenswrapper[4632]: I0313 12:46:15.307256 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-6t9b6_96ca1247-6625-4b08-b155-34c56f02ec04/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 12:46:15 crc kubenswrapper[4632]: I0313 12:46:15.351989 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9a169306-9d47-41ae-8667-1efb89c43d82/openstack-network-exporter/0.log" Mar 13 12:46:15 crc kubenswrapper[4632]: I0313 12:46:15.576907 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9a169306-9d47-41ae-8667-1efb89c43d82/ovn-northd/0.log" Mar 13 12:46:15 crc kubenswrapper[4632]: I0313 12:46:15.605539 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4ee148f1-cc66-4aa0-b603-c8a70f3554f5/openstack-network-exporter/0.log" Mar 13 12:46:15 crc kubenswrapper[4632]: I0313 12:46:15.858769 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4ee148f1-cc66-4aa0-b603-c8a70f3554f5/ovsdbserver-nb/0.log" Mar 13 12:46:15 crc kubenswrapper[4632]: I0313 12:46:15.981535 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b75084d0-782c-4f7e-8cc0-62ac424eec6f/nova-metadata-metadata/0.log" Mar 13 12:46:16 crc kubenswrapper[4632]: I0313 12:46:16.065062 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5529a725-48d8-4a60-91cd-775a4b520c20/openstack-network-exporter/0.log" Mar 13 12:46:16 crc kubenswrapper[4632]: I0313 12:46:16.203633 4632 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_5529a725-48d8-4a60-91cd-775a4b520c20/ovsdbserver-sb/0.log" Mar 13 12:46:16 crc kubenswrapper[4632]: I0313 12:46:16.733008 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a3d80d9f-c956-40f5-b2e1-8aea2f136b6e/setup-container/0.log" Mar 13 12:46:16 crc kubenswrapper[4632]: I0313 12:46:16.913885 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6db55c595b-pwgcg_ab896d5b-a5b6-46a3-84d8-c3a8c968eac0/placement-api/0.log" Mar 13 12:46:16 crc kubenswrapper[4632]: I0313 12:46:16.966741 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a3d80d9f-c956-40f5-b2e1-8aea2f136b6e/setup-container/0.log" Mar 13 12:46:17 crc kubenswrapper[4632]: I0313 12:46:17.087062 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6db55c595b-pwgcg_ab896d5b-a5b6-46a3-84d8-c3a8c968eac0/placement-log/0.log" Mar 13 12:46:17 crc kubenswrapper[4632]: I0313 12:46:17.099177 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a3d80d9f-c956-40f5-b2e1-8aea2f136b6e/rabbitmq/0.log" Mar 13 12:46:17 crc kubenswrapper[4632]: I0313 12:46:17.321485 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e/setup-container/0.log" Mar 13 12:46:17 crc kubenswrapper[4632]: I0313 12:46:17.567344 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e/rabbitmq/0.log" Mar 13 12:46:17 crc kubenswrapper[4632]: I0313 12:46:17.573198 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c8eb7c17-3ca6-4538-9d8b-b46cdfafb69e/setup-container/0.log" Mar 13 12:46:17 crc kubenswrapper[4632]: I0313 12:46:17.734779 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-fzxsz_4eaeef27-fa4c-41d9-a197-a780a6a6cebd/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 12:46:17 crc kubenswrapper[4632]: I0313 12:46:17.848746 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-5s64h_1dc9191f-32b9-45b9-b49f-fd704075f0a5/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 12:46:18 crc kubenswrapper[4632]: I0313 12:46:18.033804 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-r6nh9_0ea59acf-3206-492e-a7a8-bf855823d92c/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 12:46:18 crc kubenswrapper[4632]: I0313 12:46:18.391259 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-f9rbk_f69a3b21-eb1c-4300-91dc-55766900da95/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 12:46:18 crc kubenswrapper[4632]: I0313 12:46:18.423144 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-55n7g_9ff4122d-b9f1-4dd0-80dc-deb9d84760e1/ssh-known-hosts-edpm-deployment/0.log" Mar 13 12:46:18 crc kubenswrapper[4632]: I0313 12:46:18.931981 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7dbf8b9ddc-6p5vh_03ca050c-63a7-4b37-91fe-fe5c322cca78/proxy-server/0.log" Mar 13 12:46:18 crc kubenswrapper[4632]: I0313 12:46:18.988916 4632 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-ring-rebalance-mkdcg_bc39c52e-008f-40c1-b93b-532707127fcd/swift-ring-rebalance/0.log" Mar 13 12:46:19 crc kubenswrapper[4632]: I0313 12:46:19.111320 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7dbf8b9ddc-6p5vh_03ca050c-63a7-4b37-91fe-fe5c322cca78/proxy-httpd/0.log" Mar 13 12:46:19 crc kubenswrapper[4632]: I0313 12:46:19.232248 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e37b3d77-de2e-4be9-9984-550d4ba0f2f0/account-auditor/0.log" Mar 13 12:46:19 crc kubenswrapper[4632]: I0313 12:46:19.298281 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e37b3d77-de2e-4be9-9984-550d4ba0f2f0/account-reaper/0.log" Mar 13 12:46:19 crc kubenswrapper[4632]: I0313 12:46:19.500081 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e37b3d77-de2e-4be9-9984-550d4ba0f2f0/container-auditor/0.log" Mar 13 12:46:19 crc kubenswrapper[4632]: I0313 12:46:19.542089 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e37b3d77-de2e-4be9-9984-550d4ba0f2f0/account-replicator/0.log" Mar 13 12:46:19 crc kubenswrapper[4632]: I0313 12:46:19.624918 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e37b3d77-de2e-4be9-9984-550d4ba0f2f0/account-server/0.log" Mar 13 12:46:19 crc kubenswrapper[4632]: I0313 12:46:19.743504 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e37b3d77-de2e-4be9-9984-550d4ba0f2f0/container-replicator/0.log" Mar 13 12:46:19 crc kubenswrapper[4632]: I0313 12:46:19.812475 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e37b3d77-de2e-4be9-9984-550d4ba0f2f0/container-server/0.log" Mar 13 12:46:19 crc kubenswrapper[4632]: I0313 12:46:19.890260 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e37b3d77-de2e-4be9-9984-550d4ba0f2f0/container-updater/0.log" Mar 13 12:46:19 crc kubenswrapper[4632]: I0313 12:46:19.960829 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e37b3d77-de2e-4be9-9984-550d4ba0f2f0/object-auditor/0.log" Mar 13 12:46:20 crc kubenswrapper[4632]: I0313 12:46:20.070672 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e37b3d77-de2e-4be9-9984-550d4ba0f2f0/object-expirer/0.log" Mar 13 12:46:20 crc kubenswrapper[4632]: I0313 12:46:20.166386 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e37b3d77-de2e-4be9-9984-550d4ba0f2f0/object-server/0.log" Mar 13 12:46:20 crc kubenswrapper[4632]: I0313 12:46:20.212161 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e37b3d77-de2e-4be9-9984-550d4ba0f2f0/object-replicator/0.log" Mar 13 12:46:20 crc kubenswrapper[4632]: I0313 12:46:20.359209 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e37b3d77-de2e-4be9-9984-550d4ba0f2f0/object-updater/0.log" Mar 13 12:46:20 crc kubenswrapper[4632]: I0313 12:46:20.461026 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e37b3d77-de2e-4be9-9984-550d4ba0f2f0/swift-recon-cron/0.log" Mar 13 12:46:20 crc kubenswrapper[4632]: I0313 12:46:20.478179 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e37b3d77-de2e-4be9-9984-550d4ba0f2f0/rsync/0.log" Mar 13 12:46:20 
Mar 13 12:46:21 crc kubenswrapper[4632]: I0313 12:46:21.095720 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-thread-testing_611401cc-04fe-4276-82fa-a896182802d4/tempest-tests-tempest-tests-runner/0.log"
Mar 13 12:46:21 crc kubenswrapper[4632]: I0313 12:46:21.175676 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_c4836490-7b24-4245-bf50-7d590576f21e/test-operator-logs-container/0.log"
Mar 13 12:46:21 crc kubenswrapper[4632]: I0313 12:46:21.279772 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-multi-thread-testing_a62e0eae-95dd-40a3-a489-80646fde4301/tempest-tests-tempest-tests-runner/0.log"
Mar 13 12:46:21 crc kubenswrapper[4632]: I0313 12:46:21.485122 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6w5gg_a1c30ff2-4a23-4fb1-b689-59318014bf57/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Mar 13 12:46:30 crc kubenswrapper[4632]: I0313 12:46:30.364507 4632 scope.go:117] "RemoveContainer" containerID="a5bdb6d7b1972d01ea3faadd8b4d91d40f96718626d20034621dcf3eda3e5f37"
Mar 13 12:46:38 crc kubenswrapper[4632]: I0313 12:46:38.838071 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d9100748-6b15-4ccf-b961-aab1135f08d1/memcached/0.log"
Mar 13 12:46:59 crc kubenswrapper[4632]: I0313 12:46:59.841929 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wcgkr"]
Mar 13 12:46:59 crc kubenswrapper[4632]: E0313 12:46:59.842927 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e38323af-ae58-48ce-979e-c8905218b4fe" containerName="oc"
Mar 13 12:46:59 crc kubenswrapper[4632]: I0313 12:46:59.842963 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="e38323af-ae58-48ce-979e-c8905218b4fe" containerName="oc"
Mar 13 12:46:59 crc kubenswrapper[4632]: I0313 12:46:59.843178 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="e38323af-ae58-48ce-979e-c8905218b4fe" containerName="oc"
Mar 13 12:46:59 crc kubenswrapper[4632]: I0313 12:46:59.848006 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wcgkr"
Mar 13 12:46:59 crc kubenswrapper[4632]: I0313 12:46:59.884285 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wcgkr"]
Mar 13 12:46:59 crc kubenswrapper[4632]: I0313 12:46:59.990662 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/964e423d-b9e8-4f29-af5d-84b106ae8159-catalog-content\") pod \"redhat-operators-wcgkr\" (UID: \"964e423d-b9e8-4f29-af5d-84b106ae8159\") " pod="openshift-marketplace/redhat-operators-wcgkr"
Mar 13 12:46:59 crc kubenswrapper[4632]: I0313 12:46:59.990809 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/964e423d-b9e8-4f29-af5d-84b106ae8159-utilities\") pod \"redhat-operators-wcgkr\" (UID: \"964e423d-b9e8-4f29-af5d-84b106ae8159\") " pod="openshift-marketplace/redhat-operators-wcgkr"
Mar 13 12:46:59 crc kubenswrapper[4632]: I0313 12:46:59.990844 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4frc4\" (UniqueName: \"kubernetes.io/projected/964e423d-b9e8-4f29-af5d-84b106ae8159-kube-api-access-4frc4\") pod \"redhat-operators-wcgkr\" (UID: \"964e423d-b9e8-4f29-af5d-84b106ae8159\") " pod="openshift-marketplace/redhat-operators-wcgkr"
Mar 13 12:47:00 crc kubenswrapper[4632]: I0313 12:47:00.104109 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4frc4\" (UniqueName: \"kubernetes.io/projected/964e423d-b9e8-4f29-af5d-84b106ae8159-kube-api-access-4frc4\") pod \"redhat-operators-wcgkr\" (UID: \"964e423d-b9e8-4f29-af5d-84b106ae8159\") " pod="openshift-marketplace/redhat-operators-wcgkr"
Mar 13 12:47:00 crc kubenswrapper[4632]: I0313 12:47:00.104328 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/964e423d-b9e8-4f29-af5d-84b106ae8159-catalog-content\") pod \"redhat-operators-wcgkr\" (UID: \"964e423d-b9e8-4f29-af5d-84b106ae8159\") " pod="openshift-marketplace/redhat-operators-wcgkr"
Mar 13 12:47:00 crc kubenswrapper[4632]: I0313 12:47:00.104746 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/964e423d-b9e8-4f29-af5d-84b106ae8159-utilities\") pod \"redhat-operators-wcgkr\" (UID: \"964e423d-b9e8-4f29-af5d-84b106ae8159\") " pod="openshift-marketplace/redhat-operators-wcgkr"
Mar 13 12:47:00 crc kubenswrapper[4632]: I0313 12:47:00.105505 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/964e423d-b9e8-4f29-af5d-84b106ae8159-utilities\") pod \"redhat-operators-wcgkr\" (UID: \"964e423d-b9e8-4f29-af5d-84b106ae8159\") " pod="openshift-marketplace/redhat-operators-wcgkr"
Mar 13 12:47:00 crc kubenswrapper[4632]: I0313 12:47:00.105932 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/964e423d-b9e8-4f29-af5d-84b106ae8159-catalog-content\") pod \"redhat-operators-wcgkr\" (UID: \"964e423d-b9e8-4f29-af5d-84b106ae8159\") " pod="openshift-marketplace/redhat-operators-wcgkr"
Mar 13 12:47:00 crc kubenswrapper[4632]: I0313 12:47:00.144970 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4frc4\" (UniqueName: \"kubernetes.io/projected/964e423d-b9e8-4f29-af5d-84b106ae8159-kube-api-access-4frc4\") pod \"redhat-operators-wcgkr\" (UID: \"964e423d-b9e8-4f29-af5d-84b106ae8159\") " pod="openshift-marketplace/redhat-operators-wcgkr"
\"kube-api-access-4frc4\" (UniqueName: \"kubernetes.io/projected/964e423d-b9e8-4f29-af5d-84b106ae8159-kube-api-access-4frc4\") pod \"redhat-operators-wcgkr\" (UID: \"964e423d-b9e8-4f29-af5d-84b106ae8159\") " pod="openshift-marketplace/redhat-operators-wcgkr" Mar 13 12:47:00 crc kubenswrapper[4632]: I0313 12:47:00.217578 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wcgkr" Mar 13 12:47:00 crc kubenswrapper[4632]: I0313 12:47:00.342806 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m_13abf84a-b499-4439-ab4e-1c34bcf07308/util/0.log" Mar 13 12:47:00 crc kubenswrapper[4632]: I0313 12:47:00.523090 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m_13abf84a-b499-4439-ab4e-1c34bcf07308/util/0.log" Mar 13 12:47:00 crc kubenswrapper[4632]: I0313 12:47:00.820222 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m_13abf84a-b499-4439-ab4e-1c34bcf07308/pull/0.log" Mar 13 12:47:00 crc kubenswrapper[4632]: I0313 12:47:00.953319 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m_13abf84a-b499-4439-ab4e-1c34bcf07308/pull/0.log" Mar 13 12:47:01 crc kubenswrapper[4632]: I0313 12:47:01.004445 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wcgkr"] Mar 13 12:47:01 crc kubenswrapper[4632]: I0313 12:47:01.181617 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m_13abf84a-b499-4439-ab4e-1c34bcf07308/util/0.log" Mar 13 12:47:01 crc kubenswrapper[4632]: I0313 12:47:01.326302 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m_13abf84a-b499-4439-ab4e-1c34bcf07308/pull/0.log" Mar 13 12:47:01 crc kubenswrapper[4632]: I0313 12:47:01.522536 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cdce3512d6dd40d0fa1f5a9460ef5ddb632791dec16c7770c73458169cskc7m_13abf84a-b499-4439-ab4e-1c34bcf07308/extract/0.log" Mar 13 12:47:01 crc kubenswrapper[4632]: I0313 12:47:01.756130 4632 generic.go:334] "Generic (PLEG): container finished" podID="964e423d-b9e8-4f29-af5d-84b106ae8159" containerID="4814d54398dddc7491e0a4c9f868011d9742ed90e2d05a245b23e35c28be791e" exitCode=0 Mar 13 12:47:01 crc kubenswrapper[4632]: I0313 12:47:01.756170 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcgkr" event={"ID":"964e423d-b9e8-4f29-af5d-84b106ae8159","Type":"ContainerDied","Data":"4814d54398dddc7491e0a4c9f868011d9742ed90e2d05a245b23e35c28be791e"} Mar 13 12:47:01 crc kubenswrapper[4632]: I0313 12:47:01.756197 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcgkr" event={"ID":"964e423d-b9e8-4f29-af5d-84b106ae8159","Type":"ContainerStarted","Data":"38d0595c24aa0a1cd0e1355e6ab1c56e237ac066c9da90382b7cbb6e3fba6db4"} Mar 13 12:47:02 crc kubenswrapper[4632]: I0313 12:47:02.293316 4632 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66d56f6ff4-cfcgn_75d652c7-8521-4039-913a-fa625f89b094/manager/0.log" Mar 13 12:47:03 crc kubenswrapper[4632]: I0313 12:47:03.065068 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5964f64c48-qg79l_20f92131-aca4-41ea-9144-a23bd9216f49/manager/0.log" Mar 13 12:47:03 crc kubenswrapper[4632]: I0313 12:47:03.784800 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcgkr" event={"ID":"964e423d-b9e8-4f29-af5d-84b106ae8159","Type":"ContainerStarted","Data":"219da7f0798a57241a68dc972a8e6cf63665a59509f96ff776be2e82e493c3c5"} Mar 13 12:47:03 crc kubenswrapper[4632]: I0313 12:47:03.960400 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-77b6666d85-cgh6c_ff6d4dcb-9eb8-44fc-951e-f2aecd77a639/manager/0.log" Mar 13 12:47:04 crc kubenswrapper[4632]: I0313 12:47:04.760448 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d9d6b584d-2rv7s_9a963f9c-ac58-4e21-abfa-fca1279a192d/manager/0.log" Mar 13 12:47:06 crc kubenswrapper[4632]: I0313 12:47:06.153128 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6bbb499bbc-wtzrw_c8fc6f03-c43b-4ade-92a8-acc5537a4eeb/manager/0.log" Mar 13 12:47:06 crc kubenswrapper[4632]: I0313 12:47:06.580293 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5995f4446f-flfxh_1542a9c8-92f6-4bc9-8231-829f649b0b8f/manager/0.log" Mar 13 12:47:06 crc kubenswrapper[4632]: I0313 12:47:06.957581 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-984cd4dcf-f6c87_3f3a462e-4d89-45b3-8611-181aca5f8558/manager/0.log" Mar 13 12:47:06 crc kubenswrapper[4632]: I0313 12:47:06.974825 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-6nb82_f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda/manager/1.log" Mar 13 12:47:07 crc kubenswrapper[4632]: I0313 12:47:07.142238 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-6nb82_f0be4a6b-e3ac-4141-b5bf-b3fafcca5fda/manager/0.log" Mar 13 12:47:07 crc kubenswrapper[4632]: I0313 12:47:07.579928 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-68f45f9d9f-sxw8d_7b491335-6a73-46de-8098-f27ff4c6f795/manager/0.log" Mar 13 12:47:08 crc kubenswrapper[4632]: I0313 12:47:08.001848 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-658d4cdd5-szd7c_9040a0e0-2a56-4331-ba50-b19ff05ef0c0/manager/0.log" Mar 13 12:47:08 crc kubenswrapper[4632]: I0313 12:47:08.193520 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-776c5696bf-bkmbn_c33d0da9-5a04-42d6-80d3-2f558b4a90b0/manager/0.log" Mar 13 12:47:08 crc kubenswrapper[4632]: I0313 12:47:08.609551 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4f55cb5c-62gpm_9e1d6ac6-c4ee-4381-86b8-c337f8c2d6a5/manager/0.log" Mar 13 12:47:08 crc kubenswrapper[4632]: I0313 12:47:08.679318 4632 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-569cc54c5-628ss_d04e9aa6-f234-4ffa-81e2-1a2407addb77/manager/0.log" Mar 13 12:47:08 crc kubenswrapper[4632]: I0313 12:47:08.868022 4632 generic.go:334] "Generic (PLEG): container finished" podID="964e423d-b9e8-4f29-af5d-84b106ae8159" containerID="219da7f0798a57241a68dc972a8e6cf63665a59509f96ff776be2e82e493c3c5" exitCode=0 Mar 13 12:47:08 crc kubenswrapper[4632]: I0313 12:47:08.868069 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcgkr" event={"ID":"964e423d-b9e8-4f29-af5d-84b106ae8159","Type":"ContainerDied","Data":"219da7f0798a57241a68dc972a8e6cf63665a59509f96ff776be2e82e493c3c5"} Mar 13 12:47:08 crc kubenswrapper[4632]: I0313 12:47:08.911837 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-557ccf57b7v927j_2d221857-ee77-4165-a351-ecd5fc424970/manager/0.log" Mar 13 12:47:09 crc kubenswrapper[4632]: I0313 12:47:09.702639 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-865685cd99-ls9jq_82fe7ef6-50a5-41d4-9419-787812e16bd6/operator/0.log" Mar 13 12:47:09 crc kubenswrapper[4632]: I0313 12:47:09.870017 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-2jqnk_7de02b7f-4e1c-4ba1-9659-c864e9080092/registry-server/1.log" Mar 13 12:47:09 crc kubenswrapper[4632]: I0313 12:47:09.890435 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcgkr" event={"ID":"964e423d-b9e8-4f29-af5d-84b106ae8159","Type":"ContainerStarted","Data":"537f35c912c9c8057ecc3ec80663f0a0d3c386360eb374b59f3e50a3f8bd59ee"} Mar 13 12:47:09 crc kubenswrapper[4632]: I0313 12:47:09.916121 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wcgkr" podStartSLOduration=3.329566062 podStartE2EDuration="10.916101569s" podCreationTimestamp="2026-03-13 12:46:59 +0000 UTC" firstStartedPulling="2026-03-13 12:47:01.760069158 +0000 UTC m=+9795.782599291" lastFinishedPulling="2026-03-13 12:47:09.346604665 +0000 UTC m=+9803.369134798" observedRunningTime="2026-03-13 12:47:09.911336962 +0000 UTC m=+9803.933867105" watchObservedRunningTime="2026-03-13 12:47:09.916101569 +0000 UTC m=+9803.938631702" Mar 13 12:47:09 crc kubenswrapper[4632]: I0313 12:47:09.966344 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-2jqnk_7de02b7f-4e1c-4ba1-9659-c864e9080092/registry-server/0.log" Mar 13 12:47:10 crc kubenswrapper[4632]: I0313 12:47:10.218507 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wcgkr" Mar 13 12:47:10 crc kubenswrapper[4632]: I0313 12:47:10.218572 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wcgkr" Mar 13 12:47:10 crc kubenswrapper[4632]: I0313 12:47:10.407047 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bbc5b68f9-4m8kf_0a9d48f4-d68b-4ef9-826e-ed619c761405/manager/0.log" Mar 13 12:47:10 crc kubenswrapper[4632]: I0313 12:47:10.974538 4632 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-574d45c66c-qkr9n_e66fe20e-05b5-42cd-ac1d-bc4eaee4c8e5/manager/0.log" Mar 13 12:47:11 crc kubenswrapper[4632]: I0313 12:47:11.276240 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wcgkr" podUID="964e423d-b9e8-4f29-af5d-84b106ae8159" containerName="registry-server" probeResult="failure" output=< Mar 13 12:47:11 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:47:11 crc kubenswrapper[4632]: > Mar 13 12:47:11 crc kubenswrapper[4632]: I0313 12:47:11.283256 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2lzt8_daba1153-3b28-4234-8dd0-ec20160abbfe/operator/0.log" Mar 13 12:47:11 crc kubenswrapper[4632]: I0313 12:47:11.558423 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-677c674df7-qbfg2_2d8a9f3a-6631-4c1e-8381-3bc313837ca0/manager/0.log" Mar 13 12:47:11 crc kubenswrapper[4632]: I0313 12:47:11.824736 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-677bd678f7-wj9qs_68c5eb80-4214-42c5-a08d-de6012969621/manager/0.log" Mar 13 12:47:12 crc kubenswrapper[4632]: I0313 12:47:12.131198 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6cd66dbd4b-nt7np_ee081327-4c3f-4c0a-9085-71085c6487b5/manager/0.log" Mar 13 12:47:12 crc kubenswrapper[4632]: I0313 12:47:12.257137 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-jwrgq_7bab78c8-7dac-48dc-a426-ccd4ae00a428/manager/0.log" Mar 13 12:47:12 crc kubenswrapper[4632]: I0313 12:47:12.408415 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-85c677895b-thbc4_3fdb377f-5a78-4687-82e1-50718514290d/manager/0.log" Mar 13 12:47:12 crc kubenswrapper[4632]: I0313 12:47:12.502302 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6dd88c6f67-kv8b2_e0d1d349-d63d-498b-ae15-3121f9ae73f8/manager/0.log" Mar 13 12:47:18 crc kubenswrapper[4632]: I0313 12:47:18.289894 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-677bd678f7-wj9qs_68c5eb80-4214-42c5-a08d-de6012969621/manager/1.log" Mar 13 12:47:21 crc kubenswrapper[4632]: I0313 12:47:21.269244 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wcgkr" podUID="964e423d-b9e8-4f29-af5d-84b106ae8159" containerName="registry-server" probeResult="failure" output=< Mar 13 12:47:21 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:47:21 crc kubenswrapper[4632]: > Mar 13 12:47:31 crc kubenswrapper[4632]: I0313 12:47:31.275643 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wcgkr" podUID="964e423d-b9e8-4f29-af5d-84b106ae8159" containerName="registry-server" probeResult="failure" output=< Mar 13 12:47:31 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:47:31 crc kubenswrapper[4632]: > Mar 13 12:47:39 crc kubenswrapper[4632]: I0313 12:47:39.810107 4632 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pvwll_2332524f-f990-4ef2-90b3-8b90c389d873/control-plane-machine-set-operator/0.log" Mar 13 12:47:40 crc kubenswrapper[4632]: I0313 12:47:40.347407 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-c6jnc_275c3112-6912-49f8-9d3f-8147662fb99f/kube-rbac-proxy/0.log" Mar 13 12:47:40 crc kubenswrapper[4632]: I0313 12:47:40.348769 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-c6jnc_275c3112-6912-49f8-9d3f-8147662fb99f/machine-api-operator/0.log" Mar 13 12:47:41 crc kubenswrapper[4632]: I0313 12:47:41.283430 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wcgkr" podUID="964e423d-b9e8-4f29-af5d-84b106ae8159" containerName="registry-server" probeResult="failure" output=< Mar 13 12:47:41 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:47:41 crc kubenswrapper[4632]: > Mar 13 12:47:51 crc kubenswrapper[4632]: I0313 12:47:51.596910 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wcgkr" podUID="964e423d-b9e8-4f29-af5d-84b106ae8159" containerName="registry-server" probeResult="failure" output=< Mar 13 12:47:51 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:47:51 crc kubenswrapper[4632]: > Mar 13 12:47:57 crc kubenswrapper[4632]: I0313 12:47:57.340618 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-kh4n9_43729a96-008f-4af6-ba0d-d52f2f179c0b/cert-manager-controller/0.log" Mar 13 12:47:57 crc kubenswrapper[4632]: I0313 12:47:57.598231 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-xg2df_348f2814-4e97-4ec5-bcbb-35a868955687/cert-manager-cainjector/0.log" Mar 13 12:47:57 crc kubenswrapper[4632]: I0313 12:47:57.667583 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-tjkbb_a0d52d98-fe87-4bc8-890e-5c5efb1f30d6/cert-manager-webhook/0.log" Mar 13 12:48:00 crc kubenswrapper[4632]: I0313 12:48:00.302845 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wcgkr" Mar 13 12:48:00 crc kubenswrapper[4632]: I0313 12:48:00.406264 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wcgkr" Mar 13 12:48:00 crc kubenswrapper[4632]: I0313 12:48:00.464411 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556768-qx4vs"] Mar 13 12:48:00 crc kubenswrapper[4632]: I0313 12:48:00.483366 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556768-qx4vs" Mar 13 12:48:00 crc kubenswrapper[4632]: I0313 12:48:00.488156 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556768-qx4vs"] Mar 13 12:48:00 crc kubenswrapper[4632]: I0313 12:48:00.503203 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:48:00 crc kubenswrapper[4632]: I0313 12:48:00.503169 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:48:00 crc kubenswrapper[4632]: I0313 12:48:00.503426 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:48:00 crc kubenswrapper[4632]: I0313 12:48:00.607695 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wcgkr"] Mar 13 12:48:00 crc kubenswrapper[4632]: I0313 12:48:00.630283 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsv78\" (UniqueName: \"kubernetes.io/projected/7c0a936c-9a75-4f0b-81b1-fb7f74d9911f-kube-api-access-xsv78\") pod \"auto-csr-approver-29556768-qx4vs\" (UID: \"7c0a936c-9a75-4f0b-81b1-fb7f74d9911f\") " pod="openshift-infra/auto-csr-approver-29556768-qx4vs" Mar 13 12:48:00 crc kubenswrapper[4632]: I0313 12:48:00.732012 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsv78\" (UniqueName: \"kubernetes.io/projected/7c0a936c-9a75-4f0b-81b1-fb7f74d9911f-kube-api-access-xsv78\") pod \"auto-csr-approver-29556768-qx4vs\" (UID: \"7c0a936c-9a75-4f0b-81b1-fb7f74d9911f\") " pod="openshift-infra/auto-csr-approver-29556768-qx4vs" Mar 13 12:48:01 crc kubenswrapper[4632]: I0313 12:48:01.365799 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsv78\" (UniqueName: \"kubernetes.io/projected/7c0a936c-9a75-4f0b-81b1-fb7f74d9911f-kube-api-access-xsv78\") pod \"auto-csr-approver-29556768-qx4vs\" (UID: \"7c0a936c-9a75-4f0b-81b1-fb7f74d9911f\") " pod="openshift-infra/auto-csr-approver-29556768-qx4vs" Mar 13 12:48:01 crc kubenswrapper[4632]: I0313 12:48:01.425310 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556768-qx4vs" Mar 13 12:48:01 crc kubenswrapper[4632]: I0313 12:48:01.426187 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wcgkr" podUID="964e423d-b9e8-4f29-af5d-84b106ae8159" containerName="registry-server" containerID="cri-o://537f35c912c9c8057ecc3ec80663f0a0d3c386360eb374b59f3e50a3f8bd59ee" gracePeriod=2 Mar 13 12:48:02 crc kubenswrapper[4632]: I0313 12:48:02.437410 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcgkr" event={"ID":"964e423d-b9e8-4f29-af5d-84b106ae8159","Type":"ContainerDied","Data":"537f35c912c9c8057ecc3ec80663f0a0d3c386360eb374b59f3e50a3f8bd59ee"} Mar 13 12:48:02 crc kubenswrapper[4632]: I0313 12:48:02.438248 4632 generic.go:334] "Generic (PLEG): container finished" podID="964e423d-b9e8-4f29-af5d-84b106ae8159" containerID="537f35c912c9c8057ecc3ec80663f0a0d3c386360eb374b59f3e50a3f8bd59ee" exitCode=0 Mar 13 12:48:02 crc kubenswrapper[4632]: I0313 12:48:02.941584 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556768-qx4vs"] Mar 13 12:48:03 crc kubenswrapper[4632]: I0313 12:48:03.286334 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 12:48:03 crc kubenswrapper[4632]: I0313 12:48:03.449892 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556768-qx4vs" event={"ID":"7c0a936c-9a75-4f0b-81b1-fb7f74d9911f","Type":"ContainerStarted","Data":"d34eb978076dffd0023033a08fbd76563d781881681a241dbff3f3a50aa7ac78"} Mar 13 12:48:03 crc kubenswrapper[4632]: I0313 12:48:03.454241 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcgkr" event={"ID":"964e423d-b9e8-4f29-af5d-84b106ae8159","Type":"ContainerDied","Data":"38d0595c24aa0a1cd0e1355e6ab1c56e237ac066c9da90382b7cbb6e3fba6db4"} Mar 13 12:48:03 crc kubenswrapper[4632]: I0313 12:48:03.454862 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38d0595c24aa0a1cd0e1355e6ab1c56e237ac066c9da90382b7cbb6e3fba6db4" Mar 13 12:48:03 crc kubenswrapper[4632]: I0313 12:48:03.485535 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wcgkr" Mar 13 12:48:03 crc kubenswrapper[4632]: I0313 12:48:03.491874 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4frc4\" (UniqueName: \"kubernetes.io/projected/964e423d-b9e8-4f29-af5d-84b106ae8159-kube-api-access-4frc4\") pod \"964e423d-b9e8-4f29-af5d-84b106ae8159\" (UID: \"964e423d-b9e8-4f29-af5d-84b106ae8159\") " Mar 13 12:48:03 crc kubenswrapper[4632]: I0313 12:48:03.491986 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/964e423d-b9e8-4f29-af5d-84b106ae8159-utilities\") pod \"964e423d-b9e8-4f29-af5d-84b106ae8159\" (UID: \"964e423d-b9e8-4f29-af5d-84b106ae8159\") " Mar 13 12:48:03 crc kubenswrapper[4632]: I0313 12:48:03.492081 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/964e423d-b9e8-4f29-af5d-84b106ae8159-catalog-content\") pod \"964e423d-b9e8-4f29-af5d-84b106ae8159\" (UID: \"964e423d-b9e8-4f29-af5d-84b106ae8159\") " Mar 13 12:48:03 crc kubenswrapper[4632]: I0313 12:48:03.494499 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/964e423d-b9e8-4f29-af5d-84b106ae8159-utilities" (OuterVolumeSpecName: "utilities") pod "964e423d-b9e8-4f29-af5d-84b106ae8159" (UID: "964e423d-b9e8-4f29-af5d-84b106ae8159"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:48:03 crc kubenswrapper[4632]: I0313 12:48:03.510177 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/964e423d-b9e8-4f29-af5d-84b106ae8159-kube-api-access-4frc4" (OuterVolumeSpecName: "kube-api-access-4frc4") pod "964e423d-b9e8-4f29-af5d-84b106ae8159" (UID: "964e423d-b9e8-4f29-af5d-84b106ae8159"). InnerVolumeSpecName "kube-api-access-4frc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:48:03 crc kubenswrapper[4632]: I0313 12:48:03.598584 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4frc4\" (UniqueName: \"kubernetes.io/projected/964e423d-b9e8-4f29-af5d-84b106ae8159-kube-api-access-4frc4\") on node \"crc\" DevicePath \"\"" Mar 13 12:48:03 crc kubenswrapper[4632]: I0313 12:48:03.598824 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/964e423d-b9e8-4f29-af5d-84b106ae8159-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 12:48:03 crc kubenswrapper[4632]: I0313 12:48:03.657731 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/964e423d-b9e8-4f29-af5d-84b106ae8159-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "964e423d-b9e8-4f29-af5d-84b106ae8159" (UID: "964e423d-b9e8-4f29-af5d-84b106ae8159"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:48:03 crc kubenswrapper[4632]: I0313 12:48:03.700999 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/964e423d-b9e8-4f29-af5d-84b106ae8159-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 12:48:04 crc kubenswrapper[4632]: I0313 12:48:04.462315 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wcgkr" Mar 13 12:48:04 crc kubenswrapper[4632]: I0313 12:48:04.504074 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wcgkr"] Mar 13 12:48:04 crc kubenswrapper[4632]: I0313 12:48:04.517652 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wcgkr"] Mar 13 12:48:05 crc kubenswrapper[4632]: I0313 12:48:05.473688 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556768-qx4vs" event={"ID":"7c0a936c-9a75-4f0b-81b1-fb7f74d9911f","Type":"ContainerStarted","Data":"fbdf7412c66e2fa539b75629b05076618e8fad2c845d05a467fd575d619baa55"} Mar 13 12:48:05 crc kubenswrapper[4632]: I0313 12:48:05.494457 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556768-qx4vs" podStartSLOduration=4.593300179 podStartE2EDuration="5.493558994s" podCreationTimestamp="2026-03-13 12:48:00 +0000 UTC" firstStartedPulling="2026-03-13 12:48:03.280838457 +0000 UTC m=+9857.303368580" lastFinishedPulling="2026-03-13 12:48:04.181097262 +0000 UTC m=+9858.203627395" observedRunningTime="2026-03-13 12:48:05.489484404 +0000 UTC m=+9859.512014537" watchObservedRunningTime="2026-03-13 12:48:05.493558994 +0000 UTC m=+9859.516089137" Mar 13 12:48:06 crc kubenswrapper[4632]: I0313 12:48:06.058140 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="964e423d-b9e8-4f29-af5d-84b106ae8159" path="/var/lib/kubelet/pods/964e423d-b9e8-4f29-af5d-84b106ae8159/volumes" Mar 13 12:48:06 crc kubenswrapper[4632]: I0313 12:48:06.485146 4632 generic.go:334] "Generic (PLEG): container finished" podID="7c0a936c-9a75-4f0b-81b1-fb7f74d9911f" containerID="fbdf7412c66e2fa539b75629b05076618e8fad2c845d05a467fd575d619baa55" exitCode=0 Mar 13 12:48:06 crc kubenswrapper[4632]: I0313 12:48:06.485208 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556768-qx4vs" event={"ID":"7c0a936c-9a75-4f0b-81b1-fb7f74d9911f","Type":"ContainerDied","Data":"fbdf7412c66e2fa539b75629b05076618e8fad2c845d05a467fd575d619baa55"} Mar 13 12:48:07 crc kubenswrapper[4632]: I0313 12:48:07.903816 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556768-qx4vs" Mar 13 12:48:08 crc kubenswrapper[4632]: I0313 12:48:08.074357 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsv78\" (UniqueName: \"kubernetes.io/projected/7c0a936c-9a75-4f0b-81b1-fb7f74d9911f-kube-api-access-xsv78\") pod \"7c0a936c-9a75-4f0b-81b1-fb7f74d9911f\" (UID: \"7c0a936c-9a75-4f0b-81b1-fb7f74d9911f\") " Mar 13 12:48:08 crc kubenswrapper[4632]: I0313 12:48:08.095912 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0a936c-9a75-4f0b-81b1-fb7f74d9911f-kube-api-access-xsv78" (OuterVolumeSpecName: "kube-api-access-xsv78") pod "7c0a936c-9a75-4f0b-81b1-fb7f74d9911f" (UID: "7c0a936c-9a75-4f0b-81b1-fb7f74d9911f"). InnerVolumeSpecName "kube-api-access-xsv78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:48:08 crc kubenswrapper[4632]: I0313 12:48:08.176733 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsv78\" (UniqueName: \"kubernetes.io/projected/7c0a936c-9a75-4f0b-81b1-fb7f74d9911f-kube-api-access-xsv78\") on node \"crc\" DevicePath \"\"" Mar 13 12:48:08 crc kubenswrapper[4632]: I0313 12:48:08.504208 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556768-qx4vs" event={"ID":"7c0a936c-9a75-4f0b-81b1-fb7f74d9911f","Type":"ContainerDied","Data":"d34eb978076dffd0023033a08fbd76563d781881681a241dbff3f3a50aa7ac78"} Mar 13 12:48:08 crc kubenswrapper[4632]: I0313 12:48:08.504546 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d34eb978076dffd0023033a08fbd76563d781881681a241dbff3f3a50aa7ac78" Mar 13 12:48:08 crc kubenswrapper[4632]: I0313 12:48:08.504281 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556768-qx4vs" Mar 13 12:48:08 crc kubenswrapper[4632]: I0313 12:48:08.579926 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556762-mjtxh"] Mar 13 12:48:08 crc kubenswrapper[4632]: I0313 12:48:08.589524 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556762-mjtxh"] Mar 13 12:48:10 crc kubenswrapper[4632]: I0313 12:48:10.057545 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae" path="/var/lib/kubelet/pods/cf13c17d-1ea6-4a0e-bfbd-e3bfc8d453ae/volumes" Mar 13 12:48:10 crc kubenswrapper[4632]: I0313 12:48:10.460917 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:48:10 crc kubenswrapper[4632]: I0313 12:48:10.461669 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:48:16 crc kubenswrapper[4632]: I0313 12:48:16.492471 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-kzrvn_1ca5cae6-5549-492a-a257-745bb41d3574/nmstate-console-plugin/0.log" Mar 13 12:48:16 crc kubenswrapper[4632]: I0313 12:48:16.794970 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mpfnk_33445a2b-7fa8-4198-a60a-09caeb69b8ed/nmstate-handler/0.log" Mar 13 12:48:16 crc kubenswrapper[4632]: I0313 12:48:16.990905 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-lnfrw_0c63c4bc-5c1a-4af0-b255-eb418d8a02cd/kube-rbac-proxy/0.log" Mar 13 12:48:17 crc kubenswrapper[4632]: I0313 12:48:17.032507 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-lnfrw_0c63c4bc-5c1a-4af0-b255-eb418d8a02cd/nmstate-metrics/0.log" Mar 13 12:48:17 crc kubenswrapper[4632]: I0313 12:48:17.159502 4632 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-bzmdv_3b679db2-06cc-4796-945a-5ced45b39053/nmstate-operator/0.log" Mar 13 12:48:17 crc kubenswrapper[4632]: I0313 12:48:17.283645 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-gcngd_9bf11778-d854-4c97-acd1-ed4822ee5f47/nmstate-webhook/0.log" Mar 13 12:48:30 crc kubenswrapper[4632]: I0313 12:48:30.594131 4632 scope.go:117] "RemoveContainer" containerID="8dfadb29bc36e882b8d8ebf6016fec294107233b5f8602de74595b7d612d371c" Mar 13 12:48:40 crc kubenswrapper[4632]: I0313 12:48:40.463153 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:48:40 crc kubenswrapper[4632]: I0313 12:48:40.467579 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:48:53 crc kubenswrapper[4632]: I0313 12:48:53.628247 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-62bwr_277ddd7f-fd9c-4b27-9563-c904f1dffd40/controller/0.log" Mar 13 12:48:53 crc kubenswrapper[4632]: I0313 12:48:53.653932 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-62bwr_277ddd7f-fd9c-4b27-9563-c904f1dffd40/kube-rbac-proxy/0.log" Mar 13 12:48:53 crc kubenswrapper[4632]: I0313 12:48:53.921397 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/cp-frr-files/0.log" Mar 13 12:48:54 crc kubenswrapper[4632]: I0313 12:48:54.154237 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/cp-frr-files/0.log" Mar 13 12:48:54 crc kubenswrapper[4632]: I0313 12:48:54.178001 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/cp-metrics/0.log" Mar 13 12:48:54 crc kubenswrapper[4632]: I0313 12:48:54.205110 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/cp-reloader/0.log" Mar 13 12:48:54 crc kubenswrapper[4632]: I0313 12:48:54.214783 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/cp-reloader/0.log" Mar 13 12:48:54 crc kubenswrapper[4632]: I0313 12:48:54.710685 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/cp-metrics/0.log" Mar 13 12:48:54 crc kubenswrapper[4632]: I0313 12:48:54.754788 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/cp-frr-files/0.log" Mar 13 12:48:54 crc kubenswrapper[4632]: I0313 12:48:54.787283 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/cp-reloader/0.log" Mar 13 12:48:54 crc kubenswrapper[4632]: I0313 12:48:54.855837 4632 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/cp-metrics/0.log" Mar 13 12:48:55 crc kubenswrapper[4632]: I0313 12:48:55.081317 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/cp-frr-files/0.log" Mar 13 12:48:55 crc kubenswrapper[4632]: I0313 12:48:55.169279 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/cp-reloader/0.log" Mar 13 12:48:55 crc kubenswrapper[4632]: I0313 12:48:55.232813 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/controller/1.log" Mar 13 12:48:55 crc kubenswrapper[4632]: I0313 12:48:55.310028 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/cp-metrics/0.log" Mar 13 12:48:55 crc kubenswrapper[4632]: I0313 12:48:55.469839 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/controller/0.log" Mar 13 12:48:55 crc kubenswrapper[4632]: I0313 12:48:55.635915 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/frr-metrics/0.log" Mar 13 12:48:55 crc kubenswrapper[4632]: I0313 12:48:55.883806 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/kube-rbac-proxy/0.log" Mar 13 12:48:55 crc kubenswrapper[4632]: I0313 12:48:55.995441 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/kube-rbac-proxy-frr/0.log" Mar 13 12:48:56 crc kubenswrapper[4632]: I0313 12:48:56.369821 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/reloader/0.log" Mar 13 12:48:56 crc kubenswrapper[4632]: I0313 12:48:56.775950 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-9zbh8_b33bccd8-6f28-4ffe-9500-069a52aab5df/frr-k8s-webhook-server/0.log" Mar 13 12:48:56 crc kubenswrapper[4632]: I0313 12:48:56.782750 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-9zbh8_b33bccd8-6f28-4ffe-9500-069a52aab5df/frr-k8s-webhook-server/1.log" Mar 13 12:48:57 crc kubenswrapper[4632]: I0313 12:48:57.275921 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-ffdcc767b-qxvlq_e62d674f-5b2c-4788-85a3-95b51621dbef/manager/0.log" Mar 13 12:48:57 crc kubenswrapper[4632]: I0313 12:48:57.621626 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6c7bf5ddc5-v6t5l_712b2002-4fce-4983-926a-99a4b2dc7a8c/webhook-server/0.log" Mar 13 12:48:57 crc kubenswrapper[4632]: I0313 12:48:57.931766 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tztd9_8f51973a-596d-40dc-9b5b-b2c95a60ea0c/kube-rbac-proxy/0.log" Mar 13 12:48:57 crc kubenswrapper[4632]: I0313 12:48:57.974576 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/frr/1.log" Mar 13 12:48:58 crc kubenswrapper[4632]: I0313 12:48:58.528718 4632 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-tztd9_8f51973a-596d-40dc-9b5b-b2c95a60ea0c/speaker/1.log" Mar 13 12:48:58 crc kubenswrapper[4632]: I0313 12:48:58.862732 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lvlxj_85b58bb0-63f5-4c85-8759-ce28d2c7db58/frr/0.log" Mar 13 12:48:58 crc kubenswrapper[4632]: I0313 12:48:58.885215 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tztd9_8f51973a-596d-40dc-9b5b-b2c95a60ea0c/speaker/0.log" Mar 13 12:49:10 crc kubenswrapper[4632]: I0313 12:49:10.460857 4632 patch_prober.go:28] interesting pod/machine-config-daemon-zkscb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:49:10 crc kubenswrapper[4632]: I0313 12:49:10.461329 4632 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:49:10 crc kubenswrapper[4632]: I0313 12:49:10.462849 4632 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" Mar 13 12:49:10 crc kubenswrapper[4632]: I0313 12:49:10.466132 4632 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086"} pod="openshift-machine-config-operator/machine-config-daemon-zkscb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 12:49:10 crc kubenswrapper[4632]: I0313 12:49:10.466214 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerName="machine-config-daemon" containerID="cri-o://f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" gracePeriod=600 Mar 13 12:49:10 crc kubenswrapper[4632]: E0313 12:49:10.608650 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:49:11 crc kubenswrapper[4632]: I0313 12:49:11.131972 4632 generic.go:334] "Generic (PLEG): container finished" podID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" exitCode=0 Mar 13 12:49:11 crc kubenswrapper[4632]: I0313 12:49:11.132306 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerDied","Data":"f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086"} Mar 13 12:49:11 crc kubenswrapper[4632]: I0313 12:49:11.133059 4632 scope.go:117] "RemoveContainer" 
containerID="ac0aa587db0bc6f14a810b1c0a407933497eafb76c5051481d9814592d0380b3" Mar 13 12:49:11 crc kubenswrapper[4632]: I0313 12:49:11.133481 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:49:11 crc kubenswrapper[4632]: E0313 12:49:11.133754 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:49:16 crc kubenswrapper[4632]: I0313 12:49:16.321881 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg_2e270cfe-55fc-4855-87ff-4313a0ad319c/util/0.log" Mar 13 12:49:16 crc kubenswrapper[4632]: I0313 12:49:16.602175 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg_2e270cfe-55fc-4855-87ff-4313a0ad319c/util/0.log" Mar 13 12:49:16 crc kubenswrapper[4632]: I0313 12:49:16.622254 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg_2e270cfe-55fc-4855-87ff-4313a0ad319c/pull/0.log" Mar 13 12:49:16 crc kubenswrapper[4632]: I0313 12:49:16.675583 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg_2e270cfe-55fc-4855-87ff-4313a0ad319c/pull/0.log" Mar 13 12:49:16 crc kubenswrapper[4632]: I0313 12:49:16.891193 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg_2e270cfe-55fc-4855-87ff-4313a0ad319c/util/0.log" Mar 13 12:49:16 crc kubenswrapper[4632]: I0313 12:49:16.945130 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg_2e270cfe-55fc-4855-87ff-4313a0ad319c/pull/0.log" Mar 13 12:49:16 crc kubenswrapper[4632]: I0313 12:49:16.970364 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874pdtfg_2e270cfe-55fc-4855-87ff-4313a0ad319c/extract/0.log" Mar 13 12:49:17 crc kubenswrapper[4632]: I0313 12:49:17.156687 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj_8c1e4d78-3f38-48b5-b157-a1a076f31b76/util/0.log" Mar 13 12:49:17 crc kubenswrapper[4632]: I0313 12:49:17.431413 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj_8c1e4d78-3f38-48b5-b157-a1a076f31b76/util/0.log" Mar 13 12:49:17 crc kubenswrapper[4632]: I0313 12:49:17.500222 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj_8c1e4d78-3f38-48b5-b157-a1a076f31b76/pull/0.log" Mar 13 12:49:17 crc kubenswrapper[4632]: I0313 12:49:17.500674 4632 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj_8c1e4d78-3f38-48b5-b157-a1a076f31b76/pull/0.log" Mar 13 12:49:17 crc kubenswrapper[4632]: I0313 12:49:17.782904 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj_8c1e4d78-3f38-48b5-b157-a1a076f31b76/util/0.log" Mar 13 12:49:17 crc kubenswrapper[4632]: I0313 12:49:17.839569 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj_8c1e4d78-3f38-48b5-b157-a1a076f31b76/pull/0.log" Mar 13 12:49:17 crc kubenswrapper[4632]: I0313 12:49:17.871513 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1n2dcj_8c1e4d78-3f38-48b5-b157-a1a076f31b76/extract/0.log" Mar 13 12:49:18 crc kubenswrapper[4632]: I0313 12:49:18.009043 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7ksc5_0fa3faab-9e82-4fde-afff-3de6939a17d1/extract-utilities/0.log" Mar 13 12:49:18 crc kubenswrapper[4632]: I0313 12:49:18.299557 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7ksc5_0fa3faab-9e82-4fde-afff-3de6939a17d1/extract-utilities/0.log" Mar 13 12:49:18 crc kubenswrapper[4632]: I0313 12:49:18.325003 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7ksc5_0fa3faab-9e82-4fde-afff-3de6939a17d1/extract-content/0.log" Mar 13 12:49:18 crc kubenswrapper[4632]: I0313 12:49:18.325018 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7ksc5_0fa3faab-9e82-4fde-afff-3de6939a17d1/extract-content/0.log" Mar 13 12:49:18 crc kubenswrapper[4632]: I0313 12:49:18.504286 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7ksc5_0fa3faab-9e82-4fde-afff-3de6939a17d1/extract-utilities/0.log" Mar 13 12:49:18 crc kubenswrapper[4632]: I0313 12:49:18.557308 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7ksc5_0fa3faab-9e82-4fde-afff-3de6939a17d1/extract-content/0.log" Mar 13 12:49:18 crc kubenswrapper[4632]: I0313 12:49:18.955748 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c87c2_7bbc76a6-d812-41c7-a63b-09f6fdb37405/extract-utilities/0.log" Mar 13 12:49:19 crc kubenswrapper[4632]: I0313 12:49:19.155654 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c87c2_7bbc76a6-d812-41c7-a63b-09f6fdb37405/extract-utilities/0.log" Mar 13 12:49:19 crc kubenswrapper[4632]: I0313 12:49:19.213287 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c87c2_7bbc76a6-d812-41c7-a63b-09f6fdb37405/extract-content/0.log" Mar 13 12:49:19 crc kubenswrapper[4632]: I0313 12:49:19.401114 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c87c2_7bbc76a6-d812-41c7-a63b-09f6fdb37405/extract-content/0.log" Mar 13 12:49:19 crc kubenswrapper[4632]: I0313 12:49:19.676072 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c87c2_7bbc76a6-d812-41c7-a63b-09f6fdb37405/extract-utilities/0.log" Mar 13 
12:49:19 crc kubenswrapper[4632]: I0313 12:49:19.687851 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c87c2_7bbc76a6-d812-41c7-a63b-09f6fdb37405/extract-content/0.log" Mar 13 12:49:19 crc kubenswrapper[4632]: I0313 12:49:19.944439 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7ksc5_0fa3faab-9e82-4fde-afff-3de6939a17d1/registry-server/0.log" Mar 13 12:49:20 crc kubenswrapper[4632]: I0313 12:49:19.995090 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-d9n25_023be687-a773-401c-981b-e3d7136f53b6/marketplace-operator/0.log" Mar 13 12:49:20 crc kubenswrapper[4632]: I0313 12:49:20.460368 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c87c2_7bbc76a6-d812-41c7-a63b-09f6fdb37405/registry-server/0.log" Mar 13 12:49:20 crc kubenswrapper[4632]: I0313 12:49:20.470462 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gdt8x_f7f61b75-16bf-4c5a-be30-c88d155c203f/extract-utilities/0.log" Mar 13 12:49:20 crc kubenswrapper[4632]: I0313 12:49:20.998284 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gdt8x_f7f61b75-16bf-4c5a-be30-c88d155c203f/extract-utilities/0.log" Mar 13 12:49:21 crc kubenswrapper[4632]: I0313 12:49:21.008168 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gdt8x_f7f61b75-16bf-4c5a-be30-c88d155c203f/extract-content/0.log" Mar 13 12:49:21 crc kubenswrapper[4632]: I0313 12:49:21.044377 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gdt8x_f7f61b75-16bf-4c5a-be30-c88d155c203f/extract-content/0.log" Mar 13 12:49:21 crc kubenswrapper[4632]: I0313 12:49:21.205122 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gdt8x_f7f61b75-16bf-4c5a-be30-c88d155c203f/extract-utilities/0.log" Mar 13 12:49:21 crc kubenswrapper[4632]: I0313 12:49:21.257334 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gdt8x_f7f61b75-16bf-4c5a-be30-c88d155c203f/extract-content/0.log" Mar 13 12:49:21 crc kubenswrapper[4632]: I0313 12:49:21.527329 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gdt8x_f7f61b75-16bf-4c5a-be30-c88d155c203f/registry-server/0.log" Mar 13 12:49:21 crc kubenswrapper[4632]: I0313 12:49:21.596371 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vwgfr_f5cc71d2-1901-4778-8e20-93646cfc1a85/extract-utilities/0.log" Mar 13 12:49:21 crc kubenswrapper[4632]: I0313 12:49:21.823865 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vwgfr_f5cc71d2-1901-4778-8e20-93646cfc1a85/extract-content/0.log" Mar 13 12:49:21 crc kubenswrapper[4632]: I0313 12:49:21.842252 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vwgfr_f5cc71d2-1901-4778-8e20-93646cfc1a85/extract-utilities/0.log" Mar 13 12:49:21 crc kubenswrapper[4632]: I0313 12:49:21.877350 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vwgfr_f5cc71d2-1901-4778-8e20-93646cfc1a85/extract-content/0.log" Mar 13 12:49:22 crc 
kubenswrapper[4632]: I0313 12:49:22.073498 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vwgfr_f5cc71d2-1901-4778-8e20-93646cfc1a85/extract-utilities/0.log" Mar 13 12:49:22 crc kubenswrapper[4632]: I0313 12:49:22.079469 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vwgfr_f5cc71d2-1901-4778-8e20-93646cfc1a85/extract-content/0.log" Mar 13 12:49:23 crc kubenswrapper[4632]: I0313 12:49:23.645645 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vwgfr_f5cc71d2-1901-4778-8e20-93646cfc1a85/registry-server/0.log" Mar 13 12:49:24 crc kubenswrapper[4632]: I0313 12:49:24.045628 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:49:24 crc kubenswrapper[4632]: E0313 12:49:24.046175 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:49:38 crc kubenswrapper[4632]: I0313 12:49:38.055964 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:49:38 crc kubenswrapper[4632]: E0313 12:49:38.056806 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:49:49 crc kubenswrapper[4632]: I0313 12:49:49.044697 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:49:49 crc kubenswrapper[4632]: E0313 12:49:49.045424 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:49:50 crc kubenswrapper[4632]: E0313 12:49:50.109786 4632 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.182:37506->38.102.83.182:37465: read tcp 38.102.83.182:37506->38.102.83.182:37465: read: connection reset by peer Mar 13 12:50:00 crc kubenswrapper[4632]: I0313 12:50:00.617120 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556770-clndp"] Mar 13 12:50:00 crc kubenswrapper[4632]: E0313 12:50:00.623152 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964e423d-b9e8-4f29-af5d-84b106ae8159" containerName="extract-utilities" Mar 13 12:50:00 crc kubenswrapper[4632]: I0313 12:50:00.623190 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="964e423d-b9e8-4f29-af5d-84b106ae8159" containerName="extract-utilities" Mar 13 12:50:00 crc 
kubenswrapper[4632]: E0313 12:50:00.623219 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964e423d-b9e8-4f29-af5d-84b106ae8159" containerName="registry-server" Mar 13 12:50:00 crc kubenswrapper[4632]: I0313 12:50:00.623226 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="964e423d-b9e8-4f29-af5d-84b106ae8159" containerName="registry-server" Mar 13 12:50:00 crc kubenswrapper[4632]: E0313 12:50:00.623239 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0a936c-9a75-4f0b-81b1-fb7f74d9911f" containerName="oc" Mar 13 12:50:00 crc kubenswrapper[4632]: I0313 12:50:00.623245 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0a936c-9a75-4f0b-81b1-fb7f74d9911f" containerName="oc" Mar 13 12:50:00 crc kubenswrapper[4632]: E0313 12:50:00.623272 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964e423d-b9e8-4f29-af5d-84b106ae8159" containerName="extract-content" Mar 13 12:50:00 crc kubenswrapper[4632]: I0313 12:50:00.623278 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="964e423d-b9e8-4f29-af5d-84b106ae8159" containerName="extract-content" Mar 13 12:50:00 crc kubenswrapper[4632]: I0313 12:50:00.625552 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="964e423d-b9e8-4f29-af5d-84b106ae8159" containerName="registry-server" Mar 13 12:50:00 crc kubenswrapper[4632]: I0313 12:50:00.625585 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0a936c-9a75-4f0b-81b1-fb7f74d9911f" containerName="oc" Mar 13 12:50:00 crc kubenswrapper[4632]: I0313 12:50:00.633739 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556770-clndp" Mar 13 12:50:00 crc kubenswrapper[4632]: I0313 12:50:00.651074 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:50:00 crc kubenswrapper[4632]: I0313 12:50:00.651074 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:50:00 crc kubenswrapper[4632]: I0313 12:50:00.651083 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:50:00 crc kubenswrapper[4632]: I0313 12:50:00.739389 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556770-clndp"] Mar 13 12:50:00 crc kubenswrapper[4632]: I0313 12:50:00.781364 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xgqb\" (UniqueName: \"kubernetes.io/projected/40d97b55-9e4c-4d9a-962c-4030dc7dd36b-kube-api-access-6xgqb\") pod \"auto-csr-approver-29556770-clndp\" (UID: \"40d97b55-9e4c-4d9a-962c-4030dc7dd36b\") " pod="openshift-infra/auto-csr-approver-29556770-clndp" Mar 13 12:50:00 crc kubenswrapper[4632]: I0313 12:50:00.882748 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xgqb\" (UniqueName: \"kubernetes.io/projected/40d97b55-9e4c-4d9a-962c-4030dc7dd36b-kube-api-access-6xgqb\") pod \"auto-csr-approver-29556770-clndp\" (UID: \"40d97b55-9e4c-4d9a-962c-4030dc7dd36b\") " pod="openshift-infra/auto-csr-approver-29556770-clndp" Mar 13 12:50:00 crc kubenswrapper[4632]: I0313 12:50:00.932314 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xgqb\" (UniqueName: \"kubernetes.io/projected/40d97b55-9e4c-4d9a-962c-4030dc7dd36b-kube-api-access-6xgqb\") 
pod \"auto-csr-approver-29556770-clndp\" (UID: \"40d97b55-9e4c-4d9a-962c-4030dc7dd36b\") " pod="openshift-infra/auto-csr-approver-29556770-clndp" Mar 13 12:50:00 crc kubenswrapper[4632]: I0313 12:50:00.964411 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556770-clndp" Mar 13 12:50:02 crc kubenswrapper[4632]: I0313 12:50:02.044893 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:50:02 crc kubenswrapper[4632]: E0313 12:50:02.046389 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:50:02 crc kubenswrapper[4632]: I0313 12:50:02.644927 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556770-clndp"] Mar 13 12:50:03 crc kubenswrapper[4632]: I0313 12:50:03.617382 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556770-clndp" event={"ID":"40d97b55-9e4c-4d9a-962c-4030dc7dd36b","Type":"ContainerStarted","Data":"17f3bfa72055148fc6070a539aaf1077a5482350320ffbe2a1d181f4064e1eb0"} Mar 13 12:50:05 crc kubenswrapper[4632]: I0313 12:50:05.644976 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556770-clndp" event={"ID":"40d97b55-9e4c-4d9a-962c-4030dc7dd36b","Type":"ContainerStarted","Data":"3a1743912c3055d81c796b10e35eb2de0472d47edbe9b1cc8de66ceb54f1f127"} Mar 13 12:50:05 crc kubenswrapper[4632]: I0313 12:50:05.677810 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556770-clndp" podStartSLOduration=4.535975405 podStartE2EDuration="5.676281276s" podCreationTimestamp="2026-03-13 12:50:00 +0000 UTC" firstStartedPulling="2026-03-13 12:50:02.693812234 +0000 UTC m=+9976.716342367" lastFinishedPulling="2026-03-13 12:50:03.834118105 +0000 UTC m=+9977.856648238" observedRunningTime="2026-03-13 12:50:05.670982216 +0000 UTC m=+9979.693512349" watchObservedRunningTime="2026-03-13 12:50:05.676281276 +0000 UTC m=+9979.698811419" Mar 13 12:50:06 crc kubenswrapper[4632]: I0313 12:50:06.656051 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556770-clndp" event={"ID":"40d97b55-9e4c-4d9a-962c-4030dc7dd36b","Type":"ContainerDied","Data":"3a1743912c3055d81c796b10e35eb2de0472d47edbe9b1cc8de66ceb54f1f127"} Mar 13 12:50:06 crc kubenswrapper[4632]: I0313 12:50:06.658371 4632 generic.go:334] "Generic (PLEG): container finished" podID="40d97b55-9e4c-4d9a-962c-4030dc7dd36b" containerID="3a1743912c3055d81c796b10e35eb2de0472d47edbe9b1cc8de66ceb54f1f127" exitCode=0 Mar 13 12:50:08 crc kubenswrapper[4632]: I0313 12:50:08.076567 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556770-clndp" Mar 13 12:50:08 crc kubenswrapper[4632]: I0313 12:50:08.142201 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xgqb\" (UniqueName: \"kubernetes.io/projected/40d97b55-9e4c-4d9a-962c-4030dc7dd36b-kube-api-access-6xgqb\") pod \"40d97b55-9e4c-4d9a-962c-4030dc7dd36b\" (UID: \"40d97b55-9e4c-4d9a-962c-4030dc7dd36b\") " Mar 13 12:50:08 crc kubenswrapper[4632]: I0313 12:50:08.175118 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d97b55-9e4c-4d9a-962c-4030dc7dd36b-kube-api-access-6xgqb" (OuterVolumeSpecName: "kube-api-access-6xgqb") pod "40d97b55-9e4c-4d9a-962c-4030dc7dd36b" (UID: "40d97b55-9e4c-4d9a-962c-4030dc7dd36b"). InnerVolumeSpecName "kube-api-access-6xgqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:50:08 crc kubenswrapper[4632]: I0313 12:50:08.244828 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xgqb\" (UniqueName: \"kubernetes.io/projected/40d97b55-9e4c-4d9a-962c-4030dc7dd36b-kube-api-access-6xgqb\") on node \"crc\" DevicePath \"\"" Mar 13 12:50:08 crc kubenswrapper[4632]: I0313 12:50:08.679018 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556770-clndp" Mar 13 12:50:08 crc kubenswrapper[4632]: I0313 12:50:08.678932 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556770-clndp" event={"ID":"40d97b55-9e4c-4d9a-962c-4030dc7dd36b","Type":"ContainerDied","Data":"17f3bfa72055148fc6070a539aaf1077a5482350320ffbe2a1d181f4064e1eb0"} Mar 13 12:50:08 crc kubenswrapper[4632]: I0313 12:50:08.680685 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17f3bfa72055148fc6070a539aaf1077a5482350320ffbe2a1d181f4064e1eb0" Mar 13 12:50:09 crc kubenswrapper[4632]: I0313 12:50:09.175927 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556764-9lxh8"] Mar 13 12:50:09 crc kubenswrapper[4632]: I0313 12:50:09.185734 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556764-9lxh8"] Mar 13 12:50:10 crc kubenswrapper[4632]: I0313 12:50:10.059366 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="437f55ff-c573-4944-a680-6ac2d168cb0f" path="/var/lib/kubelet/pods/437f55ff-c573-4944-a680-6ac2d168cb0f/volumes" Mar 13 12:50:16 crc kubenswrapper[4632]: I0313 12:50:16.044220 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:50:16 crc kubenswrapper[4632]: E0313 12:50:16.045660 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:50:27 crc kubenswrapper[4632]: I0313 12:50:27.046570 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:50:27 crc kubenswrapper[4632]: E0313 12:50:27.047872 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:50:30 crc kubenswrapper[4632]: I0313 12:50:30.939549 4632 scope.go:117] "RemoveContainer" containerID="aa02f726269acb1a95d7a68005cbfbbe4f481bb481f4612470d155ee5bde6649" Mar 13 12:50:39 crc kubenswrapper[4632]: I0313 12:50:39.044422 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:50:39 crc kubenswrapper[4632]: E0313 12:50:39.045250 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:50:50 crc kubenswrapper[4632]: I0313 12:50:50.045075 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:50:50 crc kubenswrapper[4632]: E0313 12:50:50.047325 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:51:01 crc kubenswrapper[4632]: I0313 12:51:01.045696 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:51:01 crc kubenswrapper[4632]: E0313 12:51:01.046616 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:51:13 crc kubenswrapper[4632]: I0313 12:51:13.046033 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:51:13 crc kubenswrapper[4632]: E0313 12:51:13.046819 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:51:27 crc kubenswrapper[4632]: I0313 12:51:27.044665 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:51:27 crc kubenswrapper[4632]: E0313 12:51:27.045307 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:51:31 crc kubenswrapper[4632]: I0313 12:51:31.130526 4632 scope.go:117] "RemoveContainer" containerID="f9576f389f75db79c6cd02f685bff29c0e4ed007b62591df661e2a8ee57c8ce2" Mar 13 12:51:41 crc kubenswrapper[4632]: I0313 12:51:41.045732 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:51:41 crc kubenswrapper[4632]: E0313 12:51:41.046650 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:51:56 crc kubenswrapper[4632]: I0313 12:51:56.044870 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:51:56 crc kubenswrapper[4632]: E0313 12:51:56.045927 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:52:00 crc kubenswrapper[4632]: I0313 12:52:00.171600 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556772-dvdjm"] Mar 13 12:52:00 crc kubenswrapper[4632]: E0313 12:52:00.172656 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d97b55-9e4c-4d9a-962c-4030dc7dd36b" containerName="oc" Mar 13 12:52:00 crc kubenswrapper[4632]: I0313 12:52:00.172675 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d97b55-9e4c-4d9a-962c-4030dc7dd36b" containerName="oc" Mar 13 12:52:00 crc kubenswrapper[4632]: I0313 12:52:00.172887 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="40d97b55-9e4c-4d9a-962c-4030dc7dd36b" containerName="oc" Mar 13 12:52:00 crc kubenswrapper[4632]: I0313 12:52:00.173597 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556772-dvdjm" Mar 13 12:52:00 crc kubenswrapper[4632]: I0313 12:52:00.177194 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p" Mar 13 12:52:00 crc kubenswrapper[4632]: I0313 12:52:00.177732 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 12:52:00 crc kubenswrapper[4632]: I0313 12:52:00.178006 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 12:52:00 crc kubenswrapper[4632]: I0313 12:52:00.181602 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556772-dvdjm"] Mar 13 12:52:00 crc kubenswrapper[4632]: I0313 12:52:00.367309 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmdn2\" (UniqueName: \"kubernetes.io/projected/576ef56c-ba4f-4def-89ac-cdef2e378fca-kube-api-access-wmdn2\") pod \"auto-csr-approver-29556772-dvdjm\" (UID: \"576ef56c-ba4f-4def-89ac-cdef2e378fca\") " pod="openshift-infra/auto-csr-approver-29556772-dvdjm" Mar 13 12:52:00 crc kubenswrapper[4632]: I0313 12:52:00.469882 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmdn2\" (UniqueName: \"kubernetes.io/projected/576ef56c-ba4f-4def-89ac-cdef2e378fca-kube-api-access-wmdn2\") pod \"auto-csr-approver-29556772-dvdjm\" (UID: \"576ef56c-ba4f-4def-89ac-cdef2e378fca\") " pod="openshift-infra/auto-csr-approver-29556772-dvdjm" Mar 13 12:52:00 crc kubenswrapper[4632]: I0313 12:52:00.504168 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmdn2\" (UniqueName: \"kubernetes.io/projected/576ef56c-ba4f-4def-89ac-cdef2e378fca-kube-api-access-wmdn2\") pod \"auto-csr-approver-29556772-dvdjm\" (UID: \"576ef56c-ba4f-4def-89ac-cdef2e378fca\") " pod="openshift-infra/auto-csr-approver-29556772-dvdjm" Mar 13 12:52:00 crc kubenswrapper[4632]: I0313 12:52:00.796742 4632 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29556772-dvdjm" Mar 13 12:52:01 crc kubenswrapper[4632]: I0313 12:52:01.307485 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556772-dvdjm"] Mar 13 12:52:01 crc kubenswrapper[4632]: I0313 12:52:01.882703 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556772-dvdjm" event={"ID":"576ef56c-ba4f-4def-89ac-cdef2e378fca","Type":"ContainerStarted","Data":"ac34c41e4a723fd479a80798f16607c4694c8f642f8c95c8503d5d4d0b6c9f9f"} Mar 13 12:52:03 crc kubenswrapper[4632]: I0313 12:52:03.903208 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556772-dvdjm" event={"ID":"576ef56c-ba4f-4def-89ac-cdef2e378fca","Type":"ContainerStarted","Data":"2288f2f8e699c9708c03d351054c645fb4cb8dd9cc206c490b95db693eefe035"} Mar 13 12:52:03 crc kubenswrapper[4632]: I0313 12:52:03.922898 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556772-dvdjm" podStartSLOduration=2.539646749 podStartE2EDuration="3.922875079s" podCreationTimestamp="2026-03-13 12:52:00 +0000 UTC" firstStartedPulling="2026-03-13 12:52:01.303055858 +0000 UTC m=+10095.325585991" lastFinishedPulling="2026-03-13 12:52:02.686284188 +0000 UTC m=+10096.708814321" observedRunningTime="2026-03-13 12:52:03.913623843 +0000 UTC m=+10097.936153986" watchObservedRunningTime="2026-03-13 12:52:03.922875079 +0000 UTC m=+10097.945405212" Mar 13 12:52:04 crc kubenswrapper[4632]: I0313 12:52:04.916186 4632 generic.go:334] "Generic (PLEG): container finished" podID="576ef56c-ba4f-4def-89ac-cdef2e378fca" containerID="2288f2f8e699c9708c03d351054c645fb4cb8dd9cc206c490b95db693eefe035" exitCode=0 Mar 13 12:52:04 crc kubenswrapper[4632]: I0313 12:52:04.916273 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556772-dvdjm" event={"ID":"576ef56c-ba4f-4def-89ac-cdef2e378fca","Type":"ContainerDied","Data":"2288f2f8e699c9708c03d351054c645fb4cb8dd9cc206c490b95db693eefe035"} Mar 13 12:52:06 crc kubenswrapper[4632]: I0313 12:52:06.400798 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556772-dvdjm" Mar 13 12:52:06 crc kubenswrapper[4632]: I0313 12:52:06.415669 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmdn2\" (UniqueName: \"kubernetes.io/projected/576ef56c-ba4f-4def-89ac-cdef2e378fca-kube-api-access-wmdn2\") pod \"576ef56c-ba4f-4def-89ac-cdef2e378fca\" (UID: \"576ef56c-ba4f-4def-89ac-cdef2e378fca\") " Mar 13 12:52:06 crc kubenswrapper[4632]: I0313 12:52:06.423201 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/576ef56c-ba4f-4def-89ac-cdef2e378fca-kube-api-access-wmdn2" (OuterVolumeSpecName: "kube-api-access-wmdn2") pod "576ef56c-ba4f-4def-89ac-cdef2e378fca" (UID: "576ef56c-ba4f-4def-89ac-cdef2e378fca"). InnerVolumeSpecName "kube-api-access-wmdn2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:52:06 crc kubenswrapper[4632]: I0313 12:52:06.518981 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmdn2\" (UniqueName: \"kubernetes.io/projected/576ef56c-ba4f-4def-89ac-cdef2e378fca-kube-api-access-wmdn2\") on node \"crc\" DevicePath \"\"" Mar 13 12:52:06 crc kubenswrapper[4632]: I0313 12:52:06.951842 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556772-dvdjm" event={"ID":"576ef56c-ba4f-4def-89ac-cdef2e378fca","Type":"ContainerDied","Data":"ac34c41e4a723fd479a80798f16607c4694c8f642f8c95c8503d5d4d0b6c9f9f"} Mar 13 12:52:06 crc kubenswrapper[4632]: I0313 12:52:06.951905 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac34c41e4a723fd479a80798f16607c4694c8f642f8c95c8503d5d4d0b6c9f9f" Mar 13 12:52:06 crc kubenswrapper[4632]: I0313 12:52:06.951914 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556772-dvdjm" Mar 13 12:52:07 crc kubenswrapper[4632]: I0313 12:52:07.030007 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556766-lklf4"] Mar 13 12:52:07 crc kubenswrapper[4632]: I0313 12:52:07.051062 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556766-lklf4"] Mar 13 12:52:08 crc kubenswrapper[4632]: I0313 12:52:08.061037 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e38323af-ae58-48ce-979e-c8905218b4fe" path="/var/lib/kubelet/pods/e38323af-ae58-48ce-979e-c8905218b4fe/volumes" Mar 13 12:52:11 crc kubenswrapper[4632]: I0313 12:52:11.044634 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:52:11 crc kubenswrapper[4632]: E0313 12:52:11.045567 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:52:18 crc kubenswrapper[4632]: I0313 12:52:18.109838 4632 generic.go:334] "Generic (PLEG): container finished" podID="252f97d9-adeb-4cce-858d-eb0bdb151871" containerID="ce4c408f0cb872f87b4909caa3af1013273fb7803c492e30e5b30666a913955d" exitCode=0 Mar 13 12:52:18 crc kubenswrapper[4632]: I0313 12:52:18.110425 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jfn52/must-gather-9gqfn" event={"ID":"252f97d9-adeb-4cce-858d-eb0bdb151871","Type":"ContainerDied","Data":"ce4c408f0cb872f87b4909caa3af1013273fb7803c492e30e5b30666a913955d"} Mar 13 12:52:18 crc kubenswrapper[4632]: I0313 12:52:18.111521 4632 scope.go:117] "RemoveContainer" containerID="ce4c408f0cb872f87b4909caa3af1013273fb7803c492e30e5b30666a913955d" Mar 13 12:52:18 crc kubenswrapper[4632]: I0313 12:52:18.520381 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jfn52_must-gather-9gqfn_252f97d9-adeb-4cce-858d-eb0bdb151871/gather/0.log" Mar 13 12:52:24 crc kubenswrapper[4632]: I0313 12:52:24.044926 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:52:24 crc kubenswrapper[4632]: E0313 
12:52:24.046285 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:52:25 crc kubenswrapper[4632]: I0313 12:52:25.814252 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4hj7k"] Mar 13 12:52:25 crc kubenswrapper[4632]: E0313 12:52:25.816482 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576ef56c-ba4f-4def-89ac-cdef2e378fca" containerName="oc" Mar 13 12:52:25 crc kubenswrapper[4632]: I0313 12:52:25.816505 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="576ef56c-ba4f-4def-89ac-cdef2e378fca" containerName="oc" Mar 13 12:52:25 crc kubenswrapper[4632]: I0313 12:52:25.816859 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="576ef56c-ba4f-4def-89ac-cdef2e378fca" containerName="oc" Mar 13 12:52:25 crc kubenswrapper[4632]: I0313 12:52:25.827588 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hj7k" Mar 13 12:52:25 crc kubenswrapper[4632]: I0313 12:52:25.849015 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4hj7k"] Mar 13 12:52:25 crc kubenswrapper[4632]: I0313 12:52:25.963043 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24088399-8751-4389-b28b-1bca8ff6f809-catalog-content\") pod \"community-operators-4hj7k\" (UID: \"24088399-8751-4389-b28b-1bca8ff6f809\") " pod="openshift-marketplace/community-operators-4hj7k" Mar 13 12:52:25 crc kubenswrapper[4632]: I0313 12:52:25.963255 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc94g\" (UniqueName: \"kubernetes.io/projected/24088399-8751-4389-b28b-1bca8ff6f809-kube-api-access-mc94g\") pod \"community-operators-4hj7k\" (UID: \"24088399-8751-4389-b28b-1bca8ff6f809\") " pod="openshift-marketplace/community-operators-4hj7k" Mar 13 12:52:25 crc kubenswrapper[4632]: I0313 12:52:25.963422 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24088399-8751-4389-b28b-1bca8ff6f809-utilities\") pod \"community-operators-4hj7k\" (UID: \"24088399-8751-4389-b28b-1bca8ff6f809\") " pod="openshift-marketplace/community-operators-4hj7k" Mar 13 12:52:26 crc kubenswrapper[4632]: I0313 12:52:26.065795 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24088399-8751-4389-b28b-1bca8ff6f809-utilities\") pod \"community-operators-4hj7k\" (UID: \"24088399-8751-4389-b28b-1bca8ff6f809\") " pod="openshift-marketplace/community-operators-4hj7k" Mar 13 12:52:26 crc kubenswrapper[4632]: I0313 12:52:26.066674 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24088399-8751-4389-b28b-1bca8ff6f809-utilities\") pod \"community-operators-4hj7k\" (UID: \"24088399-8751-4389-b28b-1bca8ff6f809\") " pod="openshift-marketplace/community-operators-4hj7k" 
Mar 13 12:52:26 crc kubenswrapper[4632]: I0313 12:52:26.066874 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24088399-8751-4389-b28b-1bca8ff6f809-catalog-content\") pod \"community-operators-4hj7k\" (UID: \"24088399-8751-4389-b28b-1bca8ff6f809\") " pod="openshift-marketplace/community-operators-4hj7k" Mar 13 12:52:26 crc kubenswrapper[4632]: I0313 12:52:26.067733 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc94g\" (UniqueName: \"kubernetes.io/projected/24088399-8751-4389-b28b-1bca8ff6f809-kube-api-access-mc94g\") pod \"community-operators-4hj7k\" (UID: \"24088399-8751-4389-b28b-1bca8ff6f809\") " pod="openshift-marketplace/community-operators-4hj7k" Mar 13 12:52:26 crc kubenswrapper[4632]: I0313 12:52:26.071314 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24088399-8751-4389-b28b-1bca8ff6f809-catalog-content\") pod \"community-operators-4hj7k\" (UID: \"24088399-8751-4389-b28b-1bca8ff6f809\") " pod="openshift-marketplace/community-operators-4hj7k" Mar 13 12:52:26 crc kubenswrapper[4632]: I0313 12:52:26.457752 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc94g\" (UniqueName: \"kubernetes.io/projected/24088399-8751-4389-b28b-1bca8ff6f809-kube-api-access-mc94g\") pod \"community-operators-4hj7k\" (UID: \"24088399-8751-4389-b28b-1bca8ff6f809\") " pod="openshift-marketplace/community-operators-4hj7k" Mar 13 12:52:26 crc kubenswrapper[4632]: I0313 12:52:26.470281 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hj7k" Mar 13 12:52:27 crc kubenswrapper[4632]: I0313 12:52:27.107425 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4hj7k"] Mar 13 12:52:27 crc kubenswrapper[4632]: I0313 12:52:27.212286 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hj7k" event={"ID":"24088399-8751-4389-b28b-1bca8ff6f809","Type":"ContainerStarted","Data":"c31bc30aa7a32f866165e740bb17b10d2dfb78b39bb7977502e5547cf122fe4f"} Mar 13 12:52:28 crc kubenswrapper[4632]: I0313 12:52:28.222102 4632 generic.go:334] "Generic (PLEG): container finished" podID="24088399-8751-4389-b28b-1bca8ff6f809" containerID="4f8879384b19de20a7ba023ed6242f065059f220338be6e6dcb54651850e6173" exitCode=0 Mar 13 12:52:28 crc kubenswrapper[4632]: I0313 12:52:28.222161 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hj7k" event={"ID":"24088399-8751-4389-b28b-1bca8ff6f809","Type":"ContainerDied","Data":"4f8879384b19de20a7ba023ed6242f065059f220338be6e6dcb54651850e6173"} Mar 13 12:52:29 crc kubenswrapper[4632]: I0313 12:52:29.232759 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hj7k" event={"ID":"24088399-8751-4389-b28b-1bca8ff6f809","Type":"ContainerStarted","Data":"84f0f7be6e5f1526a1ffb017b5e0805dd39faaa07bca4f8b78e79b3871714f7e"} Mar 13 12:52:31 crc kubenswrapper[4632]: I0313 12:52:31.228440 4632 scope.go:117] "RemoveContainer" containerID="e2d392c178854d8d02c1d90a74a70ca0dce9ae28135802be619d355191eb7f40" Mar 13 12:52:32 crc kubenswrapper[4632]: I0313 12:52:32.253723 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jfn52/must-gather-9gqfn"] Mar 13 12:52:32 
crc kubenswrapper[4632]: I0313 12:52:32.253979 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jfn52/must-gather-9gqfn" podUID="252f97d9-adeb-4cce-858d-eb0bdb151871" containerName="copy" containerID="cri-o://8c2cc6936b125830f781b6c13d49ab8294ed023e1773b93feeb9bf3339c7d42f" gracePeriod=2 Mar 13 12:52:32 crc kubenswrapper[4632]: I0313 12:52:32.263499 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jfn52/must-gather-9gqfn"] Mar 13 12:52:32 crc kubenswrapper[4632]: I0313 12:52:32.264890 4632 generic.go:334] "Generic (PLEG): container finished" podID="24088399-8751-4389-b28b-1bca8ff6f809" containerID="84f0f7be6e5f1526a1ffb017b5e0805dd39faaa07bca4f8b78e79b3871714f7e" exitCode=0 Mar 13 12:52:32 crc kubenswrapper[4632]: I0313 12:52:32.264956 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hj7k" event={"ID":"24088399-8751-4389-b28b-1bca8ff6f809","Type":"ContainerDied","Data":"84f0f7be6e5f1526a1ffb017b5e0805dd39faaa07bca4f8b78e79b3871714f7e"} Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.074911 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jfn52_must-gather-9gqfn_252f97d9-adeb-4cce-858d-eb0bdb151871/copy/0.log" Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.076567 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jfn52/must-gather-9gqfn" Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.232471 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwxgx\" (UniqueName: \"kubernetes.io/projected/252f97d9-adeb-4cce-858d-eb0bdb151871-kube-api-access-kwxgx\") pod \"252f97d9-adeb-4cce-858d-eb0bdb151871\" (UID: \"252f97d9-adeb-4cce-858d-eb0bdb151871\") " Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.232803 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/252f97d9-adeb-4cce-858d-eb0bdb151871-must-gather-output\") pod \"252f97d9-adeb-4cce-858d-eb0bdb151871\" (UID: \"252f97d9-adeb-4cce-858d-eb0bdb151871\") " Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.264041 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/252f97d9-adeb-4cce-858d-eb0bdb151871-kube-api-access-kwxgx" (OuterVolumeSpecName: "kube-api-access-kwxgx") pod "252f97d9-adeb-4cce-858d-eb0bdb151871" (UID: "252f97d9-adeb-4cce-858d-eb0bdb151871"). InnerVolumeSpecName "kube-api-access-kwxgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.282883 4632 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jfn52_must-gather-9gqfn_252f97d9-adeb-4cce-858d-eb0bdb151871/copy/0.log" Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.283420 4632 generic.go:334] "Generic (PLEG): container finished" podID="252f97d9-adeb-4cce-858d-eb0bdb151871" containerID="8c2cc6936b125830f781b6c13d49ab8294ed023e1773b93feeb9bf3339c7d42f" exitCode=143 Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.283497 4632 scope.go:117] "RemoveContainer" containerID="8c2cc6936b125830f781b6c13d49ab8294ed023e1773b93feeb9bf3339c7d42f" Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.284185 4632 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jfn52/must-gather-9gqfn" Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.293248 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hj7k" event={"ID":"24088399-8751-4389-b28b-1bca8ff6f809","Type":"ContainerStarted","Data":"4e0eda93026c95782b0f1087fe1c83f62b5c89bdde15c230bf8e0fb7a66c1f63"} Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.323283 4632 scope.go:117] "RemoveContainer" containerID="ce4c408f0cb872f87b4909caa3af1013273fb7803c492e30e5b30666a913955d" Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.329711 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4hj7k" podStartSLOduration=3.7149638659999997 podStartE2EDuration="8.32969037s" podCreationTimestamp="2026-03-13 12:52:25 +0000 UTC" firstStartedPulling="2026-03-13 12:52:28.223983826 +0000 UTC m=+10122.246513959" lastFinishedPulling="2026-03-13 12:52:32.83871033 +0000 UTC m=+10126.861240463" observedRunningTime="2026-03-13 12:52:33.324060631 +0000 UTC m=+10127.346590764" watchObservedRunningTime="2026-03-13 12:52:33.32969037 +0000 UTC m=+10127.352220503" Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.338529 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwxgx\" (UniqueName: \"kubernetes.io/projected/252f97d9-adeb-4cce-858d-eb0bdb151871-kube-api-access-kwxgx\") on node \"crc\" DevicePath \"\"" Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.368565 4632 scope.go:117] "RemoveContainer" containerID="8c2cc6936b125830f781b6c13d49ab8294ed023e1773b93feeb9bf3339c7d42f" Mar 13 12:52:33 crc kubenswrapper[4632]: E0313 12:52:33.375816 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c2cc6936b125830f781b6c13d49ab8294ed023e1773b93feeb9bf3339c7d42f\": container with ID starting with 8c2cc6936b125830f781b6c13d49ab8294ed023e1773b93feeb9bf3339c7d42f not found: ID does not exist" containerID="8c2cc6936b125830f781b6c13d49ab8294ed023e1773b93feeb9bf3339c7d42f" Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.375883 4632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2cc6936b125830f781b6c13d49ab8294ed023e1773b93feeb9bf3339c7d42f"} err="failed to get container status \"8c2cc6936b125830f781b6c13d49ab8294ed023e1773b93feeb9bf3339c7d42f\": rpc error: code = NotFound desc = could not find container \"8c2cc6936b125830f781b6c13d49ab8294ed023e1773b93feeb9bf3339c7d42f\": container with ID starting with 8c2cc6936b125830f781b6c13d49ab8294ed023e1773b93feeb9bf3339c7d42f not found: ID does not exist" Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.375921 4632 scope.go:117] "RemoveContainer" containerID="ce4c408f0cb872f87b4909caa3af1013273fb7803c492e30e5b30666a913955d" Mar 13 12:52:33 crc kubenswrapper[4632]: E0313 12:52:33.376684 4632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce4c408f0cb872f87b4909caa3af1013273fb7803c492e30e5b30666a913955d\": container with ID starting with ce4c408f0cb872f87b4909caa3af1013273fb7803c492e30e5b30666a913955d not found: ID does not exist" containerID="ce4c408f0cb872f87b4909caa3af1013273fb7803c492e30e5b30666a913955d" Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.376720 4632 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ce4c408f0cb872f87b4909caa3af1013273fb7803c492e30e5b30666a913955d"} err="failed to get container status \"ce4c408f0cb872f87b4909caa3af1013273fb7803c492e30e5b30666a913955d\": rpc error: code = NotFound desc = could not find container \"ce4c408f0cb872f87b4909caa3af1013273fb7803c492e30e5b30666a913955d\": container with ID starting with ce4c408f0cb872f87b4909caa3af1013273fb7803c492e30e5b30666a913955d not found: ID does not exist" Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.384740 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/252f97d9-adeb-4cce-858d-eb0bdb151871-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "252f97d9-adeb-4cce-858d-eb0bdb151871" (UID: "252f97d9-adeb-4cce-858d-eb0bdb151871"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:52:33 crc kubenswrapper[4632]: I0313 12:52:33.441017 4632 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/252f97d9-adeb-4cce-858d-eb0bdb151871-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 13 12:52:34 crc kubenswrapper[4632]: I0313 12:52:34.055776 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="252f97d9-adeb-4cce-858d-eb0bdb151871" path="/var/lib/kubelet/pods/252f97d9-adeb-4cce-858d-eb0bdb151871/volumes" Mar 13 12:52:36 crc kubenswrapper[4632]: I0313 12:52:36.470447 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4hj7k" Mar 13 12:52:36 crc kubenswrapper[4632]: I0313 12:52:36.470979 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4hj7k" Mar 13 12:52:37 crc kubenswrapper[4632]: I0313 12:52:37.044415 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:52:37 crc kubenswrapper[4632]: E0313 12:52:37.044922 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4" Mar 13 12:52:37 crc kubenswrapper[4632]: I0313 12:52:37.527737 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4hj7k" podUID="24088399-8751-4389-b28b-1bca8ff6f809" containerName="registry-server" probeResult="failure" output=< Mar 13 12:52:37 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:52:37 crc kubenswrapper[4632]: > Mar 13 12:52:47 crc kubenswrapper[4632]: I0313 12:52:47.519981 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4hj7k" podUID="24088399-8751-4389-b28b-1bca8ff6f809" containerName="registry-server" probeResult="failure" output=< Mar 13 12:52:47 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s Mar 13 12:52:47 crc kubenswrapper[4632]: > Mar 13 12:52:50 crc kubenswrapper[4632]: I0313 12:52:50.044831 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086" Mar 13 12:52:50 crc kubenswrapper[4632]: 
E0313 12:52:50.045576 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 12:52:56 crc kubenswrapper[4632]: I0313 12:52:56.525956 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4hj7k"
Mar 13 12:52:56 crc kubenswrapper[4632]: I0313 12:52:56.597331 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4hj7k"
Mar 13 12:52:56 crc kubenswrapper[4632]: I0313 12:52:56.778778 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4hj7k"]
Mar 13 12:52:58 crc kubenswrapper[4632]: I0313 12:52:58.549161 4632 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4hj7k" podUID="24088399-8751-4389-b28b-1bca8ff6f809" containerName="registry-server" containerID="cri-o://4e0eda93026c95782b0f1087fe1c83f62b5c89bdde15c230bf8e0fb7a66c1f63" gracePeriod=2
Mar 13 12:52:59 crc kubenswrapper[4632]: I0313 12:52:59.594671 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hj7k" event={"ID":"24088399-8751-4389-b28b-1bca8ff6f809","Type":"ContainerDied","Data":"4e0eda93026c95782b0f1087fe1c83f62b5c89bdde15c230bf8e0fb7a66c1f63"}
Mar 13 12:52:59 crc kubenswrapper[4632]: I0313 12:52:59.597761 4632 generic.go:334] "Generic (PLEG): container finished" podID="24088399-8751-4389-b28b-1bca8ff6f809" containerID="4e0eda93026c95782b0f1087fe1c83f62b5c89bdde15c230bf8e0fb7a66c1f63" exitCode=0
Mar 13 12:52:59 crc kubenswrapper[4632]: I0313 12:52:59.597824 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hj7k" event={"ID":"24088399-8751-4389-b28b-1bca8ff6f809","Type":"ContainerDied","Data":"c31bc30aa7a32f866165e740bb17b10d2dfb78b39bb7977502e5547cf122fe4f"}
Mar 13 12:52:59 crc kubenswrapper[4632]: I0313 12:52:59.597851 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c31bc30aa7a32f866165e740bb17b10d2dfb78b39bb7977502e5547cf122fe4f"
Mar 13 12:52:59 crc kubenswrapper[4632]: I0313 12:52:59.599520 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hj7k"
Mar 13 12:52:59 crc kubenswrapper[4632]: I0313 12:52:59.734709 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24088399-8751-4389-b28b-1bca8ff6f809-catalog-content\") pod \"24088399-8751-4389-b28b-1bca8ff6f809\" (UID: \"24088399-8751-4389-b28b-1bca8ff6f809\") "
Mar 13 12:52:59 crc kubenswrapper[4632]: I0313 12:52:59.734859 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc94g\" (UniqueName: \"kubernetes.io/projected/24088399-8751-4389-b28b-1bca8ff6f809-kube-api-access-mc94g\") pod \"24088399-8751-4389-b28b-1bca8ff6f809\" (UID: \"24088399-8751-4389-b28b-1bca8ff6f809\") "
Mar 13 12:52:59 crc kubenswrapper[4632]: I0313 12:52:59.734933 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24088399-8751-4389-b28b-1bca8ff6f809-utilities\") pod \"24088399-8751-4389-b28b-1bca8ff6f809\" (UID: \"24088399-8751-4389-b28b-1bca8ff6f809\") "
Mar 13 12:52:59 crc kubenswrapper[4632]: I0313 12:52:59.735537 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24088399-8751-4389-b28b-1bca8ff6f809-utilities" (OuterVolumeSpecName: "utilities") pod "24088399-8751-4389-b28b-1bca8ff6f809" (UID: "24088399-8751-4389-b28b-1bca8ff6f809"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 12:52:59 crc kubenswrapper[4632]: I0313 12:52:59.750521 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24088399-8751-4389-b28b-1bca8ff6f809-kube-api-access-mc94g" (OuterVolumeSpecName: "kube-api-access-mc94g") pod "24088399-8751-4389-b28b-1bca8ff6f809" (UID: "24088399-8751-4389-b28b-1bca8ff6f809"). InnerVolumeSpecName "kube-api-access-mc94g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:52:59 crc kubenswrapper[4632]: I0313 12:52:59.804998 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24088399-8751-4389-b28b-1bca8ff6f809-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24088399-8751-4389-b28b-1bca8ff6f809" (UID: "24088399-8751-4389-b28b-1bca8ff6f809"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 12:52:59 crc kubenswrapper[4632]: I0313 12:52:59.840208 4632 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24088399-8751-4389-b28b-1bca8ff6f809-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 12:52:59 crc kubenswrapper[4632]: I0313 12:52:59.840546 4632 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24088399-8751-4389-b28b-1bca8ff6f809-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 13 12:52:59 crc kubenswrapper[4632]: I0313 12:52:59.840638 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc94g\" (UniqueName: \"kubernetes.io/projected/24088399-8751-4389-b28b-1bca8ff6f809-kube-api-access-mc94g\") on node \"crc\" DevicePath \"\""
Mar 13 12:53:00 crc kubenswrapper[4632]: I0313 12:53:00.606653 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hj7k"
Mar 13 12:53:00 crc kubenswrapper[4632]: I0313 12:53:00.634560 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4hj7k"]
Mar 13 12:53:00 crc kubenswrapper[4632]: I0313 12:53:00.642894 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4hj7k"]
Mar 13 12:53:02 crc kubenswrapper[4632]: I0313 12:53:02.058968 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24088399-8751-4389-b28b-1bca8ff6f809" path="/var/lib/kubelet/pods/24088399-8751-4389-b28b-1bca8ff6f809/volumes"
Mar 13 12:53:05 crc kubenswrapper[4632]: I0313 12:53:05.045442 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086"
Mar 13 12:53:05 crc kubenswrapper[4632]: E0313 12:53:05.046779 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 12:53:17 crc kubenswrapper[4632]: I0313 12:53:17.044160 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086"
Mar 13 12:53:17 crc kubenswrapper[4632]: E0313 12:53:17.044793 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 12:53:31 crc kubenswrapper[4632]: I0313 12:53:31.045214 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086"
Mar 13 12:53:31 crc kubenswrapper[4632]: E0313 12:53:31.046602 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 12:53:31 crc kubenswrapper[4632]: I0313 12:53:31.308091 4632 scope.go:117] "RemoveContainer" containerID="4814d54398dddc7491e0a4c9f868011d9742ed90e2d05a245b23e35c28be791e"
Mar 13 12:53:31 crc kubenswrapper[4632]: I0313 12:53:31.342130 4632 scope.go:117] "RemoveContainer" containerID="537f35c912c9c8057ecc3ec80663f0a0d3c386360eb374b59f3e50a3f8bd59ee"
Mar 13 12:53:31 crc kubenswrapper[4632]: I0313 12:53:31.378101 4632 scope.go:117] "RemoveContainer" containerID="219da7f0798a57241a68dc972a8e6cf63665a59509f96ff776be2e82e493c3c5"
Mar 13 12:53:46 crc kubenswrapper[4632]: I0313 12:53:46.045599 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086"
Mar 13 12:53:46 crc kubenswrapper[4632]: E0313 12:53:46.046661 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 12:53:57 crc kubenswrapper[4632]: I0313 12:53:57.044345 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086"
Mar 13 12:53:57 crc kubenswrapper[4632]: E0313 12:53:57.046655 4632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zkscb_openshift-machine-config-operator(d77b18a7-7ad9-4bf5-bff5-da45878af7f4)\"" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" podUID="d77b18a7-7ad9-4bf5-bff5-da45878af7f4"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.285257 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29556774-hwghb"]
Mar 13 12:54:00 crc kubenswrapper[4632]: E0313 12:54:00.289616 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24088399-8751-4389-b28b-1bca8ff6f809" containerName="extract-utilities"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.289645 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="24088399-8751-4389-b28b-1bca8ff6f809" containerName="extract-utilities"
Mar 13 12:54:00 crc kubenswrapper[4632]: E0313 12:54:00.289679 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24088399-8751-4389-b28b-1bca8ff6f809" containerName="extract-content"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.289687 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="24088399-8751-4389-b28b-1bca8ff6f809" containerName="extract-content"
Mar 13 12:54:00 crc kubenswrapper[4632]: E0313 12:54:00.289701 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="252f97d9-adeb-4cce-858d-eb0bdb151871" containerName="gather"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.289707 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="252f97d9-adeb-4cce-858d-eb0bdb151871" containerName="gather"
Mar 13 12:54:00 crc kubenswrapper[4632]: E0313 12:54:00.289730 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="252f97d9-adeb-4cce-858d-eb0bdb151871" containerName="copy"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.289736 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="252f97d9-adeb-4cce-858d-eb0bdb151871" containerName="copy"
Mar 13 12:54:00 crc kubenswrapper[4632]: E0313 12:54:00.289757 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24088399-8751-4389-b28b-1bca8ff6f809" containerName="registry-server"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.289763 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="24088399-8751-4389-b28b-1bca8ff6f809" containerName="registry-server"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.290712 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="252f97d9-adeb-4cce-858d-eb0bdb151871" containerName="gather"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.290743 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="252f97d9-adeb-4cce-858d-eb0bdb151871" containerName="copy"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.290756 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="24088399-8751-4389-b28b-1bca8ff6f809" containerName="registry-server"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.297493 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556774-hwghb"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.308883 4632 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-kdp2p"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.308885 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.308890 4632 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.355184 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556774-hwghb"]
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.390101 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6hnl\" (UniqueName: \"kubernetes.io/projected/f3f96182-2be7-4262-9ba2-7b363f07fd2d-kube-api-access-r6hnl\") pod \"auto-csr-approver-29556774-hwghb\" (UID: \"f3f96182-2be7-4262-9ba2-7b363f07fd2d\") " pod="openshift-infra/auto-csr-approver-29556774-hwghb"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.492549 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6hnl\" (UniqueName: \"kubernetes.io/projected/f3f96182-2be7-4262-9ba2-7b363f07fd2d-kube-api-access-r6hnl\") pod \"auto-csr-approver-29556774-hwghb\" (UID: \"f3f96182-2be7-4262-9ba2-7b363f07fd2d\") " pod="openshift-infra/auto-csr-approver-29556774-hwghb"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.532335 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6hnl\" (UniqueName: \"kubernetes.io/projected/f3f96182-2be7-4262-9ba2-7b363f07fd2d-kube-api-access-r6hnl\") pod \"auto-csr-approver-29556774-hwghb\" (UID: \"f3f96182-2be7-4262-9ba2-7b363f07fd2d\") " pod="openshift-infra/auto-csr-approver-29556774-hwghb"
Mar 13 12:54:00 crc kubenswrapper[4632]: I0313 12:54:00.622512 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556774-hwghb"
Mar 13 12:54:01 crc kubenswrapper[4632]: I0313 12:54:01.050680 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29556774-hwghb"]
Mar 13 12:54:01 crc kubenswrapper[4632]: W0313 12:54:01.066803 4632 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3f96182_2be7_4262_9ba2_7b363f07fd2d.slice/crio-26cf7dbf74e22752caadec7e37b845a467437edabf9d757be9c601c40aef85e4 WatchSource:0}: Error finding container 26cf7dbf74e22752caadec7e37b845a467437edabf9d757be9c601c40aef85e4: Status 404 returned error can't find the container with id 26cf7dbf74e22752caadec7e37b845a467437edabf9d757be9c601c40aef85e4
Mar 13 12:54:01 crc kubenswrapper[4632]: I0313 12:54:01.088117 4632 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 12:54:01 crc kubenswrapper[4632]: I0313 12:54:01.176151 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556774-hwghb" event={"ID":"f3f96182-2be7-4262-9ba2-7b363f07fd2d","Type":"ContainerStarted","Data":"26cf7dbf74e22752caadec7e37b845a467437edabf9d757be9c601c40aef85e4"}
Mar 13 12:54:04 crc kubenswrapper[4632]: I0313 12:54:04.204422 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556774-hwghb" event={"ID":"f3f96182-2be7-4262-9ba2-7b363f07fd2d","Type":"ContainerStarted","Data":"d2d7c4023cf6c21c65c9b9134fc1ea2d1154414b6f5373f5fd1987f228f3c9f6"}
Mar 13 12:54:04 crc kubenswrapper[4632]: I0313 12:54:04.235566 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29556774-hwghb" podStartSLOduration=2.9074145639999998 podStartE2EDuration="4.234407034s" podCreationTimestamp="2026-03-13 12:54:00 +0000 UTC" firstStartedPulling="2026-03-13 12:54:01.083038656 +0000 UTC m=+10215.105568789" lastFinishedPulling="2026-03-13 12:54:02.410031126 +0000 UTC m=+10216.432561259" observedRunningTime="2026-03-13 12:54:04.219758194 +0000 UTC m=+10218.242288357" watchObservedRunningTime="2026-03-13 12:54:04.234407034 +0000 UTC m=+10218.256937177"
Mar 13 12:54:05 crc kubenswrapper[4632]: I0313 12:54:05.216636 4632 generic.go:334] "Generic (PLEG): container finished" podID="f3f96182-2be7-4262-9ba2-7b363f07fd2d" containerID="d2d7c4023cf6c21c65c9b9134fc1ea2d1154414b6f5373f5fd1987f228f3c9f6" exitCode=0
Mar 13 12:54:05 crc kubenswrapper[4632]: I0313 12:54:05.216681 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556774-hwghb" event={"ID":"f3f96182-2be7-4262-9ba2-7b363f07fd2d","Type":"ContainerDied","Data":"d2d7c4023cf6c21c65c9b9134fc1ea2d1154414b6f5373f5fd1987f228f3c9f6"}
Mar 13 12:54:06 crc kubenswrapper[4632]: I0313 12:54:06.602418 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556774-hwghb"
Mar 13 12:54:06 crc kubenswrapper[4632]: I0313 12:54:06.718373 4632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6hnl\" (UniqueName: \"kubernetes.io/projected/f3f96182-2be7-4262-9ba2-7b363f07fd2d-kube-api-access-r6hnl\") pod \"f3f96182-2be7-4262-9ba2-7b363f07fd2d\" (UID: \"f3f96182-2be7-4262-9ba2-7b363f07fd2d\") "
Mar 13 12:54:06 crc kubenswrapper[4632]: I0313 12:54:06.732591 4632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3f96182-2be7-4262-9ba2-7b363f07fd2d-kube-api-access-r6hnl" (OuterVolumeSpecName: "kube-api-access-r6hnl") pod "f3f96182-2be7-4262-9ba2-7b363f07fd2d" (UID: "f3f96182-2be7-4262-9ba2-7b363f07fd2d"). InnerVolumeSpecName "kube-api-access-r6hnl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:54:06 crc kubenswrapper[4632]: I0313 12:54:06.820628 4632 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6hnl\" (UniqueName: \"kubernetes.io/projected/f3f96182-2be7-4262-9ba2-7b363f07fd2d-kube-api-access-r6hnl\") on node \"crc\" DevicePath \"\""
Mar 13 12:54:07 crc kubenswrapper[4632]: I0313 12:54:07.239969 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29556774-hwghb" event={"ID":"f3f96182-2be7-4262-9ba2-7b363f07fd2d","Type":"ContainerDied","Data":"26cf7dbf74e22752caadec7e37b845a467437edabf9d757be9c601c40aef85e4"}
Mar 13 12:54:07 crc kubenswrapper[4632]: I0313 12:54:07.240034 4632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26cf7dbf74e22752caadec7e37b845a467437edabf9d757be9c601c40aef85e4"
Mar 13 12:54:07 crc kubenswrapper[4632]: I0313 12:54:07.240057 4632 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29556774-hwghb"
Mar 13 12:54:07 crc kubenswrapper[4632]: I0313 12:54:07.343178 4632 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29556768-qx4vs"]
Mar 13 12:54:07 crc kubenswrapper[4632]: I0313 12:54:07.356417 4632 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29556768-qx4vs"]
Mar 13 12:54:08 crc kubenswrapper[4632]: I0313 12:54:08.070959 4632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0a936c-9a75-4f0b-81b1-fb7f74d9911f" path="/var/lib/kubelet/pods/7c0a936c-9a75-4f0b-81b1-fb7f74d9911f/volumes"
Mar 13 12:54:12 crc kubenswrapper[4632]: I0313 12:54:12.044983 4632 scope.go:117] "RemoveContainer" containerID="f99b8bf20e90f85804d1361e895aa3d02f6ce45057066d1347ea4edee62e1086"
Mar 13 12:54:13 crc kubenswrapper[4632]: I0313 12:54:13.306362 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zkscb" event={"ID":"d77b18a7-7ad9-4bf5-bff5-da45878af7f4","Type":"ContainerStarted","Data":"07bc2cd5f1b1c505fce3a8916e93f01bdf10c685e4d30c4c6edb1e67e635fe5f"}
Mar 13 12:54:31 crc kubenswrapper[4632]: I0313 12:54:31.482849 4632 scope.go:117] "RemoveContainer" containerID="fbdf7412c66e2fa539b75629b05076618e8fad2c845d05a467fd575d619baa55"
Mar 13 12:55:10 crc kubenswrapper[4632]: I0313 12:55:10.144298 4632 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8wm2q"]
Mar 13 12:55:10 crc kubenswrapper[4632]: E0313 12:55:10.149672 4632 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f96182-2be7-4262-9ba2-7b363f07fd2d" containerName="oc"
Mar 13 12:55:10 crc kubenswrapper[4632]: I0313 12:55:10.149825 4632 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f96182-2be7-4262-9ba2-7b363f07fd2d" containerName="oc"
Mar 13 12:55:10 crc kubenswrapper[4632]: I0313 12:55:10.150674 4632 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3f96182-2be7-4262-9ba2-7b363f07fd2d" containerName="oc"
Mar 13 12:55:10 crc kubenswrapper[4632]: I0313 12:55:10.158457 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8wm2q"
Mar 13 12:55:10 crc kubenswrapper[4632]: I0313 12:55:10.165596 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8wm2q"]
Mar 13 12:55:10 crc kubenswrapper[4632]: I0313 12:55:10.342089 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156b689a-7e5a-4335-b861-6470b6c336e9-catalog-content\") pod \"certified-operators-8wm2q\" (UID: \"156b689a-7e5a-4335-b861-6470b6c336e9\") " pod="openshift-marketplace/certified-operators-8wm2q"
Mar 13 12:55:10 crc kubenswrapper[4632]: I0313 12:55:10.342478 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156b689a-7e5a-4335-b861-6470b6c336e9-utilities\") pod \"certified-operators-8wm2q\" (UID: \"156b689a-7e5a-4335-b861-6470b6c336e9\") " pod="openshift-marketplace/certified-operators-8wm2q"
Mar 13 12:55:10 crc kubenswrapper[4632]: I0313 12:55:10.342552 4632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gclsw\" (UniqueName: \"kubernetes.io/projected/156b689a-7e5a-4335-b861-6470b6c336e9-kube-api-access-gclsw\") pod \"certified-operators-8wm2q\" (UID: \"156b689a-7e5a-4335-b861-6470b6c336e9\") " pod="openshift-marketplace/certified-operators-8wm2q"
Mar 13 12:55:10 crc kubenswrapper[4632]: I0313 12:55:10.444510 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156b689a-7e5a-4335-b861-6470b6c336e9-utilities\") pod \"certified-operators-8wm2q\" (UID: \"156b689a-7e5a-4335-b861-6470b6c336e9\") " pod="openshift-marketplace/certified-operators-8wm2q"
Mar 13 12:55:10 crc kubenswrapper[4632]: I0313 12:55:10.444671 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gclsw\" (UniqueName: \"kubernetes.io/projected/156b689a-7e5a-4335-b861-6470b6c336e9-kube-api-access-gclsw\") pod \"certified-operators-8wm2q\" (UID: \"156b689a-7e5a-4335-b861-6470b6c336e9\") " pod="openshift-marketplace/certified-operators-8wm2q"
Mar 13 12:55:10 crc kubenswrapper[4632]: I0313 12:55:10.444830 4632 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156b689a-7e5a-4335-b861-6470b6c336e9-catalog-content\") pod \"certified-operators-8wm2q\" (UID: \"156b689a-7e5a-4335-b861-6470b6c336e9\") " pod="openshift-marketplace/certified-operators-8wm2q"
Mar 13 12:55:10 crc kubenswrapper[4632]: I0313 12:55:10.445481 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156b689a-7e5a-4335-b861-6470b6c336e9-utilities\") pod \"certified-operators-8wm2q\" (UID: \"156b689a-7e5a-4335-b861-6470b6c336e9\") " pod="openshift-marketplace/certified-operators-8wm2q"
Mar 13 12:55:10 crc kubenswrapper[4632]: I0313 12:55:10.446754 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156b689a-7e5a-4335-b861-6470b6c336e9-catalog-content\") pod \"certified-operators-8wm2q\" (UID: \"156b689a-7e5a-4335-b861-6470b6c336e9\") " pod="openshift-marketplace/certified-operators-8wm2q"
Mar 13 12:55:10 crc kubenswrapper[4632]: I0313 12:55:10.469831 4632 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gclsw\" (UniqueName: \"kubernetes.io/projected/156b689a-7e5a-4335-b861-6470b6c336e9-kube-api-access-gclsw\") pod \"certified-operators-8wm2q\" (UID: \"156b689a-7e5a-4335-b861-6470b6c336e9\") " pod="openshift-marketplace/certified-operators-8wm2q"
Mar 13 12:55:10 crc kubenswrapper[4632]: I0313 12:55:10.479964 4632 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8wm2q"
Mar 13 12:55:11 crc kubenswrapper[4632]: I0313 12:55:11.110864 4632 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8wm2q"]
Mar 13 12:55:11 crc kubenswrapper[4632]: I0313 12:55:11.995743 4632 generic.go:334] "Generic (PLEG): container finished" podID="156b689a-7e5a-4335-b861-6470b6c336e9" containerID="8dab11ebf37d779f2da3a122c2b995fd25ff3a689b642c34db1453ee35e1b169" exitCode=0
Mar 13 12:55:11 crc kubenswrapper[4632]: I0313 12:55:11.997235 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wm2q" event={"ID":"156b689a-7e5a-4335-b861-6470b6c336e9","Type":"ContainerDied","Data":"8dab11ebf37d779f2da3a122c2b995fd25ff3a689b642c34db1453ee35e1b169"}
Mar 13 12:55:11 crc kubenswrapper[4632]: I0313 12:55:11.997378 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wm2q" event={"ID":"156b689a-7e5a-4335-b861-6470b6c336e9","Type":"ContainerStarted","Data":"feb59d2fc60689fa5a353a08e74284afe95caab4744d06f1c02c1c8737813065"}
Mar 13 12:55:13 crc kubenswrapper[4632]: I0313 12:55:13.016205 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wm2q" event={"ID":"156b689a-7e5a-4335-b861-6470b6c336e9","Type":"ContainerStarted","Data":"d7b53ae310ac7f474cee531840648431c705e87927ace402b17a25d79a19a361"}
Mar 13 12:55:15 crc kubenswrapper[4632]: I0313 12:55:15.034917 4632 generic.go:334] "Generic (PLEG): container finished" podID="156b689a-7e5a-4335-b861-6470b6c336e9" containerID="d7b53ae310ac7f474cee531840648431c705e87927ace402b17a25d79a19a361" exitCode=0
Mar 13 12:55:15 crc kubenswrapper[4632]: I0313 12:55:15.034979 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wm2q" event={"ID":"156b689a-7e5a-4335-b861-6470b6c336e9","Type":"ContainerDied","Data":"d7b53ae310ac7f474cee531840648431c705e87927ace402b17a25d79a19a361"}
Mar 13 12:55:17 crc kubenswrapper[4632]: I0313 12:55:17.069817 4632 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wm2q" event={"ID":"156b689a-7e5a-4335-b861-6470b6c336e9","Type":"ContainerStarted","Data":"a56230e1aca4eab0b09b3cd9ce5c2f2212d66ef93f13ffa3c09b64aa2806a748"}
Mar 13 12:55:17 crc kubenswrapper[4632]: I0313 12:55:17.093815 4632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8wm2q" podStartSLOduration=3.532117583 podStartE2EDuration="7.09374426s" podCreationTimestamp="2026-03-13 12:55:10 +0000 UTC" firstStartedPulling="2026-03-13 12:55:12.001057944 +0000 UTC m=+10286.023588077" lastFinishedPulling="2026-03-13 12:55:15.562684581 +0000 UTC m=+10289.585214754" observedRunningTime="2026-03-13 12:55:17.088371538 +0000 UTC m=+10291.110901681" watchObservedRunningTime="2026-03-13 12:55:17.09374426 +0000 UTC m=+10291.116274393"
Mar 13 12:55:20 crc kubenswrapper[4632]: I0313 12:55:20.480781 4632 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8wm2q"
Mar 13 12:55:20 crc kubenswrapper[4632]: I0313 12:55:20.481331 4632 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8wm2q"
Mar 13 12:55:21 crc kubenswrapper[4632]: I0313 12:55:21.588000 4632 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8wm2q" podUID="156b689a-7e5a-4335-b861-6470b6c336e9" containerName="registry-server" probeResult="failure" output=<
Mar 13 12:55:21 crc kubenswrapper[4632]: timeout: failed to connect service ":50051" within 1s
Mar 13 12:55:21 crc kubenswrapper[4632]: >